Dataset schema: Id (string, 1–6 chars) · PostTypeId (string, 7 classes) · AcceptedAnswerId (string, 1–6 chars, nullable) · ParentId (string, 1–6 chars, nullable) · Score (string, 1–4 chars) · ViewCount (string, 1–7 chars, nullable) · Body (string, 0–38.7k chars) · Title (string, 15–150 chars, nullable) · ContentLicense (string, 3 classes) · FavoriteCount (string, 3 classes) · CreationDate (string, 23 chars) · LastActivityDate (string, 23 chars) · LastEditDate (string, 23 chars, nullable) · LastEditorUserId (string, 1–6 chars, nullable) · OwnerUserId (string, 1–6 chars, nullable) · Tags (list)
3060
2
null
2686
1
null
Buzzwords: interpolation, resampling, smoothing.

Your problem is similar to one encountered frequently in demography: people might have census counts broken down into age intervals, for example, and such intervals are not always of constant width. You want to interpolate the distribution by age. What this shares with your problem, aside from the variable width (= variable time intervals), is that the data tend to be non-negative. In addition, many such datasets can have noise, but it has a particular form of negative correlation: a count that appears in one bin will not appear in neighboring bins, but might have been assigned to the wrong bin. For example, older people may tend to round their ages to the nearest five years. They are not overlooked but they might contribute to the wrong age group. By and large, though, the data are complete and reliable.

In terms of this analogy we're talking about a full census; in your datasets you have actual electric bills, actual enrollments, and so on. So it's just a question of apportioning the data reasonably to a set of intervals useful for further analysis (such as equally spaced times for time series analysis): that's where interpolation and resampling are involved.

There are many interpolation techniques. The commonest in demography were developed for simple calculation and are based on polynomial splines. Many share a trick worth knowing, regardless of how you plan to process your data: don't attempt to interpolate the raw data; instead, interpolate their cumulative sum. The latter will be monotonically increasing due to the non-negativity of the original values, and therefore will tend to be relatively smooth. This is why polynomial splines can work at all. Another advantage of this approach is that although the fit may deviate from the data points (slightly, one hopes), overall it correctly reproduces the totals, so that nothing is net lost or gained. Of course, after fitting the cumulative values (as a function of time or age), you take first differences to estimate totals within any bin you like.

The simplest example of this approach is a linear spline: just connect successive points on the plot of cumulative $x$ vs. cumulative $t$ by line segments. Estimate the counts in any time interval $[t_0, t_1]$ by reading off the values $x_0$ and $x_1$ of the splined curve at $t_0$ and $t_1$ respectively and using $x_1 - x_0$. Better splines (cubic in some areas; quintic in many demographic apps) sometimes improve the estimates. This is equivalent to your intuition of weighting the data and gives it a nice graphical interpretation.
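To make the cumulative-sum trick concrete, here is a minimal R sketch (my addition, not part of the original answer); the interval endpoints and values are invented, and base R's `approx()` stands in for the fancier splines mentioned above:

```
## Hypothetical data: non-negative totals observed over intervals of varying width
ends  <- c(2, 5, 6, 10, 15)        # right endpoints of the intervals (time)
vals  <- c(4, 9, 2, 12, 7)         # total within each interval

## Interpolate the cumulative sum, not the raw values
times <- c(0, ends)
cum   <- c(0, cumsum(vals))        # cumulative total at time 0 and at each endpoint

## Linear spline of the cumulative curve, resampled on a regular grid
grid  <- 0:15
fit   <- approx(times, cum, xout = grid)$y

## First differences recover estimated totals per unit-width bin;
## the grand total is preserved
est   <- diff(fit)
all.equal(sum(est), sum(vals))     # TRUE
```

A monotone cubic alternative to the linear spline is `splinefun(times, cum, method = "hyman")`.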
null
CC BY-SA 2.5
null
2010-09-24T17:57:25.143
2010-09-24T17:57:25.143
null
null
919
null
3061
1
null
null
7
2155
Does Lin's [concordance correlation coefficient](http://en.wikipedia.org/wiki/Concordance_correlation_coefficient) assume that the 2 data sets have linear or monotonic tendencies? Or, can I measure the concordance between 2 data sets that have a sinusoidal tendency?
Does the concordance correlation coefficient make linearity or monotone assumptions?
CC BY-SA 3.0
null
2010-09-24T18:40:39.993
2017-06-01T11:19:26.733
2017-06-01T11:19:26.733
11887
1417
[ "correlation" ]
3062
1
null
null
1
456
Is there a statistical procedure that can measure the concordance between two sets of data that have, say, a sinusoidal tendency?
How do we find concordance between two sets of non-linear, non-monotonic data
CC BY-SA 2.5
null
2010-09-24T19:09:28.577
2010-09-24T19:21:21.743
null
null
1417
[ "correlation" ]
3063
2
null
3062
2
null
Short answer (the question is rather imprecise): say $y_{1t}$ and $y_{2t}$ are your two sets, and define $z_t=y_{1t}-y_{2t}$. It then amounts to checking whether $z_t$ is bounded (in your example it will be, if you sample $y_{1}$ and $y_{2}$ over a long enough amount of time).
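A minimal R sketch of that check (my addition, not from the original answer; the sinusoidal series are invented for illustration):

```
t  <- seq(0, 50, by = 0.1)
y1 <- sin(t) + rnorm(length(t), sd = 0.1)        # hypothetical first series
y2 <- sin(t + 0.2) + rnorm(length(t), sd = 0.1)  # hypothetical concordant second series

z  <- y1 - y2                 # difference series
range(z)                      # stays within a narrow band if the two sets are concordant
plot(t, z, type = "l")        # visual check for boundedness / drift
```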
null
CC BY-SA 2.5
null
2010-09-24T19:21:21.743
2010-09-24T19:21:21.743
null
null
603
null
3064
1
3349
null
10
462
I have the test results of a blood test administered to 2500 people four times at six-month intervals. The results primarily consist of two measures of immune response - one in the presence of certain tuberculosis antigens, one in the absence. Currently, each test evaluates to either positive or negative based on the difference between the antigen response and the nil response (with the idea being that if your immune system responds to TB antigens, you've likely been exposed to the bacterium itself at some point). In essence, the test supposes that a non-exposed individual's distributions of nil and TB responses should be basically identical, whereas a person with TB exposure will have TB responses drawn from a different distribution (of higher values). Caveat: the responses are very, very non-normal, and values clump at both the natural floor and the instrument-truncated ceiling.

However, it seems pretty clear in this longitudinal setting that we're getting "false positives" (no actual gold standard for latent tuberculosis, I fear) that are caused by (typically small) fluctuations in the antigen and nil responses. While this might be hard to avoid in some situations (you may only get one chance to test someone), there are many situations in which people are routinely tested for TB every year or so - in the US, this is common for healthcare workers, the military, homeless people staying at shelters, and so on. It seems a shame to ignore prior test results because the extant criteria happen to be cross-sectional.

I think that what I'd like to do is what I crudely conceive of as longitudinal mixture analysis. Much like the cross-sectional criteria, I'd like to be able to estimate the probability that an individual's TB and nil responses are drawn from the same distribution - but have that estimate incorporate prior test results, as well as information from the sample as a whole (e.g., can I use the sample-wide distribution of within-individual variabilities to improve my estimates of a specific individual's distribution of nil or TB?). The estimated probability would need to be able to change over time, of course, to account for the possibility of new infection.

I've gotten myself totally twisted around trying to think about this in unusual ways, but I feel like this conceptualization is as good as any I'm going to come up with. If something doesn't make sense, please feel free to ask for clarification. If my understanding of the situation seems wrong, please feel free to tell me. Thank you so much for your help.

In response to Srikant: It's a case of latent classification (TB-infected or not) using the two continuous (but non-normal and truncated) test results. Right now, that classification is done using a cutoff (in its simplified form, TB - nil > .35 -> positive). With test results presented as (nil, TB, result), the basic archetypes* are:

- Probable Negative: (0.06, 0.15, -) (0.24, 0.23, -) (0.09, 0.11, -) (0.16, 0.15, -)
- Probable Positive: (0.05, 3.75, +) (0.05, 1.56, +) (0.06, 5.02, +) (0.08, 4.43, +)
- Wobbler: (0.05, 0.29, -) (0.09, 0.68, +) (0.08, 0.31, -) (0.07, 0.28, -)

The positive on the second test for the Wobbler is pretty clearly an aberration, but how would you model that?
While one line of my thinking is to estimate the "true difference" between TB and nil at each time point using a repeated-measures multilevel model, it occurred to me that what I really want to know is if the person's nil response and TB response are drawn from the same distribution, or if their immune system recognizes the TB antigens and activates, producing an increased response. As for what could cause a positive test other than infection: I'm not sure. I suspect it's typically just within-person variation in results, but there's certainly a possibility of other factors. We do have questionnaires from each time point, but I haven't looked into those too much yet. *Fabricated but illustrative data
Longitudinal comparison of two distributions
CC BY-SA 2.5
null
2010-09-24T19:54:13.507
2010-10-06T23:46:47.417
2010-09-29T20:51:01.840
71
71
[ "repeated-measures" ]
3065
2
null
3052
5
null
You could use some sort of Monte Carlo approach, using for instance the moving average of your data.

Take a moving average of the data, using a window of a reasonable size (I guess it's up to you to decide how wide). Troughs in your data will (of course) be characterized by a lower average, so now you need to find some "threshold" to define "low".

To do that you randomly swap the values of your data (e.g. using `sample()`) and recalculate the moving average for your swapped data. Repeat this last step a reasonably large number of times (>5000) and store all the averages of these trials. So essentially you will have a matrix with 5000 lines, one per trial, each one containing the moving average for that trial.

At this point, for each column you pick the 5% (or 1% or whatever you want) quantile, that is the value under which lies only 5% of the means of the randomized data. You now have a "confidence limit" (I'm not sure if that is the correct statistical term) to compare your original data with. If you find a part of your data that is lower than this limit then you can call that a trough.

Of course, bear in mind that neither this nor any other mathematical method could ever give you any indication of biological significance, although I'm sure you're well aware of that.

EDIT - an example

```
require(ares)  # for the ma (moving average) function

# Some data with peaks and troughs
values <- cos(0.12 * 1:100) + 0.3 * rnorm(100)
plot(values, t="l")

# Calculate the moving average with a window of 10 points
mov.avg <- ma(values, 1, 10, FALSE)

numSwaps <- 1000
mov.avg.swp <- matrix(0, nrow=numSwaps, ncol=length(mov.avg))

# The swapping may take a while, so we display a progress bar
prog <- txtProgressBar(0, numSwaps, style=3)

for (i in 1:numSwaps) {
  # Swap the data
  val.swp <- sample(values)
  # Calculate the moving average
  mov.avg.swp[i,] <- ma(val.swp, 1, 10, FALSE)
  setTxtProgressBar(prog, i)
}

# Now find the 1% and 5% quantiles for each column
limits.1 <- apply(mov.avg.swp, 2, quantile, 0.01, na.rm=T)
limits.5 <- apply(mov.avg.swp, 2, quantile, 0.05, na.rm=T)

# Plot the limits
points(limits.5, t="l", col="orange", lwd=2)
points(limits.1, t="l", col="red", lwd=2)
```

This will just allow you to graphically find the regions, but you can easily find them using something along the lines of `which(values < limits.5)`.
null
CC BY-SA 2.5
null
2010-09-24T20:12:41.937
2010-09-24T21:37:23.963
2010-09-24T21:37:23.963
582
582
null
3066
2
null
3048
11
null
Let's assume mat_pages[] contains pages in the columns (which you want to cluster) and individuals in the rows. You can cluster pages based on individual data in R by using the following command:

```
pc <- prcomp(x=mat_pages,center=TRUE,scale=TRUE)
```

The loadings matrix is the matrix of eigenvectors of the SVD decomposition of the data. They give the relative weight of each PAGE in the calculation of scores. Loadings with larger absolute values have more influence in determining the score of the corresponding principal component.

However, I should also point out the shortcoming of using PCA to cluster pages. The reason for this is that loadings give larger weights to the PAGES with higher variation, regardless of whether this variation is actually because of the PAGE content or some other reason (maybe technical or individual variation). The loadings do not necessarily reflect the true differences between groups, which is (maybe) your main interest. BUT, this clustering truly reflects the differences in the group under the assumption that all the pages have the same variance (I don't know if this is a valid assumption).

If you have powerful computing facilities (which may be possible given your data size) - using hierarchical models may be a good idea. In R, it can be done using the lme4 package.

---

What to do after you have the scores?

This is a crude suggestion and the analysis depends greatly on what the data look like. Also, I would guess this process would be highly infeasible for grouping data of the magnitude that you have.

```
pc.col <- paste("page", 1:27000, sep=".")

pdf("principle-components.pdf")
plot(pc$x[,1:2]) ## Just look at the first two principal components (as you can see the groupings in a plane)
dev.off()
```

Hopefully, this can give you a picture of how the data are grouped.

Warning: this is not what I would recommend.

---

My recommendation:

Problems like these arise frequently in genomics. In your case pages correspond to genes and individuals correspond to patients (basically individuals has the same meaning as in genomics). You want to cluster the pages based on data.

You can use a lot of clustering packages in R, and these have been pointed out in other answers. A fundamental problem with packages like hclust is how to determine the number of clusters. A few of my favorite ones are:

- pvclust (gives you clusters and also a p-value for each cluster; using the p-value you can determine the statistically significant clusters. Problem: requires a lot of computational power and I am not sure if it will work with data of your size)
- hopach (gives you the estimated number of clusters, and the clusters)
- there are other packages available in Bioconductor, please check them out in the task view.

You can also use clustering algorithms like k-means etc. I am sure I saw a thread in this forum about clustering. The answers were very detailed. It was asked by Tal Galili if I remember correctly.
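Since the answer recommends pvclust without showing its use, here is a rough sketch (my addition, not the answerer's code); the object name `mat_pages` comes from the answer, the parameter choices are arbitrary, and a small `nboot` is used only to keep the run cheap:

```
library(pvclust)  # assumed to be installed from CRAN

# pvclust clusters the COLUMNS of the matrix, which here are the pages
set.seed(1)
pv <- pvclust(mat_pages,
              method.dist   = "correlation",  # distance between pages
              method.hclust = "average",      # linkage used by hclust
              nboot         = 100)            # bootstrap replicates (use more in practice)

plot(pv)                  # dendrogram annotated with approximately unbiased p-values
pvrect(pv, alpha = 0.95)  # highlight clusters supported at the 95% level
```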
null
CC BY-SA 2.5
null
2010-09-25T00:23:28.127
2010-09-28T16:05:34.673
2010-09-28T16:05:34.673
1307
1307
null
3067
2
null
175
7
null
There are two statistical distance measures that specifically cater to detecting outliers and then considering whether such outliers should be removed from your linear regression.

The first one is Cook's distance. You can find a pretty good explanation of it at Wikipedia: [http://en.wikipedia.org/wiki/Cook%27s_distance](http://en.wikipedia.org/wiki/Cook%27s_distance). The higher the Cook's distance, the more influential (in terms of impact on the regression coefficients) the observation is. A typical cut-off point for considering removal of the observation is a Cook's distance of 4/n (where n is the sample size).

The second one is DFFITS, which is also well covered by Wikipedia: [http://en.wikipedia.org/wiki/DFFITS](http://en.wikipedia.org/wiki/DFFITS). A typical cut-off point for considering removal of an observation is a DFFITS value of 2 times sqrt(k/n), where k is the number of variables and n is the sample size.

Both measures usually give you similar results, leading to similar observation selection.
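A small base-R sketch of the two diagnostics and cut-offs described above (my addition; the data are simulated for illustration):

```
set.seed(42)
n <- 100
x <- rnorm(n)
y <- 1 + 2 * x + rnorm(n)
y[5] <- y[5] + 10                 # plant one influential point

fit <- lm(y ~ x)
k   <- 1                          # number of predictor variables, per the rule of thumb above

cd  <- cooks.distance(fit)
dff <- dffits(fit)

which(cd > 4 / n)                   # candidates by the Cook's distance rule
which(abs(dff) > 2 * sqrt(k / n))   # candidates by the DFFITS rule
```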
null
CC BY-SA 2.5
null
2010-09-25T00:38:39.800
2010-09-25T00:38:39.800
null
null
1329
null
3068
2
null
2966
0
null
One could view the recorded times as biased estimates of the runner's latent ability. Many factors would cause the time to be worse than the latent best time, such as a bad start, headwind, a stumble, mis-judgement of pace, etc., while very few would cause the recorded times to be better than the latent best, such as a strong tailwind or running downhill. I am not terribly familiar with regression with biased errors, but apparently one can use the gamma family when performing GLM regression, with the observed times as the dependent variable.
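To make the gamma-GLM suggestion concrete, here is a minimal R sketch (my addition, with invented data; the predictor `training_km` is purely illustrative):

```
set.seed(1)
n           <- 200
training_km <- runif(n, 20, 120)                      # hypothetical weekly training volume
mu          <- exp(5.6 - 0.004 * training_km)         # latent mean finishing time
time        <- rgamma(n, shape = 50, rate = 50 / mu)  # observed times, skewed upward

fit <- glm(time ~ training_km, family = Gamma(link = "log"))
summary(fit)   # coefficients on the log scale of the expected time
```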
null
CC BY-SA 2.5
null
2010-09-25T03:51:59.883
2010-09-25T03:51:59.883
null
null
795
null
3069
1
3070
null
18
13346
Among Matlab and Python, which language is good for general statistical data analysis? What are the pros and cons, other than accessibility, for each?
Among Matlab and Python, which language is good for statistical analysis?
CC BY-SA 3.0
null
2010-09-25T04:09:33.167
2016-07-13T21:41:22.603
2016-07-11T23:31:00.323
28666
null
[ "matlab", "python" ]
3070
2
null
3069
30
null
As a diehard Matlab user for the last 10+ years, I recommend you learn Python. Once you are sufficiently skilled in a language, when you work in a language you are learning, it will seem like you are not being productive enough, and you will fall back to using your default best language. At the very least, I would suggest you try to become equally proficient in a number of languages (I would suggest R as well).

What I like about Matlab:

- I am proficient in it.
- It is the lingua franca among numerical analysts.
- The profiling tool is very good. This is the only reason I use Matlab instead of octave.
- There is a freeware clone, octave, which has good compliance with the reference implementation.

What I do not like about Matlab:

- There is not a good system to manage third party (free or otherwise) packages and scripts. Mathworks controls the 'central file exchange', and installation of add-on packages seems very clunky, nothing like the excellent system that R has. Furthermore, Mathworks has no incentive to improve this situation, because they make money on selling toolboxes, which compete with freeware packages.
- Licenses for parallel computation in Matlab are insanely expensive.
- Much of the m-code, including many of the toolbox functions, and some builtins, was designed to be obviously correct, at the expense of efficiency and/or usability. The most glaring example of this is Matlab's median function, which performs a sort of the data, then takes the middle value. This has been the wrong algorithm since the 70's.
- Saving graphs to file is dodgy at best in Matlab.
- I have not found my user experience to have improved over the last 5 years (when I started using Matlab instead of octave), even though Mathworks continues to add bells and whistles. This indicates that I am not their target customer; rather they are looking to expand market share by making things worse for power users.
- There are now 2 ways to do object-oriented programming in Matlab, which is confusing at best. Legacy code using the old style will persist for some time.
- The Matlab UI is written in Java, which has unpleasant ideas about memory management.
null
CC BY-SA 2.5
null
2010-09-25T04:53:48.313
2010-09-25T04:53:48.313
null
null
795
null
3071
2
null
3069
12
null
Let's break it down into three areas (off the top of my head) where programming meets statistics: data crunching, numerical routines (optimization and such), and statistical libraries (modeling, etc.).

On the first, the biggest difference is that Python is a general-purpose programming language. Matlab is great as long as your world is roughly isomorphic to a fortran numeric array. Once you start dealing with data munging and related issues, Python outshines Matlab. For example, see Greg Wilson's book: [Data Crunching: Solve Everyday Problems Using Java, Python, and more](http://rads.stackoverflow.com/amzn/click/0974514071).

On the second, Matlab really does shine with numeric work. A lot of the research community uses it, and if you're looking for, say, some algorithm related to a paper in compressed sensing, you're far more likely to find an implementation in Matlab. On the other hand, Matlab is kind of the PHP of scientific computing -- it strives to have a function for everything under the sun. The resulting aesthetics and architecture are maddening if you're a programming language geek, but in utilitarian terms, it gets the job done. A lot of this has become less relevant with the rise of Numpy/Scipy; you're just as likely to find optimization and machine learning libraries available for Python. Interfacing with C is about as easy in either language.

On the availability of statistical libraries for modeling and such, both are somewhat lacking when compared to something like R. (Though I suspect both will meet the needs of 80% of people doing statistical work.) For the Python side of things see this question: [Python as a statistics workbench](https://stats.stackexchange.com/questions/1595/python-as-a-statistics-workbench). For the Matlab side, I know there's a statistics toolbox, but I'll let someone more knowledgeable fill in the blanks (my experience with Matlab is limited to numerical work unrelated to statistics).
null
CC BY-SA 2.5
null
2010-09-25T05:25:21.730
2010-09-25T05:25:21.730
2017-04-13T12:44:21.160
-1
251
null
3072
2
null
3051
3
null
I can do this easily in Matlab and duck while you downvote me:

```
%given vector x, windowsize, slide
idx1 = 1:slide:numel(x);
idx2 = min(numel(x) + 1,idx1 + windowsize);  %sic on +1 here and no -1;
cx = [0;cumsum(x(:))];                       %pad out a zero, perform a cumulative sum;
rv = (cx(idx2) - cx(idx1)) / windowsize;     %tada! the answer!
```

As a side effect, `idx1` is the index of the element in the sum. I am sure this can be easily translated into R. The idiom `first:skip:last` in Matlab gives the array first, first+skip, first+2skip, ..., first + n skip, where the last element in the array is no greater than `last`.

edit: I had omitted the averaging part (divide by `windowsize`).
null
CC BY-SA 2.5
null
2010-09-25T05:51:58.807
2010-09-26T16:00:42.393
2010-09-26T16:00:42.393
795
795
null
3073
2
null
3051
3
null
[shabbychef's answer](https://stats.stackexchange.com/questions/3051/mean-of-a-sliding-window-in-r/3072#3072) in R:

```
slideMean <- function(x, windowsize=3, slide=2) {
  idx1 <- seq(1, length(x), by=slide)
  idx2 <- idx1 + windowsize
  idx2[idx2 > (length(x) + 1)] <- length(x) + 1
  cx <- c(0, cumsum(x))
  return((cx[idx2] - cx[idx1]) / windowsize)
}
```

EDIT: Indices you're looking for are just `idx1`... this function can be easily modified to return them also, but it is almost equally fast to recreate them with another call to `seq(1,length(x),by=slide)`.
null
CC BY-SA 2.5
null
2010-09-25T08:31:02.847
2010-09-25T08:39:03.437
2017-04-13T12:44:33.237
-1
null
null
3075
2
null
3064
3
null
Tricky, Matt, as many real-world stats problems are! I would start by defining your study aims/objectives.

Without knowing the true status of the subjects it will be hard to define the probability distributions for the TB+ and TB- test. Do you have questionnaires regarding previous TB infection (or better, medical histories)? Also, I still test TB+ due to an immunisation in childhood - several decades ago - so previous immunisations need to be considered.

It seems to me your intrinsic question is: Does repeated TB testing affect test outcome? It would be worth getting a copy of [Peter Diggle's Analysis of Longitudinal Data](http://rads.stackoverflow.com/amzn/click/0198524846).

Do some exploratory data analysis, particularly scatter plot matrices of the nil-test results at each time versus each other, and the TB test results at each time versus each other; and the TB vs nil scatter plots (at each time). Also take the differences (TB test - Nil test) and do the scatter plot matrices. Try transformations of the data and redo these - I imagine log(TB) - log(Nil) may help if the TB results are very large relative to Nil. Look for linear relations in the correlation structure.

Another approach would be to take the defined test result (positive/negative) and model this longitudinally using a non-linear mixed effects model (logit link). Do some individuals flip between testing TB+ and TB-, and is this related to their Nil test, TB test, TB - Nil or some transformation of the test results?
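As a quick illustration of the scatter-plot-matrix EDA suggested above, here is a base-R sketch (my addition; the data frame `tb` and column names such as `nil.t1` are invented placeholders for the four visits):

```
# tb is assumed to be a data frame with one row per person and columns
# nil.t1..nil.t4 and tb.t1..tb.t4 for the four six-monthly visits
pairs(tb[, c("nil.t1", "nil.t2", "nil.t3", "nil.t4")],
      main = "Nil response across visits")
pairs(tb[, c("tb.t1", "tb.t2", "tb.t3", "tb.t4")],
      main = "TB-antigen response across visits")

# Differences (TB - nil) at each visit, on the log scale as suggested
d <- log(tb[, c("tb.t1", "tb.t2", "tb.t3", "tb.t4")]) -
     log(tb[, c("nil.t1", "nil.t2", "nil.t3", "nil.t4")])
pairs(d, main = "log(TB) - log(nil) across visits")
```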
null
CC BY-SA 2.5
null
2010-09-25T09:39:40.313
2010-09-25T09:39:40.313
null
null
521
null
3076
2
null
2910
8
null
This overlaps with Shane's answer, but in my view there are two main pillars:

- Reproducibility: not only so that you don't end up with results that are made "somehow", but also so you can rerun the analysis faster (on other data or with slightly changed parameters) and have more time to think about the results. For huge data, you can first test your ideas on some small "playset" and then easily extend to the whole data.
- Good documentation: commented scripts under version control, some research journal, even a ticket system for more complex projects. This improves reproducibility, makes error tracking easier and makes writing final reports trivial.
null
CC BY-SA 2.5
null
2010-09-25T15:45:57.583
2010-09-25T15:45:57.583
null
null
null
null
3077
2
null
1735
2
null
One answer is to do a seemingly unrelated regression. Suppose that you only have a single predictor plus an intercept. Create a data set (or data matrix) like

```
wo 1 wp 0 0
so 0 0  1 sp
```

where 'wo' is the outcome in the winter season and 'wp' is the winter predictor/"X" value, and 'so' is the summer outcome value and 'sp' is the summer predictor. The 1s represent the summer and winter intercept terms. Basically, you have two sets of variables: summer variables and winter variables. In the summer, all the winter variables are set to 0, and vice-versa in the winter.

After you run a regression on the full set of summer and winter variables using the data template above, you get a full covariance matrix that can be used to compare the coefficients for the winter months to the summer months using standard regression procedures. This is one way of implementing @kwak's suggestion of having different slope coefficients for each season.
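Here is a rough R sketch of the same idea (my addition, not the answerer's code); the data are simulated, and the equality of the two slopes is checked with a hand-rolled Wald statistic built from the full covariance matrix:

```
set.seed(7)
n      <- 120
season <- factor(rep(c("winter", "summer"), each = n / 2))
x      <- rnorm(n)
y      <- ifelse(season == "winter", 1 + 2.0 * x, 3 + 1.2 * x) + rnorm(n)

# Separate intercept and slope for each season, no common intercept
fit <- lm(y ~ 0 + season + season:x)
b   <- coef(fit)
V   <- vcov(fit)

# Wald test of equal slopes: H0: beta_winter - beta_summer = 0
d  <- unname(b["seasonwinter:x"] - b["seasonsummer:x"])
se <- sqrt(V["seasonwinter:x", "seasonwinter:x"] +
           V["seasonsummer:x", "seasonsummer:x"] -
           2 * V["seasonwinter:x", "seasonsummer:x"])
c(estimate = d, z = d / se)
```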
null
CC BY-SA 2.5
null
2010-09-25T16:17:28.180
2010-09-25T16:17:28.180
null
null
401
null
3078
2
null
1001
7
null
For measuring the bin frequencies of two distributions, a pretty good test is the Chi-square test. It is exactly what it is designed for. And, it is even nonparametric. The distributions don't even have to be normal or symmetric.

It is much better than the Kolmogorov-Smirnov test, which is known to be weak in fitting the tails of the distribution, where the fitting or diagnosing is often the most important.

Spearman's correlation won't be so precise in terms of capturing the similarities of your actual bin frequencies. It will just tell you that your overall ranking of observations for the two distributions is similar. Instead, when calculating the Chi-square test (long hand, so to speak) you will be able to observe readily which bin-frequency differentials are most responsible for driving down the overall p value of the Chi-square test.

Another pretty good test is the Anderson-Darling test. It is one of the best tests to diagnose the fit between two distributions. However, in terms of giving information about the specific bin frequencies, I suspect that the Chi-square test gives you more information.
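To make this concrete, a small R sketch comparing two sets of bin counts with a chi-square test (my addition; the counts are invented):

```
# Hypothetical bin counts for the same bins under two distributions
counts <- rbind(dist1 = c(12, 30, 45, 28, 10),
                dist2 = c(18, 25, 40, 22, 20))

test <- chisq.test(counts)   # test of homogeneity across the two rows
test$p.value

# Standardized residuals show which bins drive the discrepancy
round(test$stdres, 2)
```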
null
CC BY-SA 2.5
null
2010-09-25T16:27:42.863
2010-09-25T16:27:42.863
null
null
1329
null
3079
2
null
3059
5
null
There is a wrinkle that you need to worry about. In the case of matching, you are throwing away observations (i.e., those that aren't matched and don't make it into your analysis) and some might be replicated. These decisions aren't random; they are a function of covariates. As a result, creating confidence intervals in this context is a bit complicated.

To calculate the appropriate standard errors, see [Large Sample Properties of Matching Estimators for Average Treatment Effects](http://ksghome.harvard.edu/~.aabadie.academic.ksg/sme.pdf) by Abadie and Imbens. Additionally, Abadie also has a paper, [On the Failure of the Bootstrap for Matching Estimators](http://ideas.repec.org/p/nbr/nberte/0325.html). To implement Abadie-Imbens standard errors, see the Matching package in R by Jas Sekhon.

On the question of which estimator to believe: it depends on how well you think the matching is controlling for confounders; if you think it is, I'd be inclined to believe that approach. It seems that your first set of analyses doesn't control for those factors in any way? If you think that they are important, then you probably wouldn't be inclined to believe those results.
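For concreteness, a rough sketch of the Matching package mentioned above (my addition; the variable names `y`, `treat`, and `X` are placeholders, and the options shown are only one reasonable choice - consult the package documentation for the details of the Abadie-Imbens standard errors it reports):

```
library(Matching)  # Sekhon's package, assumed installed from CRAN

# y: outcome vector, treat: 0/1 treatment indicator, X: matrix of covariates
m <- Match(Y = y, Tr = treat, X = X, M = 1, estimand = "ATT")

# summary() reports the matching estimate with its Abadie-Imbens standard error
summary(m)
```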
null
CC BY-SA 2.5
null
2010-09-25T16:30:42.857
2010-09-25T16:30:42.857
null
null
401
null
3080
1
3099
null
11
778
You often come across in the press various studies that reach directionally opposite results. Those can be related to the testing of a new prescription drug, or the merit of a specific nutrient, or anything else for that matter. When two such studies arrive at conflicting results, how can you tell which one of the two is closer to the truth?
How to detect which one is the better study when they give you conflicting results?
CC BY-SA 2.5
null
2010-09-26T04:25:32.470
2010-09-27T03:26:45.223
2010-09-27T00:44:35.023
183
1329
[ "hypothesis-testing", "clinical-trials" ]
3081
2
null
3080
8
null
The [meta analysis](http://jeromyanglim.blogspot.com/2009/12/meta-analysis-tips-resources-and.html) literature is relevant to your question. Using meta-analytic techniques you could generate an estimate of the effect of interest pooled across studies. Such techniques often weight studies in terms of their sample size.

Within the meta analysis context researchers talk about fixed effect and random effect models (see [Hunter and Schmidt, 2002](http://onlinelibrary.wiley.com/doi/10.1111/1468-2389.00156/abstract)). A fixed effect model assumes that all studies are estimating the same population effect. A random-effects model assumes that studies differ in the population effect that is being estimated. A random-effects model is typically more appropriate.

As more studies accumulate looking at a particular relationship, more sophisticated approaches become possible. For example, you can code studies in terms of various properties, such as perceived quality, and then examine empirically whether the effect size varies with these study characteristics. Beyond quality there may be some theoretically relevant differences between the studies which would moderate the relationship (e.g., characteristics of the sample, dosage levels, etc.).

In general, I tend to trust studies with:

- bigger sample sizes
- greater methodological rigour
- a confirmatory orientation (e.g., not a study where they tested for correlations between 100 different nutrients and 50 health outcomes)
- absence of conflict of interest (e.g., not by a company with a commercial interest in showing a relationship; not by a researcher who has an incentive to find a significant result)

But that said, you need to keep random sampling variation and theoretically meaningful differences between studies in mind as plausible explanations of conflicting study findings.
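The answer does not name a particular tool; as one possible illustration (my addition), here is a sketch of random-effects pooling with the metafor package, using invented effect sizes and sampling variances for a handful of studies:

```
library(metafor)  # assumed installed from CRAN

# Hypothetical per-study effect estimates (yi) and their sampling variances (vi)
yi <- c(0.30, 0.10, -0.05, 0.45, 0.20)
vi <- c(0.020, 0.015, 0.030, 0.010, 0.025)

# Random-effects model: studies may estimate different population effects
res <- rma(yi = yi, vi = vi, method = "REML")
summary(res)   # pooled estimate, heterogeneity (tau^2), etc.
forest(res)    # forest plot of the individual and pooled effects
```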
null
CC BY-SA 2.5
null
2010-09-26T06:18:11.867
2010-09-26T11:09:08.063
2010-09-26T11:09:08.063
183
183
null
3082
1
null
null
24
5585
This is a follow-up to a Stackoverflow [question about shuffling an array randomly](https://stackoverflow.com/questions/3796786/random-number-generator-without-dupes-in-javascript/3796821#3796821). There are established algorithms (such as the [Knuth-Fisher-Yates Shuffle](http://en.wikipedia.org/wiki/Fisher%E2%80%93Yates_shuffle)) that one should use to shuffle an array, rather than relying on "naive" ad-hoc implementations. I am now interested in proving (or disproving) that my naive algorithm is broken (as in: does not generate all possible permutations with equal probability). Here is the algorithm: > Loop a couple of times (length of array should do), and in every iteration, get two random array indexes and swap the two elements there. Obviously, this needs more random numbers than KFY (twice as many), but aside from that does it work properly? And what would be the appropriate number of iterations (is "length of array" enough)?
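For reference, here is a direct R transcription of the naive algorithm described above (my addition; the question itself gives no code):

```
naive_shuffle <- function(a, iterations = length(a)) {
  n <- length(a)
  for (k in seq_len(iterations)) {
    i <- sample(n, 1)   # first random index
    j <- sample(n, 1)   # second random index (may equal i)
    tmp  <- a[i]
    a[i] <- a[j]
    a[j] <- tmp
  }
  a
}

naive_shuffle(1:10)
```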
What is wrong with this "naive" shuffling algorithm?
CC BY-SA 2.5
null
2010-09-26T07:37:14.583
2018-09-11T01:54:32.910
2017-05-23T12:39:26.167
-1
1421
[ "combinatorics", "randomness" ]
3083
2
null
3082
2
null
Bear in mind I am not a statistician, but I'll put in my 2 cents.

I made a little test in R (careful, it's very slow for high `numTrials`, the code can probably be optimized):

```
numElements <- 1000
numTrials <- 5000

swapVec <- function() {
  vec.swp <- vec
  for (i in 1:numElements) {
    # Pick two random indexes (single draws) and swap the elements there
    i <- sample(1:numElements, 1)
    j <- sample(1:numElements, 1)

    tmp <- vec.swp[i]
    vec.swp[i] <- vec.swp[j]
    vec.swp[j] <- tmp
  }
  return (vec.swp)
}

# Create a normally distributed array of numElements length
vec <- rnorm(numElements)

# Do several "swapping trials" so we can make some stats on them
swaps <- vec
prog <- txtProgressBar(0, numTrials, style=3)
for (t in 1:numTrials) {
  swaps <- rbind(swaps, swapVec())
  setTxtProgressBar(prog, t)
}
```

This will generate a matrix `swaps` with `numTrials+1` rows (one per trial + the original) and `numElements` columns (one per each vector element). If the method is correct the distribution of each column (i.e. of the values for each element over the trials) should not be different from the distribution of the original data.

Because our original data was normally distributed we would expect all the columns not to deviate from that.

If we run

```
par(mfrow= c(2,2))

# Our original data
hist(swaps[1,], 100, col="black", freq=FALSE, main="Original")

# Three "randomly" chosen columns
hist(swaps[,1],   100, col="black", freq=FALSE, main="Trial # 1")
hist(swaps[,257], 100, col="black", freq=FALSE, main="Trial # 257")
hist(swaps[,844], 100, col="black", freq=FALSE, main="Trial # 844")
```

We get:

![Histograms of random trials](https://i.stack.imgur.com/QZJVZ.jpg)

which looks very promising. Now, if we want to statistically confirm the distributions do not deviate from the original I think we could use a Kolmogorov-Smirnov test (please can some statistician confirm this is right?) and do, for instance

```
ks.test(swaps[1, ], swaps[, 234])
```

Which gives us p=0.9926

If we check all of the columns:

```
# KS p-value of each column against the original data
p.values <- apply(swaps, 2, function(col) {ks.test(swaps[1, ], col)$p.value})
```

And we run

```
hist(p.values, 100, col="black")
```

we get:

![Histogram of Kolmogorov-Smirnov test p values](https://i.stack.imgur.com/Bcno2.jpg)

So, for the great majority of the elements of the array, your swap method has given a good result, as you can also see looking at the quartiles.

```
1> quantile(p.values)
       0%       25%       50%       75%      100%
0.6819832 0.9963731 0.9999188 0.9999996 1.0000000
```

Note that, obviously, with a lower number of trials the situation is not as good:

50 trials

```
1> quantile(p.values)
          0%          25%          50%          75%         100%
0.0003399635 0.2920976389 0.5583204486 0.8103852744 0.9999165730
```

100 trials

```
         0%         25%         50%         75%        100%
0.001434198 0.327553996 0.596603804 0.828037097 0.999999591
```

500 trials

```
         0%         25%         50%         75%        100%
0.007834701 0.504698404 0.764231550 0.934223503 0.999995887
```
null
CC BY-SA 2.5
null
2010-09-26T09:41:18.333
2010-09-26T21:29:29.830
2010-09-26T21:29:29.830
582
582
null
3084
2
null
1875
3
null
When the forecast says "X percent chance of rain in (area)", it means that the numerical weather model has indicated rain in X percent of the area, for the time interval in question. For example, it would normally be accurate to predict "100 percent chance of rain in North America". Bear in mind that the models are good at predicting dynamics and poor at predicting thermodynamics.
null
CC BY-SA 2.5
null
2010-09-26T10:48:15.957
2010-09-26T10:48:15.957
null
null
1054
null
3085
2
null
3082
8
null
I think your simple algorithm will shuffle the cards correctly as the number of shuffles tends to infinity.

Suppose you have three cards: {A,B,C}. Assume that your cards begin in the following order: A,B,C. Then after one shuffle you have the following combinations:

```
{A,B,C}, {A,B,C}, {A,B,C}  # You get this if you choose the same RN twice.
{A,C,B}, {A,C,B}
{C,B,A}, {C,B,A}
{B,A,C}, {B,A,C}
```

Hence, the probability of card A being in position {1,2,3} is {5/9, 2/9, 2/9}.

If we shuffle the cards a second time, then:

```
Pr(A in position 1 after 2 shuffles) = 5/9 * Pr(A in position 1 after 1 shuffle)
                                     + 2/9 * Pr(A in position 2 after 1 shuffle)
                                     + 2/9 * Pr(A in position 3 after 1 shuffle)
```

This gives 0.407.

Using the same idea, we can form a recurrence relationship, i.e.:

```
Pr(A in position 1 after n shuffles) = 5/9 * Pr(A in position 1 after (n-1) shuffles)
                                     + 2/9 * Pr(A in position 2 after (n-1) shuffles)
                                     + 2/9 * Pr(A in position 3 after (n-1) shuffles)
```

Coding this up in R (see code below) gives the probability of card A being in position {1,2,3} as {0.33334, 0.33333, 0.33333} after ten shuffles.

R code

```
## m is the probability matrix of card position
## Row is position
## Col is card A, B, C
m = matrix(0, nrow=3, ncol=3)
m[1,1] = 1; m[2,2] = 1; m[3,3] = 1

## Transition matrix
m_trans = matrix(2/9, nrow=3, ncol=3)
m_trans[1,1] = 5/9; m_trans[2,2] = 5/9; m_trans[3,3] = 5/9

for(i in 1:10) {
  old_m = m

  m[1,1] = sum(m_trans[,1]*old_m[,1])
  m[2,1] = sum(m_trans[,2]*old_m[,1])
  m[3,1] = sum(m_trans[,3]*old_m[,1])

  m[1,2] = sum(m_trans[,1]*old_m[,2])
  m[2,2] = sum(m_trans[,2]*old_m[,2])
  m[3,2] = sum(m_trans[,3]*old_m[,2])

  m[1,3] = sum(m_trans[,1]*old_m[,3])
  m[2,3] = sum(m_trans[,2]*old_m[,3])
  m[3,3] = sum(m_trans[,3]*old_m[,3])
}
m
```
null
CC BY-SA 2.5
null
2010-09-26T13:04:04.267
2010-09-26T17:28:00.303
2010-09-26T17:28:00.303
8
8
null
3086
1
3135
null
9
12714
Currently I'm analysing around 300 items in the field of education. I'm interested in the dimensionality of the dataset, so I compute a matrix of tetrachoric correlations. The goal is to do a factor analysis on this matrix. Out of curiosity I compared it to a matrix of Pearson correlations, and the results are different. When I compute differences between the matrices I see slight differences: a non-null mean, with min and max ranging from $-0.3$ to $0.3$. Any idea? Usually you read that, for dichotomous items, the values are close.

Further, I have missing values because, due to very hard and very easy items, some modalities are never observed together. In this case do you have any good proposition (and reasons) to handle this problem? Suppressing these items, imputation, ...?
Differences between tetrachoric and Pearson correlation
CC BY-SA 2.5
null
2010-09-26T13:40:48.663
2012-06-27T19:27:33.020
2010-09-27T10:27:38.730
183
1154
[ "correlation", "factor-analysis", "psychometrics", "psychology" ]
3087
2
null
3082
12
null
It is broken, although if you perform enough shuffles it can be an excellent approximation (as the previous answers have indicated).

Just to get a handle on what's going on, consider how often your algorithm will generate shuffles of a $k$ element array in which the first element is fixed, $k \ge 2$. When permutations are generated with equal probability, this should happen $1/k$ of the time. Let $p_n$ be the relative frequency of this occurrence after $n$ shuffles with your algorithm. Let's be generous, too, and suppose you are actually selecting distinct pairs of indexes uniformly at random for your shuffles, so that each pair is selected with probability $1/{k \choose 2} = 2/\left( k (k-1) \right)$. (This means there are no "trivial" shuffles wasted. On the other hand, it totally breaks your algorithm for a two-element array, because you alternate between fixing the two elements and swapping them, so if you stop after a predetermined number of steps, there is no randomness to the outcome whatsoever!)

This frequency satisfies a simple recurrence, because the first element is found in its original place after $n+1$ shuffles in two disjoint ways. One is that it was fixed after $n$ shuffles and the next shuffle does not move the first element. The other is that it was moved after $n$ shuffles but the $n+1^{st}$ shuffle moves it back. The chance of not moving the first element equals ${k-1 \choose 2}/{k \choose 2} = (k-2)/k$, whereas the chance of moving the first element back equals $1/{k \choose 2} = 2/\left( k (k-1) \right)$. Whence:

$$p_0 = 1$$

because the first element starts out in its rightful place;

$$p_{n+1} = \frac{k-2}{k} p_n + \frac{2}{k(k-1)} \left( 1 - p_n \right).$$

The solution is

$$p_n = 1/k + \left( \frac{k-3}{k-1} \right)^n \frac{k-1}{k}.$$

Subtracting $1/k$, we see that the frequency is wrong by $\left( \frac{k-3}{k-1} \right)^n \frac{k-1}{k}$. For large $k$ and $n$, a good approximation is $\frac{k-1}{k} \exp\left(-\frac{2n}{k-1}\right)$. This shows that the error in this particular frequency will decrease exponentially with the number of swaps relative to the size of the array ($n/k$), indicating it will be difficult to detect with large arrays if you have made a relatively large number of swaps--but the error is always there.

It is difficult to provide a comprehensive analysis of the errors in all frequencies. It's likely they will behave like this one, though, which shows that at a minimum you would need $n$ (the number of swaps) to be large enough to make the error acceptably small. An approximate solution is

$$n \gt \frac{1}{2} \left(1 - (k-1) \log(\epsilon) \right)$$

where $\epsilon$ should be very small compared to $1/k$. This implies $n$ should be several times $k$ for even crude approximations (i.e., where $\epsilon$ is on the order of $0.01$ times $1/k$ or so.)

All this begs the question: why would you choose to use an algorithm that is not quite (but only approximately) correct, employs exactly the same techniques as another algorithm that is provably correct, and yet which requires more computation?

# Edit

Thilo's comment is apt (and I was hoping nobody would point this out, so I could be spared this extra work!). Let me explain the logic.

- If you make sure to generate actual swaps each time, you're utterly screwed. The problem I pointed out for the case $k=2$ extends to all arrays. Only half of all the possible permutations can be obtained by applying an even number of swaps; the other half is obtained by applying an odd number of swaps. Thus, in this situation, you can never generate anywhere near a uniform distribution of permutations (but there are so many possible ones that a simulation study for any sizable $k$ will be unable to detect the problem). That's really bad.
- Therefore it is wise to generate swaps at random by generating the two positions independently at random. This means there is a $1/k$ chance each time of swapping an element with itself; that is, of doing nothing. This process effectively slows down the algorithm a little bit: after $n$ steps, we expect only about $\frac{k-1}{k} n \lt n$ true swaps to have occurred.
- Notice that the size of the error decreases monotonically with the number of distinct swaps. Therefore, conducting fewer swaps on average also increases the error, on average. But this is a price you should be willing to pay in order to overcome the problem described in the first bullet. Consequently, my error estimate is conservatively low, approximately by a factor of $(k-1)/k$.

I also wanted to point out an interesting apparent exception: a close look at the error formula suggests that there is no error in the case $k=3$. This is not a mistake: it is correct. However, here I have examined only one statistic related to the uniform distribution of permutations. The fact that the algorithm can reproduce this one statistic when $k=3$ (namely, getting the right frequency of permutations that fix any given position) does not guarantee the permutations have indeed been distributed uniformly. Indeed, after $2n$ actual swaps, the only possible permutations that can be generated are $(123)$, $(321)$, and the identity. Only the latter fixes any given position, so indeed exactly one-third of the permutations fix a position. But half the permutations are missing! In the other case, after $2n+1$ actual swaps, the only possible permutations are $(12)$, $(23)$, and $(13)$. Again, exactly one of these will fix any given position, so again we obtain the correct frequency of permutations fixing that position, but again we obtain only half of the possible permutations.

This little example helps reveal the main strands of the argument: by being "generous" we conservatively underestimate the error rate for one particular statistic. Because that error rate is nonzero for all $k \ge 4$, we see that the algorithm is broken. Furthermore, by analyzing the decay in the error rate for this statistic we establish a lower bound on the number of iterations of the algorithm needed to have any hope at all of approximating a uniform distribution of permutations.
null
CC BY-SA 2.5
null
2010-09-26T16:23:19.830
2010-09-27T15:21:15.717
2020-06-11T14:32:37.003
-1
919
null
3088
2
null
3080
5
null
I would hold off on considering meta-analysis until you've scrutinized sources of potential bias or variation in the target populations. If these are studies of treatment effects: was treatment randomly assigned? Were there deviations from the protocol? Was there noncompliance? Is there missing outcome data? Were the samples drawn from the same frame? Was there refusal to participate? Implementation errors? Were standard errors computed correctly, accounting for clustering and robust to various parametric assumptions?

Only after you have answered these questions do I think meta-analysis issues start to enter the picture. It must be rare that for any two studies meta-analysis is appropriate, unless you are willing to make some heroic assumptions.
null
CC BY-SA 2.5
null
2010-09-26T16:26:44.410
2010-09-26T16:26:44.410
null
null
1424
null
3089
2
null
3086
7
null
Tetrachoric coefficient and Phi coefficient are indeed different. The tetrachoric coefficient is suitable for the following problem:

Suppose there are two judges who judge cakes, say, on some continuous scale, then based on a fixed, perhaps unknown, cutoff, pronounce the cakes as "bad" or "good". Suppose the latent continuous metric of the two judges has correlation coefficient $\rho$. Now generate 300 cakes, have both judges taste each of them, and generate a 2x2 contingency table of "judge 1 bad/good" vs "judge 2 bad/good". Based on the data in this contingency table, the sample tetrachoric coefficient is an estimator (I believe it is the MLE, in fact) of the 'latent' correlation $\rho$. Note that the cutoffs employed by the two judges need not be known.

The Phi coefficient views the pronouncements "bad", "good" themselves as the variable of interest, coded as 0/1, and is the sample Pearson coefficient of the 0/1 data. These are not the same.

edit in response to @pbneau's comments: my suspicion was that the tetrachoric and phi coefficients would diverge in the limit cases: as $\rho \to 0$ and as the cutoffs for the latent rating move away from the mean ratings. I tested this with my own code (in Matlab) for tetrachoric and phi coefficient. I tested with zero mean, unit variance Gaussian latent ratings with population correlations of 0.01 and 0.25, and with cutoffs of 0,0 and 1.5,-0.5. I ran 2048 experiments, each with 2048 'cakes'. The scatter fits for tetrachoric versus phi are shown here:

![bunch o scatter fits](https://i.stack.imgur.com/w5vg8.jpg)

(looks like the image upload thing is not working; top row is $c1 = c2 = 0$, bottom is $c1 = 1.5, c2 = -0.5$, left column is $\rho = 0.01$, right column is $\rho = 0.25$. The best fits along the top row are $\rho^* = 1.5 \phi + 0$ and $\rho^* = 1.4 \phi + 0.01$; along the bottom row, $\rho^* = 2.2 \phi$ and $\rho^* = 3.1 \phi - 0.04$. Perhaps I can get this image hosted somewhere else that doesn't squash them so much...)

I'm not sure you can read the text on these (the preview looks bad); the upshot is that when the cutoffs are at the population mean, and thus the contingency tables are 'balanced' row-wise and col-wise (top row of plots), you get good correlation between the two metrics, but the tetrachoric tends to be a bit larger than phi. When the cutoffs are somewhat imbalanced, you get slightly worse correlation between the metrics, and the phi appears to 'shrink' towards zero. Thus my initial intuition was only half correct: the worst case appears to be as the latent $\rho$ moves away from zero, and the cutoffs move away from the latent means.
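Here is a rough R analogue of that simulation (my addition rather than the answerer's code, which was in Matlab); it assumes the psych package, whose `tetrachoric()` and `phi()` functions I use for the two coefficients:

```
library(MASS)    # for mvrnorm
library(psych)   # for tetrachoric() and phi(); assumed installed from CRAN

set.seed(1)
rho  <- 0.25                  # latent correlation between the two judges
cuts <- c(1.5, -0.5)          # unbalanced cutoffs, as in the bottom row above

lat <- mvrnorm(2048, mu = c(0, 0),
               Sigma = matrix(c(1, rho, rho, 1), 2))
x1  <- as.integer(lat[, 1] > cuts[1])   # judge 1: good = 1, bad = 0
x2  <- as.integer(lat[, 2] > cuts[2])   # judge 2

tab <- table(x1, x2)          # 2x2 contingency table
tetrachoric(tab)$rho          # estimate of the latent correlation
phi(tab)                      # Pearson correlation of the 0/1 codes
```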
null
CC BY-SA 2.5
null
2010-09-26T16:30:04.203
2010-09-28T00:12:55.770
2010-09-28T00:12:55.770
795
795
null
3090
2
null
725
3
null
The best place to look for free/open source capabilities of this nature is GRASS GIS. The image processing manual is [here](https://grass.osgeo.org/gdp/imagery/grass4_image_processing.pdf). Because this is constantly undergoing development, it would be worthwhile posting an inquiry on one of the GRASS user lists (found through links on the home page [here](https://grass.osgeo.org)).
null
CC BY-SA 4.0
null
2010-09-26T16:36:50.307
2022-08-28T14:53:06.907
2022-08-28T14:53:06.907
79696
919
null
3091
1
3095
null
7
1815
I've seen a little bit here about the difference between statistical inference for random samples, and what happens when we actually have population data. Most arguments seem to suggest you "never actually have the population" and the population data you think you have represents some unobservable super population which is the data-generating process. But say, we have a state, covered by counties, and we have data on some environmental aspect of each county, for example, the total area of forest. We want to see if some binary county-level outcome is related to forest - let's say presence or absence of a disease (thinking Lyme disease here). As far as I can tell, this qualifies as "population" level data. Would running a regular logistic regression, and the normal tests on parameters, yield correct estimates? One thing that comes into this is whether the "samples" are independent. In this case, forest in county A is probably related to forest in the county B next door, meaning a reduction in the degrees of freedom and imprecision in standard error estimates (as I understand it). But even if we had totally independent data, would standard, sample-based statistical inference be appropriate here?
Regression analysis and parameter estimates with populations
CC BY-SA 2.5
null
2010-09-26T18:17:03.283
2010-09-26T20:32:55.893
2010-09-26T19:17:10.030
null
null
[ "hypothesis-testing", "regression", "logistic" ]
3092
2
null
3091
1
null
I work in a related area, using what is called the Population Approach in pharmacometrics. Basically you have a sample of individuals from the general population, and a moderate-size sample of observations of each individual, plus demographic covariates for these individuals.

To model them, we build several levels of model. We want an overall model that applies to the individuals and that we hope will be representative of the entire population. The parameters of this overall model are called "fixed effects". Below that level, we have a model of each individual, because we expect individuals to differ from each other. So we have additional parameters that are different for each individual, and these are called "random effects". A key part to be estimated in the overall model is a measure of the variability of the random effects. I.e. we know the individuals will differ, but we want to model that variability. A further level of modeling is the variability of individual observations.

Applying this to your field is not something I fully understand, but I would recommend WinBugs as a modeling tool, and the book "Markov Chain Monte Carlo in Practice" by Gilks, Richardson, and Spiegelhalter. There are example problems in there that look like they might be applicable to your problem.
null
CC BY-SA 2.5
null
2010-09-26T18:47:29.700
2010-09-26T18:47:29.700
null
null
1270
null
3093
2
null
3091
2
null
What you describe seems to refer to a particular case of multilevel modeling, where data are organized into a hierarchical structure; in your case, forests (1st-level units) nested in counties nested in states, but see [Under what conditions should one use multilevel/hierarchical analysis?](https://stats.stackexchange.com/q/1995/930).

Now, the "particular" case comes from the fact that you want to account for the spatial proximity which might be a vector for the propagation of [Lyme disease](http://en.wikipedia.org/wiki/Lyme_disease) (correct me if I am wrong), as is done in epidemiology where one is interested in studying the geography of infectious disease. In the usual case, we can use so-called spatial models like the multiple membership model or the conditional autoregressive model, among others. I enclose a couple of references about these approaches at the end, but I think you will find more references by looking at related studies in ecology or epidemiology.

Now, I think that you may pay particular attention to the following paper of Langford et al., which features multilevel modeling with spatially correlated data:

> Langford, I.H., Leyland, A.H., Rasbash, J., and Goldstein, H. (1999). Multilevel modelling of the geographical distributions of diseases. Journal of the Royal Statistical Society C, 48, 253-268.

Harvey Goldstein is the author of an excellent book on multilevel modeling, [Multilevel Statistical Models](http://www.cmm.bristol.ac.uk/team/HG_Personal/multbook1995.pdf) (the 2nd edition is available for free). Finally, the book of Andrew Gelman, [Data Analysis Using Regression and Multilevel/Hierarchical Models](http://www.stat.columbia.edu/~gelman/arm/), may provide additional clues about hierarchical/multilevel modeling.

About software, I know there is the R [spdep](http://cran.r-project.org/web/packages/spdep/index.html) package for modeling spatially correlated outcomes, but there are some examples of analysis of spatial hierarchical data with WinBUGS on the [BUGS Project](http://www.mrc-bsu.cam.ac.uk/bugs/weblinks/webresource.shtml).

References

- Browne, W.J., Goldstein, H., and Rasbash, J. (2001). Multiple membership multiple classification (MMMC) models. Statistical Modelling, 1, 103-124.
- Lichstein, J.W., Simons, T.R., Shriner, S.A., and Franzreb, K.E. (2002). Spatial autocorrelation and autoregressive models in ecology. Ecological Monographs, 72(3), 445-463.
- Feldkircher, M. (2007). A Spatial CAR Model applied to a Cross-Country Growth Regression.
- Lawson, A.B., Browne, W.J., and Vidal Rodeiro, C.L. (2003). Disease mapping with WinBUGS and MLwiN. John Wiley & Sons.
null
CC BY-SA 2.5
null
2010-09-26T19:46:28.427
2010-09-26T19:46:28.427
2017-04-13T12:44:40.807
-1
930
null
3094
2
null
3052
2
null
Some of [Bioconductor](http://www.bioconductor.org)'s packages (e.g., [ShortRead](http://www.bioconductor.org/packages/2.3/bioc/html/ShortRead.html), [Biostrings](http://www.bioconductor.org/packages/2.2/bioc/html/Biostrings.html), [BSgenome](http://www.bioconductor.org/packages/2.3/bioc/html/BSgenome.html), [IRanges](http://www.bioconductor.org/packages/2.3/bioc/html/IRanges.html), [genomeIntervals](http://www.bioconductor.org/packages/2.6/bioc/html/genomeIntervals.html)) offer facilities for dealing with genome positions or coverage vectors, e.g. for [ChIP-seq](http://en.wikipedia.org/wiki/Chip-Sequencing) and identifying enriched regions.

As for the other answers, I agree that any method relying on ordered observations with some threshold-based filter would allow you to isolate low signal within a specific bandwidth.

Maybe you can also look at the methods used to identify so-called "islands":

> Zang, C, Schones, DE, Zeng, C, Cui, K, Zhao, K, and Peng, W (2009). A clustering approach for identification of enriched domains from histone modification ChIP-Seq data. Bioinformatics, 25(15), 1952-1958.
null
CC BY-SA 2.5
null
2010-09-26T20:15:09.507
2010-09-26T20:15:09.507
null
null
930
null
3095
2
null
3091
5
null
I like what chl has written. Despite that, I am moved to discuss whether this situation necessarily requires a complicated model. But first, let's begin with responses to some comments.

(1) You don't lose any degrees of freedom due to spatial correlation of forest cover. This is an explanatory variable, not the one you're trying to model. You might lose "degrees of freedom" when the residuals of the dependent variable exhibit spatial autocorrelation. Such correlation is not necessarily a given even when a map of the dependent variable itself suggests strong spatial correlation. The reason is that the correlation in the map likely derives from the correlation in the forest cover (and other spatially distributed covariates). Remember, in such models you don't ask whether the "samples are independent"--it's usually obvious they are not--but whether any deviations in them from their modeled values are independent.

(2) Therefore, a conditional autoregressive model might not be necessary. I think this modeling choice would be attractive only if you want to test a theory of contagion.

Now to answer the original question: yes, first run an ordinary logistic (or Poisson) model, because as a general principle it's good to try simple (but reasonable) models first. Provided its residuals do not exhibit strong spatial correlation, you are then ok with the results. If there is evidence of correlation and evidence that it will appreciably affect your answers (coefficients, predictions, or whatever), consider using a generalized linear geostatistical model (GLM). These are described in Diggle & Ribeiro, Model-based Geostatistics (a relatively inexpensive and accessible text), which itself documents several R packages for making the estimates and some associated EDA tools (geoR and geoRglm). The GLM approach lets you simultaneously fit your model and assess the degree of spatial autocorrelation.

The principal limitations I have found in these packages are (1) they don't handle anisotropy well--you can detect it but it's hard to incorporate it in a model--and (2) they don't have a provision for nested variograms, which somewhat limits your ability to model the spatial correlation. Both of these are no problem for smallish datasets, because you need (typically) hundreds to thousands of observations, or more, to model correlation at this level of detail.

Finally, a word about the "population" question. I assume you are interested in more than a mere description of the data: you seek information about a possible association between disease and other observable factors. Even when you have a comprehensive description of the data for a spatial region, it still does not act like a census, because the outcomes could have turned out otherwise. Next year, with identical forest cover, there will be a slightly different pattern of disease. In other regions of the country or the world and at other times, exactly the same combinations of explanatory variable values are likely to produce varying rates of disease. Thus, you're modeling a process, not a population.
null
CC BY-SA 2.5
null
2010-09-26T20:32:55.893
2010-09-26T20:32:55.893
null
null
919
null
3096
2
null
2299
6
null
Actually, Benford's Law is an incredibly powerful method. This is because the Benford frequency distribution of first digits is applicable to all sorts of data sets that occur in the real or natural world.

You are right that you can use Benford's Law only in certain circumstances. You say that the data have to have a uniform log distribution. Technically, this is absolutely correct. But you could describe the requirement in a much simpler and more lenient way: all you need is that the data set's range crosses at least one order of magnitude, say from 1 to 9, 10 to 99, or 100 to 999. If it crosses two orders of magnitude, you are in business, and Benford's Law should be pretty helpful.

The beauty of Benford's Law is that it helps you narrow your investigation really quickly onto the needle(s) within the haystack of data. You look for the anomalies whereby the frequency of a first digit is much different from the Benford frequencies. Once you notice that there are too many 6s, you then use Benford's Law to focus on just the 6s, but now on the first two digits (60, 61, 62, 63, etc.). Maybe you find out there are a lot more 63s than Benford suggests (you would check that by calculating the Benford frequency log(1+1/63), which gives a value close to 0%). So you apply Benford to the first three digits. By the time you find out there are way more 632s than expected (check against the Benford frequency log(1+1/632)), you are probably on to something. Not all anomalies are frauds. But most frauds are anomalies.

If the data set that Marc Hauser manipulated is natural, unconstrained data with a range that is wide enough, then Benford's Law would be a pretty good diagnostic tool. I am sure there are other good diagnostic tools that also detect unlikely patterns, and by combining them with Benford's Law you could most probably have investigated the Marc Hauser affair effectively (taking into consideration the mentioned data requirement of Benford's Law).

I explain Benford's Law a bit more in this short presentation that you can see here: [http://www.slideshare.net/gaetanlion/benfords-law-4669483](http://www.slideshare.net/gaetanlion/benfords-law-4669483)
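To make the procedure concrete, here is a rough R sketch of the first-digit check (the data are simulated and only stand in for the real numbers one would investigate):

```
set.seed(1)
x <- rlnorm(5000, meanlog = 6, sdlog = 2)        # spans several orders of magnitude

first.digit <- floor(x / 10^floor(log10(x)))     # leading digit of each value
observed <- as.numeric(table(factor(first.digit, levels = 1:9))) / length(x)
benford  <- log10(1 + 1/(1:9))                   # Benford's expected frequencies

round(rbind(observed, benford), 3)
```

Digits whose observed frequency clearly departs from the Benford row are the anomalies one would then drill into with the first two and first three digits.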
null
CC BY-SA 2.5
null
2010-09-26T22:37:18.040
2010-09-26T22:53:23.677
2010-09-26T22:53:23.677
1329
1329
null
3098
2
null
2948
2
null
I just came across the tnet package for R. The creator seems to be researching community discovery in weighted and bipartite (two-mode) graphs. [http://opsahl.co.uk/tnet/content/view/15/27/](http://opsahl.co.uk/tnet/content/view/15/27/) I have not yet used it.
null
CC BY-SA 2.5
null
2010-09-27T01:33:45.997
2010-09-27T01:33:45.997
null
null
null
null
3099
2
null
3080
3
null
I think [Jeromy's answer](https://stats.stackexchange.com/questions/3080/how-to-detect-which-one-is-the-better-study-when-they-give-you-conflicting-result/3088#3088) is sufficient if you are examining two experimental studies or an actual meta-analysis. But often we are faced with examining two non-experimental studies, and are tasked with assessing the validity of those two disparate findings. As [Cyrus's grocery list of questions](https://stats.stackexchange.com/questions/3080/how-to-detect-which-one-is-the-better-study-when-they-give-you-conflicting-result/3088#3088) suggests, the topic itself is not amenable to a short response, and whole books are in essence aimed at addressing such a question.

For anyone interested in conducting research on non-experimental data, I would highly suggest you read [Experimental and quasi-experimental designs for generalized causal inference](http://books.google.com/books?id=o7jaAAAAMAAJ&q) by William R. Shadish, Thomas D. Cook, and Donald Thomas Campbell (I have also heard that the older versions of this text are just as good).

Several items Jeromy referred to (bigger sample sizes and greater methodological rigour), and everything that Cyrus mentions, would be considered what Campbell and Cook refer to as "internal validity". These include aspects of the research design and the statistical methods used to assess the relationship between X and Y. In particular, as critics we are concerned about aspects of either that could bias the results and diminish the reliability of the findings. As this is a forum devoted to statistical analysis, much of the answers are centered around statistical methods to ensure unbiased estimates of whatever relationship you are assessing. But there are other aspects of the research design, unrelated to statistical analysis, that diminish the validity of the findings no matter what rigorous lengths one goes to in the statistical analysis (for instance, several of the aspects of experimental fidelity Cyrus mentions can be addressed but not solved with statistical methods, and if they occur they will always diminish the validity of the study's results). There are many other aspects of internal validity that become crucial to assess in comparing results of non-experimental studies that are not mentioned here, and aspects of research designs that can distinguish the reliability of findings. I don't think it is quite appropriate to go into too much detail here, but I would often take the results of a quasi-experimental study (such as an interrupted time series or a matched case-control) more seriously than I would a study that is not quasi-experimental, regardless of the other aspects Jeromy or Cyrus mentioned (within reason, of course).

Campbell and Cook also refer to the "external validity" of studies. This aspect of research design is often much smaller in scope, and does not deserve as much attention as internal validity. External validity essentially deals with the generalizability of the findings, and I would say laymen can often assess external validity reasonably well as long as they are familiar with the subject.

Long story short, read Shadish, Cook, and Campbell's book.
null
CC BY-SA 2.5
null
2010-09-27T03:26:45.223
2010-09-27T03:26:45.223
2017-04-13T12:44:29.013
-1
1036
null
3100
1
null
null
4
369
I previously asked how to estimate the latent potential of a runner who [ran the 100 metres each day for 200 days](https://stats.stackexchange.com/questions/2966/estimating-latent-performance-potential-based-on-a-sequence-of-observations). Latent skill was defined as "the latent time it would take the individual to run if they (a) applied maximal effort; and (b) had a reasonably good run for them (i.e., no major problems with the run; but still a typical run)." Now assume that I estimated latent skill for the 100 metres for each of the 200 days, but that I also had data on the same 200 days, this time on running the 400 metres. Obviously I could repeat whatever process I adopted for the 100 metres to form an estimate of latent skill for the 400 metres at each of the 200 time points. In both cases I would expect the time to complete the runs to generally get faster with practice, but the raw data would vary from day to day. I want to quantify the degree of consistency of the two curves; I don't really want to quantify the consistency of the observed data. If it makes a difference, the two methods I was considering for estimating the effect of time were nonlinear regression and [isotonic regression](http://en.wikipedia.org/wiki/Isotonic_regression).

My question:

- What is a good way to quantify and calculate the consistency of the fitted curves for the 100 and 400 metres?

Initial thoughts:

- Estimate the fitted values for both curves and correlate the fitted values.
- Use a parametric model like $\theta_1 \exp(-\theta_2 t) + \theta_3 + \epsilon$ ($t$ is an index of day) and then quantify the degree to which constraining $\theta_2$ (the parameter that determines shape) to be equal across the 100 and 400 metres would lead to poorer fit (a rough sketch of this comparison appears below).
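For what it's worth, here is a rough sketch (on simulated data with made-up parameter values) of the second idea, comparing a model with a shared shape parameter against one where every parameter is event-specific:

```
set.seed(1)
t <- 1:200
y100 <- 13 + 2*exp(-t/40) + rnorm(200, 0, 0.15)
y400 <- 52 + 9*exp(-t/40) + rnorm(200, 0, 0.60)
d <- data.frame(t = rep(t, 2), y = c(y100, y400),
                run = factor(rep(c("100m", "400m"), each = 200)))

# free model: every parameter differs by event
free <- nls(y ~ th1[run]*exp(-th2[run]*t) + th3[run], data = d,
            start = list(th1 = c(2, 9), th2 = c(0.03, 0.03), th3 = c(13, 52)))
# constrained model: common shape (decay) parameter th2
con  <- nls(y ~ th1[run]*exp(-th2*t) + th3[run], data = d,
            start = list(th1 = c(2, 9), th2 = 0.03, th3 = c(13, 52)))

anova(con, free)   # does freeing the shape parameter fit appreciably better?
cor(fitted(free)[d$run == "100m"], fitted(free)[d$run == "400m"])  # idea 1: correlate fits
```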
Quantifying the degree of consistency of two fitted curves
CC BY-SA 2.5
null
2010-09-27T06:24:37.780
2017-11-03T15:20:24.033
2017-11-03T15:20:24.033
11887
183
[ "regression", "nonlinear-regression", "curve-fitting", "isotonic" ]
3101
1
3760
null
7
1172
When trying to assess the validity of a claim relying on statistics, I was taught (in the school of epidemiology) that the scale to use is "[the pyramid of evidence](http://www.r-statistics.com/2010/02/for-fun-correlation-of-us-state-with-the-number-of-clicks-on-online-banners/)" ![alt text](https://i.stack.imgur.com/EOI9q.gif) However, when conducting a discussion in the field of economics or political science, it is often very difficult (to impossible) to recreate a political situation so as to allow experimentation. Since this is the case, my questions are:

- Can this be mitigated in any way?
- In what way can (or do) political/economic disciplines build their arguments (using statistics) in a valid way?

The motivation for my question started from reading a discussion about the privatization of the academic system in Israel, e.g.: Should the charging of students be the same for all disciplines or different? Should the running of the university be performed by the academic staff or by outside managers? And so on. One thing that appears to happen in the discussion is that the people supporting the privatization (economists mostly) seem to be using statistics of all sorts to support their claims, while the people on the other side of the argument are not as equipped with them (usually people from the humanities). The economists seem to blame the humanities for not using numeric data, while the humanities accuse the economists of being too speculative - and claim that the data they bring are open to too many interpretations. I am trying to understand if these two groups of disciplines can create a more fruitful dialog, or is it just not possible due to the complexity of the subject matter and the limitation of our control over it.

The following discussion has a nice link in it, but it was stopped quite early:

- Lies, Damn Lies and Statistics

Also a bunch of good relevant discussions were introduced here:

- https://stats.stackexchange.com/search?q=causal-inference

p.s: Due to the subjectivity of the topic - I am marking this as community wiki.
How much can the "pyramid of evidence" be applied to economics and political sciences?
CC BY-SA 2.5
null
2010-09-27T07:25:32.120
2010-10-19T14:34:44.807
2017-04-13T12:44:33.237
-1
253
[ "causality", "inference" ]
3102
2
null
3100
3
null
My understanding is that your sample size is 200 observations for each running length (i.e. the sample size is such that doubling the number of observations has a non-trivial effect on accuracy). Denote by $y_1$ the 100 metres series and by $y_2$ the 400 metres series. I suggest pooling both series into $y$, adding a dummy $l_i=I(i\in y_1)$ (where $I()$ is the indicator function), and performing an isotonic regression of $\log(y)$ on $date$ and the dummies ($l$ as well as the $z$ controlling for injuries, running style, etc.). Because of the $\log()$ transformation on the left-hand side, the value of the coefficient of $l$ can be understood as the average percentage increase in running time, all else equal, due to running a four-times-longer distance. The doubling of sample size will improve accuracy since you have in effect only added a single parameter to the model. Also, re-reading my previous answer, I noticed I had linked to the wrong article. By the same author, the correct reference is 'Bayesian Isotonic Regression and Trend Analysis', B. Neelon, D. B. Dunson (the previous link was to the version of their approach adapted to the case of a binary dependent variable). A non-gated version of the article is [here](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.70.5142&rep=rep1&type=pdf)
null
CC BY-SA 2.5
null
2010-09-27T08:59:59.863
2010-09-27T11:42:29.113
2010-09-27T11:42:29.113
603
603
null
3104
1
3108
null
12
8095
I like introducing probability by discussing the [Boy or Girl](http://en.wikipedia.org/wiki/Boy_or_Girl_paradox) or [Bertrand](http://en.wikipedia.org/wiki/Bertrand_paradox_%28probability%29) paradox. What other (short) problem/game provides a motivating introduction to probability? (One answer per response, please) P.S. This is about a gentle introduction to probability, but in my opinion it is relevant for statistics teaching, as it allows further discussion of discrete events, Bayes's theorem, probability/measurable spaces, etc.
What is your favorite problem for an introduction to probability?
CC BY-SA 2.5
null
2010-09-27T10:11:37.493
2015-04-11T17:17:06.273
2011-05-20T18:22:17.697
930
930
[ "teaching" ]
3105
1
null
null
8
4114
I wanted to find out what kinds of uses of stochastic process theory exist in EE & CS. For example, I find these kinds of usages interesting:

- using a stochastic signal as a carrier that is modulated by the information signal for communication
- using stochastic process analysis for improving photographed pictures after long exposure

It would be good if every example were in a separate answer, so it would be possible to up-vote those that are 'the best'. I would classify a good example as one that:

- truly takes advantage of probability and statistics theory
- has greater 'usefulness' than other examples
What are the examples for stochastic processes in Electrical Engineering and Computer Science?
CC BY-SA 2.5
null
2010-09-27T10:19:33.873
2010-09-28T01:25:32.580
null
null
null
[ "stochastic-processes" ]
3106
2
null
3105
4
null
What about monitoring network congestion? An underlying stochastic process is assumed to be driving the congestion, and the process has reached an equilibrium state and is stationary. That is, process characteristics such as the mean level do not depend on time. You can then use this stochastic process to give you an idea of when to check the network for possible congestion. Some recent papers:

- N. Duffield, C. Lund, and M. Thorup. Optimal combination of sampled network measurements. Proceedings of IMC '05, Internet Measurement Conference, 91-104, 2005.
- B. M. Parker, S. G. Gilmour, and J. Schormans. Measurement of packet loss probability by optimal design of packet probing experiments. IET Communications, 3:979-991, 2009.
null
CC BY-SA 2.5
null
2010-09-27T10:36:49.463
2010-09-27T10:36:49.463
null
null
8
null
3107
2
null
3104
9
null
A standard example is the [Monty Hall](http://en.wikipedia.org/wiki/Monty_Hall_problem) game. Here's how I approach this example:

- Give the class sets of three cards and get them to play the game in pairs.
- Each pair plays the game following a particular strategy, e.g. always switching doors.
- Afterwards, I use the number of times the class won to calculate a Monte Carlo estimate of the probability of winning.
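If there is no time to play the physical game, a quick simulation makes the same point (a sketch, not classroom material):

```
set.seed(123)
n <- 10000
prize  <- sample(1:3, n, replace = TRUE)
choice <- sample(1:3, n, replace = TRUE)

mean(choice == prize)   # "always stay": wins about 1/3 of the time
mean(choice != prize)   # "always switch": wins whenever the first pick was wrong, about 2/3
```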
null
CC BY-SA 2.5
null
2010-09-27T10:45:17.213
2010-09-27T10:45:17.213
null
null
8
null
3108
2
null
3104
11
null
A good example of showing how people are non-random is to get the class to write down a number between 1 and 10. You then ask the 1's, 2's, ... to stand up. What happens is that the majority of the class choose 7 and very few choose 1 or 10. This leads on to interesting questions, such as:

- How should you choose a random number?
- How would you design an experiment to test this?
- What do we mean by random?
null
CC BY-SA 2.5
null
2010-09-27T10:48:47.553
2010-09-27T10:48:47.553
null
null
8
null
3109
1
3326
null
5
140
For my thesis I am implementing a tracking algorithm that tracks clusters of data points. However, I am having a hard time finding research papers and/or an overview of commonly used algorithms on this subject.
Tracking Algorithms
CC BY-SA 2.5
null
2010-09-27T11:03:45.737
2010-10-05T03:20:46.547
2010-09-27T11:12:02.683
183
1411
[ "machine-learning" ]
3110
2
null
3105
3
null
Google's PageRank [algorithm](http://en.wikipedia.org/wiki/PageRank#Simplified_algorithm) is a stochastic approximation (via the stationary distribution of a Markov chain) of the proportion of visits received asymptotically by website $x_i$.
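A toy power-iteration sketch of the idea (a made-up 4-page web with damping factor 0.85, purely for illustration):

```
# L[i, j] = 1 if page j links to page i
L <- matrix(c(0, 0, 1, 1,
              1, 0, 0, 1,
              1, 1, 0, 0,
              0, 1, 1, 0), nrow = 4, byrow = TRUE)
M <- sweep(L, 2, colSums(L), "/")      # column-stochastic transition matrix
d <- 0.85
r <- rep(1/4, 4)                       # start from the uniform distribution
for (i in 1:100) r <- d * M %*% r + (1 - d)/4
round(as.numeric(r), 3)                # asymptotic visit proportions (PageRank scores)
```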
null
CC BY-SA 2.5
null
2010-09-27T11:48:54.790
2010-09-27T11:48:54.790
null
null
603
null
3111
2
null
3104
3
null
The Drunkard's Walk by Leonard Mlodinow is full of such examples, including one on the meaning of a positive HIV test that is 99.9% accurate: using Bayes' theorem, the probability that a person who tests positive is actually infected turns out to be less than 10% (a similar example is detailed in chapter two of Agresti's Introduction to Categorical Data Analysis). Another example (I am breaking the one-example-per-answer rule, but it is essentially the same conditional-probability problem) is from the Simpson trial. One of Simpson's lawyers, Alan Dershowitz, noted that even though Simpson beat his wife, that hardly mattered, because in the United States four million women are battered every year by their male partners, yet only about one in 2,500 is ultimately murdered by her partner; so, by the 'reasonable doubt' criterion, this is irrelevant. The jury found that argument persuasive, but it's spurious. The relevant question was what percentage of all battered women who are murdered are killed by their abusers, which is not 1 in 2,500, but roughly 9 in 10.
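For the first (HIV) example, the arithmetic is easy to reproduce with assumed inputs (the book's exact figures may differ): prevalence 1 in 10,000, sensitivity and specificity both 99.9%.

```
prev <- 1/10000; sens <- 0.999; spec <- 0.999
p.pos <- sens*prev + (1 - spec)*(1 - prev)   # P(test positive)
sens*prev / p.pos                            # P(infected | positive) -- about 0.09
```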
null
CC BY-SA 2.5
null
2010-09-27T12:00:06.260
2010-09-27T12:05:58.620
2010-09-27T12:05:58.620
603
603
null
3112
1
3127
null
8
11531
I am using the epitools package in R to calculate the confidence interval of a relative risk. [http://bm2.genes.nig.ac.jp/RGM2/R_current/library/epitools/man/riskratio.html](http://bm2.genes.nig.ac.jp/RGM2/R_current/library/epitools/man/riskratio.html) There are three methods available for the calculation, namely Wald, Small, and Boot. I want to find an article describing the three methods, but I can't find any; can anyone help? Thanks!
Calculation of Relative Risk Confidence Interval
CC BY-SA 2.5
null
2010-09-27T13:18:11.573
2010-09-27T22:05:45.907
null
null
588
[ "confidence-interval", "relative-risk" ]
3113
1
null
null
14
9544
I'm curious to know if anyone has a specific reference (text or journal article) to support the common practice in the medical literature of performing sample size calculation using methods that are parametric (i.e. assuming a normal distribution and a certain variance of measurements) when the analysis of the primary trial outcome will be done using non-parametric methods. An example: primary outcome is time to vomiting after giving a certain drug, which is known to have a mean value of 20 minutes (SD 6 minutes), but has a noticeably right-skewed distribution. The sample size calculation is done with the assumptions listed above, using the formula $n(\text{per-group})=f(\alpha,\beta) \times (2\sigma^2 /(\mu_1 - \mu_2)^2 )$, where $f(\alpha, \beta)$ changes based on the desired $\alpha$ and $\beta$ errors. However, because of the skewness of the distribution, the analysis of the primary outcome will be based on ranks (non-parametric method such as the Mann Whitney U test). Is this schema supportable by authors in the statistical literature, or should non-parametric sample size estimates be performed (and how would these be done)? My thoughts are that, for ease of calculation, it is acceptable to do the above practice. After all, sample size estimates are just that - estimates that make several assumptions already - all of which are likely slightly (or very!) imprecise. However, I am curious to know what others think, and specifically to know if there are any references to support this line of reasoning. Many thanks for any assistance.
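For reference, the parametric calculation in the example can be reproduced with base R; the inputs below (a 3-minute difference to detect, SD 6 minutes, two-sided alpha 0.05, power 0.80) are illustrative only:

```
power.t.test(delta = 3, sd = 6, sig.level = 0.05, power = 0.80)   # about 64 per group
```

A rule of thumb sometimes quoted is to inflate this n by roughly 15% (based on the worst-case asymptotic relative efficiency of 0.864 of the Wilcoxon/Mann-Whitney test relative to the t-test) when the analysis will be rank-based; whether that is acceptable practice is exactly the question being asked here.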
Parametric sample size calculation and non-parametric analysis
CC BY-SA 2.5
null
2010-09-27T13:29:01.003
2010-11-23T19:45:50.180
2010-11-23T15:23:13.617
null
561
[ "nonparametric", "sample-size" ]
3114
2
null
3048
1
null
Dimensionality reduction is basically applying a clustering algorithm to the attributes (columns). Because of the fairly large dimensionality of your dataset, you might try to use a SOM (self-organizing map/Kohonen net) to create a map for individuals or pages. You can then see whether there are meaningful (interpretable) patterns.
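A hedged sketch with the kohonen package (the data matrix is made up - rows are individuals, columns are 0/1 page-visit indicators - and plotting options may differ slightly across package versions):

```
library(kohonen)

set.seed(7)
X <- matrix(rbinom(1000 * 50, 1, 0.1), nrow = 1000)  # stand-in for visit data
sm <- som(scale(X), grid = somgrid(8, 8, "hexagonal"))

plot(sm, type = "counts")   # how many individuals map to each node
plot(sm, type = "codes")    # the codebook (profile) of each node
```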
null
CC BY-SA 2.5
null
2010-09-27T14:10:46.990
2010-09-27T14:10:46.990
null
null
null
null
3115
1
null
null
15
16854
How do you tell whether the correlations at different lags obtained from the cross-correlation (the ccf function) of two time series are significant?
Cross-correlation significance in R
CC BY-SA 2.5
null
2010-09-27T14:42:20.510
2015-06-04T17:53:00.033
null
null
null
[ "r", "statistical-significance", "cross-correlation" ]
3116
2
null
3104
6
null
I really like any problem that has some result that is counter-intuitive to what we'd like to think. The problems thus far are classics in the field of probability, so I'll add my favorite classic problem: [The Birthday Problem](http://en.wikipedia.org/wiki/Birthday_problem). I always found it amazing that there was such a high probability of having two people with the same birthday with such a small sample.
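Base R even has the calculation built in, which makes a nice in-class demonstration:

```
pbirthday(23)               # ~0.507: chance of at least one shared birthday among 23 people
qbirthday(0.5)              # smallest group size giving a 50% chance (23)
1 - prod((365:343)/365)     # the same probability computed from the complement
```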
null
CC BY-SA 2.5
null
2010-09-27T15:13:38.850
2010-09-27T15:13:38.850
null
null
1118
null
3117
1
3122
null
11
196
I've been studying hierarchical reinforcement learning problems, and while a lot of papers propose algorithms for learning a policy, they all seem to assume they know in advance a graph structure describing the hierarchy of the actions in the domain. For example, [The MAXQ Method for Hierarchical Reinforcement Learning](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.9.313&rep=rep1&type=ps) by Dietterich describes a graph of actions and sub-tasks for a simple Taxi domain, but not how this graph was discovered. How would you learn the hierarchy of this graph, and not just the policy? In other words, using the paper's example, if a Taxi were driving around aimlessly, with little prior knowledge of the world, and only the primitive move-left/move-right/etc. actions to take, how would it learn higher-level actions like go-pick-up-passenger? If I'm understanding the paper correctly (and I may not be), it proposes how to update the policy for these high-level actions, but not how they're formed to begin with.
Learning the Structure of a Hierarchical Reinforcement Task
CC BY-SA 2.5
null
2010-09-27T16:31:58.150
2010-09-27T18:14:16.977
2010-09-27T16:54:53.537
741
741
[ "machine-learning" ]
3118
2
null
1557
11
null
As mentioned by others, James-Stein is not often used directly, but it is really the first paper on shrinkage, which in turn is used pretty much everywhere in single and multiple regression. The link between James-Stein and modern estimation is explained in detail [in this paper](http://www-stat.stanford.edu/~candes/papers/NonlinearEstimation.pdf) by E. Candes. Going back to your question, I think James-Stein is anything but a mere curiosity: it was intellectual for sure, but it had an incredibly disruptive effect on statistics, and nobody could dismiss it as a curiosity afterwards. Everyone thought that the empirical mean was an admissible estimator, and Stein proved them wrong with a counterexample. The rest is history.
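A tiny simulation of the Stein effect (a sketch, not the original construction): for $p \ge 3$ normal means, shrinking the raw observations toward zero lowers the total squared error on average.

```
set.seed(1)
p <- 10
theta <- rnorm(p)                              # arbitrary true means
sim <- replicate(5000, {
  x  <- rnorm(p, theta, 1)                     # one observation per coordinate
  js <- max(0, 1 - (p - 2)/sum(x^2)) * x       # positive-part James-Stein shrinkage
  c(mle = sum((x - theta)^2), js = sum((js - theta)^2))
})
rowMeans(sim)                                  # the JS risk comes out smaller than the MLE's
```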
null
CC BY-SA 2.5
null
2010-09-27T16:35:22.000
2010-09-27T16:35:22.000
null
null
30
null
3119
1
3124
null
3
146
I have 2 classifiers whose scores are on different scales. How can I adjust one classifier to the scale of the other without loss of quality? On the scatter plot we have the [2 solutions plotted (x1, x2) against each other](http://i55.tinypic.com/2v0fm28.png). I believe using a linear combination as a link function is not appropriate in this particular case. Help is much appreciated.
How can I adjust a classifier to the scale of the other?
CC BY-SA 2.5
null
2010-09-27T16:53:52.353
2010-09-27T19:46:54.843
2010-09-27T19:42:06.567
919
null
[ "classification", "data-transformation", "scales", "exploratory-data-analysis" ]
3120
2
null
1357
7
null
More generally, mutual information is invariant under any smooth and uniquely invertible transformation of the variables. See "Estimating mutual information" by A Kraskov, H StΓΆgbauer, P Grassberger - Physical Review E, 2004 [[http://arxiv.org/pdf/cond-mat/0305641](http://arxiv.org/pdf/cond-mat/0305641)]
null
CC BY-SA 2.5
null
2010-09-27T17:39:46.393
2010-09-27T17:39:46.393
null
null
439
null
3121
1
null
null
2
1365
I would like to know the formula or method for calculating this. For example, I want to know how many numbers I must pick in order to have 95% confidence that I will be within 2% of the average. I'm also interested in doing a similar analysis for the median. Update: Numbers are not put back in the pool. I think this has to do with combinations rather than trials? For example, if I pick 1 number or 99 numbers there are 100 combinations, if I pick 100 numbers there is 1 combination, etc. I think you could figure out all of the combinations when you pick 1 ball, 2 balls, 3 balls, etc. and see how many are within 2% of the median. I'm interested in seeing just how reliable the average price statistic is for the housing market. For example, in a single month 5% of the houses may sell, or maybe 20% will. Housing prices are all over the map, but I wanted to do something simplistic like this. And now I have thought of something else... let's say I have 1-500 instead of 1-100. If I pick from 1-100 and pick 10 balls (10% of them), I'm guessing that the answer would differ from taking 50 balls out of 1-500 (10% of them as well)?
Numbers 1-100, calculating how many I must pick to get x% of reliability
CC BY-SA 2.5
null
2010-09-27T17:50:10.193
2010-09-29T02:36:36.603
2010-09-29T02:36:36.603
521
null
[ "sample-size", "experiment-design" ]
3122
2
null
3117
9
null
According to this [paper](http://rlai.cs.ualberta.ca/papers/barto03recent.pdf):

> In the current state-of-the-art, the designer of an RL system typically uses prior knowledge about the task to add a specific set of options to the set of primitive actions available to the agent.

Also see section 6.2, Learning Task Hierarchies, in the same paper. The first idea that comes to my mind is that if you don't know the task hierarchy, you should start with non-hierarchical reinforcement learning and try to discover the structure afterwards or while learning, i.e. you are trying to generalize your model. To me this task looks similar to the Bayesian model merging technique for HMMs (for example, see this [thesis](http://www.icsi.berkeley.edu/NTL/papers/stolcke-thesis.pdf)).
null
CC BY-SA 2.5
null
2010-09-27T18:05:25.237
2010-09-27T18:14:16.977
2010-09-27T18:14:16.977
1075
1075
null
3123
2
null
3121
3
null
I guess that you are looking for [experimental design for estimating the mean](http://books.google.gr/books?id=WK7l-csM0-gC&lpg=PA176&ots=8wTHuo4PrW&dq=%22experimental%20design%20for%20estimating%20the%20proportion%22&pg=PA175#v=onepage&q&f=false). I am not aware of something similar for the median though.
null
CC BY-SA 2.5
null
2010-09-27T18:34:36.773
2010-09-27T18:34:36.773
null
null
339
null
3124
2
null
3119
3
null
John Tukey described a simple method that works well in practice. Pick three representative points within the scatterplot: one near each end and one in the middle. Experiment with Box-Cox (power + log) transforms of either or both variables, applying them only to the coordinates of the three representatives. (Tukey did this by hand; a spreadsheet speeds up the calculation.) You want to make the slope between the left point and the middle point closely equal the slope between the middle and the right point. Applying the resulting transformation(s) will linearize the plot. (It basically has to: if you can get these points to line up, everything will line up.) We can hope for homoscedasticity, too. Your plot seems to exhibit a little more scatter on the right hand side. This suggests choosing Box-Cox parameters (powers) less than 1. For example, I can eyeball three points on your plot at (0.2, 0.05), (0.5, 0.2), and (0.7, 0.4). Using just powers (which are Box-Cox transformations up to an affine change) I get slopes of 0.70 and 0.71 when leaving the x-coordinate unchanged but taking the 0.3 power of the y-coordinate. The 0.3 power isn't much fun--one would prefer the 1/2 or maybe the 1/3 power--so I also tried taking the square root (1/2 power) of x and the logarithm (0 "power") of y. The slopes are now 5.33 and 5.35: exactly equal within the imprecision of my coordinate estimates. This suggests you should consider using $\left(\sqrt{x_1}, \log{x_2} \right)$ for your variables. If this makes the scatter around the resulting linear scatterplot a little uneven (heteroscedastic), go back and try less severe powers, such as 1 and 1/3. When you're done you can apply affine transformations (rescaling and recentering) to get the re-expressed variables in the range $[0, 1/2]$ if you really need that. Reference: Tukey, J, EDA. Addison-Wesley (1977).
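A small numerical sketch of the slope matching, using the three eyeballed points above (a Box-Cox power of 0 is taken to mean the logarithm):

```
pts <- data.frame(x = c(0.2, 0.5, 0.7), y = c(0.05, 0.2, 0.4))
bc  <- function(v, p) if (p == 0) log(v) else v^p

slope.ratio <- function(px, py) {
  tx <- bc(pts$x, px); ty <- bc(pts$y, py)
  s1 <- (ty[2] - ty[1])/(tx[2] - tx[1])   # left-to-middle slope
  s2 <- (ty[3] - ty[2])/(tx[3] - tx[2])   # middle-to-right slope
  s2/s1                                   # we want this close to 1
}

slope.ratio(1,   0.3)   # keep x, take y^0.3      -> ratio near 1
slope.ratio(0.5, 0)     # sqrt(x) with log(y)     -> ratio near 1
slope.ratio(1,   1)     # no transformation, for comparison
```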
null
CC BY-SA 2.5
null
2010-09-27T19:41:30.373
2010-09-27T19:46:54.843
2010-09-27T19:46:54.843
919
919
null
3125
2
null
3121
3
null
I did a little simulation in R, which resulted in 98. If you go to 5% of the mean then you're down to 93, and if you go to 10% then it's 76. Similar results are obtained for the median. Please read whuber's comments to my answer.

The code I used:

```
numbers <- 1:100
numTrials <- 1000

means <- matrix(nrow = numTrials, ncol = length(numbers))

for (n in 1:length(numbers)) {
  for (t in 1:numTrials) {
    vals <- sample(numbers, n)
    means[t, n] <- mean(vals)   # or median if you want the median
  }
}

realAvg  <- mean(numbers)
uplim    <- realAvg + 0.05*realAvg
downlim  <- realAvg - 0.05*realAvg

res <- unlist(apply(means, 2, function(x){
  length(which(x > downlim & x < uplim))/length(x)}))

plot(1:length(numbers), res*100, t="l",
     xlab="Numbers picked", ylab="% trials with mean within limits")
abline(h=95, col="red")
print(which(res >= 0.95))
```

![Numbers picked vs % good trials](https://i.stack.imgur.com/f6WSF.jpg)

PS: would anyone like to explain the "sawtooth" look of the graph?
null
CC BY-SA 2.5
null
2010-09-27T20:03:37.737
2010-09-28T21:11:22.577
2010-09-28T21:11:22.577
582
582
null
3126
2
null
3112
3
null
I bookmarked this thread from r-help a while back: - Summary, was Re: Confidence interval for relative risk and you might find the referenced PDF by Michael Dewey helpful: - confidence intervals for risk ratios If you can though, get a copy of the following book. I know it covers the unconditional likelihood and bootstrap methods for sure, and I suspect the small sample adjustment too (don't have a copy handy to check for the last): - Biostatistical methods: the assessment of relative risks
null
CC BY-SA 2.5
null
2010-09-27T20:12:24.250
2010-09-27T20:12:24.250
null
null
251
null
3127
2
null
3112
10
null
The three options that are proposed in `riskratio()` refer to an asymptotic or large sample approach, an approximation for small samples, and a resampling approach (asymptotic bootstrap, i.e. not percentile-based or bias-corrected). The first is described in Rothman's book (as referenced in the online help), chap. 14, pp. 241-244. The bootstrap approach is relatively trivial so I will skip it. The small sample approach is just an adjustment to the calculation of the estimated relative risk.

If we consider the following table of counts for subjects cross-classified according to their exposure and disease status,

```
          Exposed  Non-exposed  Total
Cases       a1         a0        m1
Non-case    b1         b0        m0
Total       n1         n0        N
```

the MLE of the risk ratio (RR), $\text{RR}=R_1/R_0$, is $\text{RR}=\frac{a_1/n_1}{a_0/n_0}$.

In the large sample approach, a score statistic (for testing $R_1=R_0$, or equivalently, $\text{RR}=1$) is used, $\chi_S=\frac{a_1-\tilde a_1}{V^{1/2}}$, where the numerator reflects the difference between the observed and expected counts for exposed cases and $V=(m_1n_1m_0n_0)/(n^2(n-1))$ is the variance of $a_1$. That is all that is needed for computing the $p$-value, because we know that $\chi_S$ follows a chi-square distribution. In fact, the three $p$-values (mid-$p$, Fisher exact test, and $\chi^2$-test) that are returned by `riskratio()` are computed in the `tab2by2.test()` function. For more information on the mid-$p$, you can refer to

> Berry and Armitage (1995). Mid-P confidence intervals: a brief review. The Statistician, 44(4), 417-423.

Now, for computing the $100(1-\alpha)$ CIs, this asymptotic approach yields an approximate SD estimate for $\ln(\text{RR})$ of $(\frac{1}{a_1}-\frac{1}{n_1}+\frac{1}{a_0}-\frac{1}{n_0})^{1/2}$, and the Wald limits are found to be $\exp\big(\ln(\text{RR})\pm Z_c\, \text{SD}(\ln(\text{RR}))\big)$, where $Z_c$ is the corresponding quantile of the standard normal distribution.

The small sample approach makes use of an adjusted RR estimator: we just replace the denominator $a_0/n_0$ by $(a_0+1)/(n_0+1)$.

As to how to decide whether we should rely on the large or small sample approach, it is mainly a matter of checking expected cell frequencies; for $\chi_S$ to be valid, $\tilde a_1$, $m_1-\tilde a_1$, $n_1-\tilde a_1$ and $m_0-n_1+\tilde a_1$ should be $> 5$.

Working through the example of Rothman (p. 243),

```
sel <- matrix(c(2,9,12,7), 2, 2)
riskratio(sel, rev="row")
```

which yields

```
$data
          Outcome
Predictor  Disease1 Disease2 Total
  Exposed2        9        7    16
  Exposed1        2       12    14
  Total          11       19    30

$measure
          risk ratio with 95% C.I.
Predictor  estimate    lower    upper
  Exposed2 1.000000       NA       NA
  Exposed1 1.959184 1.080254 3.553240

$p.value
          two-sided
Predictor  midp.exact fisher.exact chi.square
  Exposed2         NA           NA         NA
  Exposed1 0.02332167   0.02588706 0.01733469

$correction
[1] FALSE

attr(,"method")
[1] "Unconditional MLE & normal approximation (Wald) CI"
```

By hand, we would get

$\text{RR} = (12/14)/(7/16)=1.96$,
$\tilde a_1 = 19\times 14 / 30= 8.87$,
$V = (8.87\times 11\times 16)/ \big(30\times (30-1)\big)= 1.79$,
$\chi_S = (12-8.87)/\sqrt{1.79}= 2.34$,
$\text{SD}(\ln(\text{RR})) = \left( 1/12-1/14+1/7-1/16 \right)^{1/2}=0.304$,
$95\%\ \text{CI} = \exp\big(\ln(1.96)\pm 1.96\times0.304\big)=[1.08;3.55]\quad \text{(rounded)}$.

The following papers also address the construction of the test statistic for the RR or the OR:

- Miettinen and Nurminen (1985). Comparative analysis of two rates. Statistics in Medicine, 4: 213-226.
- Becker (1989). A comparison of maximum likelihood and Jewell's estimators of the odds ratio and relative risk in single 2 x 2 tables. Statistics in Medicine, 8(8): 987-996.
- Tian, Tang, Ng, and Chan (2008). Confidence intervals for the risk ratio under inverse sampling. Statistics in Medicine, 27(17): 3301-3324.
- Walter and Cook (1991). A comparison of several point estimators of the odds ratio in a single 2 x 2 contingency table. Biometrics, 47(3): 795-811.

Notes

- As far as I know, there is no reference to the relative risk in Selvin's book (also referenced in the online help).
- Alan Agresti also has some code for the relative risk.
null
CC BY-SA 2.5
null
2010-09-27T20:41:45.253
2010-09-27T22:05:45.907
2010-09-27T22:05:45.907
919
930
null
3128
2
null
3115
23
null
The variance of the cross-correlation coefficient under the null hypothesis of zero correlation is approx $1/n$ where $n$ is the length of the series. The coefficients are also asymptotically normal. So approximate critical values (at the 5% level) are $\pm 2/\sqrt{n}$. These critical values are plotted automatically in R using `ccf(x,y)`.
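In code, the bounds are just $\pm z_{0.975}/\sqrt{n}$, so you can also pull out the lags that exceed them:

```
set.seed(1)
n <- 200
x <- rnorm(n); y <- rnorm(n)            # two series with no true cross-correlation
r <- ccf(x, y, lag.max = 20, plot = FALSE)
crit <- qnorm(0.975)/sqrt(n)            # approximately 2/sqrt(n)
r$lag[abs(r$acf) > crit]                # lags whose sample cross-correlation exceeds the bound
```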
null
CC BY-SA 2.5
null
2010-09-27T22:23:31.053
2010-09-27T23:35:18.967
2010-09-27T23:35:18.967
159
159
null
3129
2
null
3113
6
null
It sounds dodgy to me. Nonparametric methods almost always involve more degrees of freedom than parametric methods and so need more data. In your particular example, the Mann-Whitney test has lower power than the t-test and so more data are required for the same specified power and size. A simple way to do sample size calculation for any method (non-parametric or otherwise) is to use a bootstrap approach.
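A sketch of such a simulation-based calculation for the Mann-Whitney test; with no pilot data available here, the skewed outcome is simply assumed (gamma with means 20 vs. 16), so treat the numbers as illustrative only:

```
power.mw <- function(n, nsim = 2000) {
  mean(replicate(nsim, {
    x <- rgamma(n, shape = 4, scale = 5)   # "control": right-skewed, mean 20
    y <- rgamma(n, shape = 4, scale = 4)   # "treated": mean 16
    wilcox.test(x, y)$p.value < 0.05
  }))
}
sapply(c(20, 40, 60, 80), power.mw)   # pick the smallest n reaching the target power
```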
null
CC BY-SA 2.5
null
2010-09-27T23:47:03.250
2010-09-27T23:47:03.250
null
null
159
null
3130
1
3131
null
9
158
Suppose we have a set A and a subset B. If we know |A|, then we can calculate |B| by finding the probability p that an element chosen uniformly at random from A belongs to B. Specifically |A|p=|B|. Suppose we generate n elements of A uniformly at random and use this data to estimate p (number of elements in B divided by n) and hence estimate |B|. > How reliable is this estimate? I.e. how can we compute the error? As a side question, is there a name for this technique? (it seems to be a mathematical version of the [mark-and-recapture](http://en.wikipedia.org/wiki/Mark_and_recapture) technique)
Error in estimating the size of a set?
CC BY-SA 2.5
null
2010-09-28T01:03:48.430
2010-09-28T01:39:19.667
null
null
386
[ "estimation" ]
3131
2
null
3130
8
null
You are estimating proportions. For concreteness, imagine that A is the population of voters and B is the set of voters who vote for a particular candidate. Thus, p would be the percentage of voters who would vote for that candidate. Let $\pi$ be the true percentage of people who would vote for the candidate; in other words, $\pi = \frac{|B|}{|A|}$. Then each one of your samples is a Bernoulli trial with probability $\pi$, or equivalently, you can imagine that each one of your samples is a poll of potential voters asking them whether they would vote for the candidate. Thus, the MLE of $\pi$ is given by $p = \frac{n_B}{n}$, where $n_B$ is the number of people who said they would vote for the candidate, i.e., the number of sampled elements that belong to the set B in your sample of size $n$. The standard error of your estimate is $\sqrt{\frac{\pi (1-\pi)}{n}}$, which can be approximated by plugging in the MLE for $\pi$, i.e., by $\sqrt{\frac{p (1-p)}{n}}$.
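Numerically (with a made-up sample), the estimate, its standard error, and an approximate 95% interval for the proportion - and hence for |B| after multiplying by |A| - look like this:

```
n  <- 1000; nB <- 137                  # hypothetical sample: 137 of 1000 draws fell in B
p  <- nB/n
se <- sqrt(p*(1 - p)/n)
c(estimate = p, lower = p - 1.96*se, upper = p + 1.96*se)
```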
null
CC BY-SA 2.5
null
2010-09-28T01:21:27.053
2010-09-28T01:39:19.667
2010-09-28T01:39:19.667
null
null
null
3132
2
null
3105
2
null
As you have alluded to, communications engineering (and signal processing in general) is filled with stochastic processes. See, for example:

- Chapter 9 and later of Communications Systems by Carlson and Crilly
- Chapter 13 of A Course in Digital Signal Processing by Porat

These are only two textbook examples; you will find many (many) more. It's fair to say that your mobile phone, the internet, and so on all rely extremely heavily on our understanding of stochastic processes.
null
CC BY-SA 2.5
null
2010-09-28T01:25:32.580
2010-09-28T01:25:32.580
null
null
352
null
3133
2
null
2513
2
null
The short answer to your question is that with a sample size of n = 1280, a formal statistical test for equal variances is not really necessary. With that sample size, the variance 'estimates' you get are certainty-equivalents [almost certainly very close to the true population variances], so you can just look at the two values and see if they look reasonably close or not.

A comment about your linear model: I assume there is a random error term in it as well. I think there should also be an intercept term, unless you are sure the regression line goes through the origin. Also, do you treat the participant effect as a random effect? It seems you should, if each contributes more than one pair of ratings to the data. But that might cause problems with the usual methods for testing for effects other than a regression effect [for which the usual models do not have such terms]. This is because the usual models assume the different before-after [ratings] pairs are independent, but the random effect creates correlations between the ratings pairs for the same individual. Such a model would have to be studied further to see if the hypothesis of no differential A'-rating effect still reduces to a test of equal variances.

Here is a bit more about the Pitman test [although what follows is probably unnecessary for you, in view of the 'short answer' above]: like tests for equality of variances for two [or more] independent samples, the Pitman test is apparently sensitive to non-normality of the responses. You don't mention the scale on which the rating variable is measured, but especially if it is a Likert-type scale, the assumption of normality for the responses may be questionable. [Wilcox](http://jeb.sagepub.com/content/15/3/237.abstract) did a study of the Pitman and other tests of variance equality for paired data and wrote:

> But, it is concluded that, in terms of controlling both Type I and Type II errors, a satisfactory solution does not yet exist.

[This was in 1990.] This doesn't necessarily mean that the Pitman test would be invalid for your data, but it does suggest that some caution in using it might be appropriate. If I remember correctly, the difficulty with homogeneity-of-variance tests arises when the data distribution [for rating, in your case] has a heavy tail [or tails]. Then the actual level of the test can exceed the nominal level [of .05, say], resulting in too often falsely rejecting the hypothesis of equal variances when, in fact, they are equal [better power, but also a worse false rejection rate]. It seems unlikely that responses on a Likert scale would have heavy tails [or too big a 4$^{th}$ moment, as compared with the variance], but it might not hurt to see whether the usual kurtosis measure [the 4$^{th}$ central moment divided by the variance$^2$] exceeds 3 [which is then an indication of heavy tails].

[Grambsch](http://biomet.oxfordjournals.org/content/81/2/359.abstract) in 1994 compared the Pitman test to alternative procedures, some of which are more complicated to carry out. Some numerical results in her paper suggest that if the distributions of rating are uniform or have the shape of a half-parabola over the range of responses [i.e. have light tails], the Pitman test is conservative: the actual type 1 error for a nominal value of $\alpha = .05$ can be anywhere from .005 to about .05, depending on the actual shape of the distributions and the sample size. [Her results deal with sample sizes of 50 or less; I suppose with your sample size of 1280, her conclusions would have been somewhat different.] If you are interested, she presents a modification of the Pitman test that keeps the actual significance level close to the nominal value of .05, for both light- and heavy-tailed distributions.
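As a practical aside, the Pitman (Pitman-Morgan) test for equal variances in paired data can be carried out as a test of zero correlation between the sums and the differences of the pairs, since cov(X+Y, X-Y) = var(X) - var(Y). A sketch on simulated ratings (the data and effect size are invented):

```
set.seed(1)
n <- 1280
before <- rnorm(n, 5, 1.0)
after  <- rnorm(n, 5, 1.1)                 # slightly larger spread after
cor.test(before + after, before - after)   # zero correlation <=> equal variances
```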
null
CC BY-SA 2.5
null
2010-09-28T02:27:31.433
2010-09-28T02:27:31.433
null
null
1112
null
3134
2
null
3121
1
null
A couple of points first:

1) The answer will be very large, so we are safe to assume the [Central Limit Theorem](http://en.wikipedia.org/wiki/Central_limit_theorem) is in operation, even if the distribution is all 1s and 100s (i.e. Bernoulli).

2) We can use normal deviates (z-scores) rather than t-tests because the required N will be very large - this is a simplification.

3) If we are sampling without replacement, the problem statement seems to have only one interpretation - the numbers 1,...,100 - a uniform discrete distribution.

whuber and nico have already provided answers, so I will deal with sampling with replacement and an undefined probability mass function (PMF).

Approach 1: Assume a [discrete uniform distribution](http://en.wikipedia.org/wiki/Uniform_distribution_%28discrete%29) and sampling with replacement. The mean is 50.5 and the SD is $\sqrt{(n^2-1)/12} \approx 29$. We want a maximum error of 2% of the mean, so delta = abs(est_mean - true_mean) = 0.02 × 50.5 = 1.01. Assuming a normal distribution, the standard error of the mean gives:

\begin{align} SEM &= SD/\sqrt{n}\\ \sqrt{n} &= SD/SEM\\ n &= (SD/SEM)^2\\ n &= \big(SD/(delta/1.96)\big)^2\\ n &= \big(29/(1.01/1.96)\big)^2\\ n &\approx 3167\\ \end{align}

Approach 2: Assume the worst case. I'm pretty sure this is a 50:50 split in the [pmf](http://en.wikipedia.org/wiki/Probability_mass_function) between 1 and 100. If so, our mean is 50.5 and our [SD](http://en.wikipedia.org/wiki/Standard_deviation#Discrete_random_variable) is $\sqrt{49.5^2}=49.5$.

\begin{align} SEM &= SD/\sqrt{n}\\ \sqrt{n} &= SD/SEM\\ n &= (SD/SEM)^2\\ n &= \big(SD/(delta/1.96)\big)^2\\ n &= \big(49.5/(1.01/1.96)\big)^2\\ n &\approx 9227\\ \end{align}

I'll let someone else do the sample sizes for medians.
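The two calculations done exactly in R (any small differences from the figures above come only from rounding the SD and the z value):

```
n.needed <- function(sd, delta, conf = 0.95) ceiling((sd * qnorm(1 - (1 - conf)/2) / delta)^2)

n.needed(sd = sqrt((100^2 - 1)/12), delta = 0.02 * 50.5)   # approach 1: uniform on 1..100
n.needed(sd = 49.5,                 delta = 0.02 * 50.5)   # approach 2: worst-case 50:50 split on 1 and 100
```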
null
CC BY-SA 2.5
null
2010-09-28T07:19:03.973
2010-09-29T02:35:18.350
2010-09-29T02:35:18.350
521
521
null
3135
2
null
3086
11
null
My best bet is that you are facing a large imbalance between your response categories for some of your items. If you assume that your binary responses reflect individual locations on an underlying latent (i.e., continuous) trait, then correlating the two variables is ok provided the cutoff is close to the mean of the bivariate density, as shown below (here cutoffs were set symmetrically at $(.5,.5)$, for a correlation of 0.5):

![alt text](https://i.stack.imgur.com/ceUmY.png)

In this case, the Pearson correlation will underestimate the true linear relationship between the two latent traits, especially in the mid-range of the correlation metric. On the other hand, when the cutoffs are clearly asymmetrical on both continuous variables, the tetrachoric correlation will generally overestimate the true relationship. The following picture illustrates the ideal case.

```
library(polycor)
library(mvtnorm)   # for rmvnorm(), used below

set.seed(101)
n <- 500
rho <- seq(0,1,length=500)
pc1 <- pc2 <- tc <- numeric(500)

for (i in 1:500) {
  data <- rmvnorm(n, c(0, 0), matrix(c(1, rho[i], rho[i], 1), 2, 2))
  x <- data[,1]; y <- data[,2]
  xb <- ifelse(x>=mean(x), 1, 0); yb <- ifelse(y>=mean(y), 1, 0)
  pc1[i] <- cor(x, y)
  pc2[i] <- cor(xb, yb)
  tc[i]  <- polychor(xb, yb)
}

plot(pc1, pc2, cex=.6, col="red",
     xlab="True linear relationship", ylab="Observed correlation")
lines(lowess(pc1, pc2), col="red", lwd=2)
abline(0, 1, col="lightgray")
points(pc1, tc, cex=.6, col="blue")
lines(lowess(pc1, tc), col="blue", lwd=2)
legend("topleft", c("Pearson (0/1)","Tetrachoric"), col=c(2,4), lty=1, bty="n")
```

![alt text](https://i.stack.imgur.com/FZ14p.png)

Now, you can play with the value of the cutoff, $\tau$, and see what happens when it is asymmetric and largely departs from the mean of the joint density of $x$ and $y$.

To complement @shabbychef's response, the phi coefficient is generally used with "truly" categorical variables (no hypothesis about a continuous generating process is made) and reduces to the Pearson correlation in this case ($\sqrt{\chi^2/n}$). The problem is then to factor out a correlation matrix constructed in such a way, because communalities become meaningless. To avoid this problem, we may rely on parametric item response modeling, e.g. a mixed-effects logistic model (in this case, no need to worry about the cutoff, since it is estimated), or a non-parametric model, like Mokken scaling. In the latter case, we only assume monotonicity on the latent trait, but no functional form relating one's location on the latent trait to the outcome (i.e., the probability of endorsing the item). However, in your case, it would be a pain and would not allow you to identify a structure in your correlation matrix. But it may be used afterwards.

Finally, John Uebersax provides an in-depth discussion on the use of tetrachoric correlation in relation to latent trait modeling, see [Introduction to the Tetrachoric and Polychoric Correlation Coefficients](http://www.john-uebersax.com/stat/tetra.htm). Also, Nunnally discussed long ago the advantages/disadvantages of relying on Pearson vs. tetrachoric correlation coefficients in factor analysis, see e.g. pp. 570-573 (3rd ed.).

References

- O'Connor, B. Cautions Regarding Item-Level Factor Analyses.
- Bernstein, I.H. and Teng, G. (1989). Factoring items and factoring scales are different: Spurious evidence for multidimensionality due to item categorization. Psychological Bulletin, 105, 467-477.
- Edwards, J.H. and Edwards, A.W.F. (1984). Approximating the tetrachoric correlation coefficient. Biometrics, 40, 563.
- Castellan, N.J. (1966). On the estimation of the tetrachoric correlation coefficient. Psychometrika, 31(1), 67-73.
- Fitzgerald, P., Knuiman, M.W., Divitini, M.L., and Bartholomew, H.C. (1999). Effect of dichotomising a continuous variable on the assessment of familial aggregation: an empirical study using body mass index data from the Busselton Health Study. J. Epidemiol. Biostat., 4(4), 321-327.
- Nunnally, J.C. and Bernstein, I.H. (1994). Psychometric Theory (Third ed.). McGraw-Hill.
null
CC BY-SA 3.0
null
2010-09-28T07:40:33.087
2012-06-27T19:27:33.020
2012-06-27T19:27:33.020
930
930
null
3136
1
3138
null
52
253356
I have a data set with following structure: ``` a word | number of occurrence of a word in a document | a document id ``` How can I perform a test for normal distribution in R? Probably it is an easy question but I am a R newbie.
How to perform a test using R to see if data follows normal distribution
CC BY-SA 2.5
null
2010-09-28T08:29:32.933
2018-03-13T23:12:56.417
null
null
1389
[ "r", "distributions", "normality-assumption" ]
3137
2
null
3136
16
null
Assuming your dataset is called `words` and has a `counts` column, you can plot a histogram to visualize the distribution:

```
hist(words$counts, 100, col="black")
```

where 100 is the number of bins. You can also do a normal Q-Q plot using

```
qqnorm(words$counts)
```

Finally, you can also use the Shapiro-Wilk test for normality:

```
shapiro.test(words$counts)
```

Although, look at this discussion: [Normality Testing: 'Essentially Useless?'](https://stats.stackexchange.com/questions/2492/normality-testing-essentially-useless)
null
CC BY-SA 2.5
null
2010-09-28T09:10:33.713
2010-09-28T09:10:33.713
2017-04-13T12:44:37.420
-1
582
null
3138
2
null
3136
50
null
If I understand your question correctly, then to test whether word occurrences in a set of documents follow a Normal distribution you can just use a Shapiro-Wilk test and some qqplots. For example,

```
## Generate two data sets
## First Normal, second from a t-distribution
words1 = rnorm(100); words2 = rt(100, df=3)

## Have a look at the densities
plot(density(words1)); plot(density(words2))

## Perform the test
shapiro.test(words1); shapiro.test(words2)

## Plot using a qqplot
qqnorm(words1); qqline(words1, col = 2)
qqnorm(words2); qqline(words2, col = 2)
```

The qqplot commands give:

![alt text](https://i.stack.imgur.com/hdgwQ.png)

You can see that the second data set is clearly not Normal by the heavy tails ([More Info](http://data.library.virginia.edu/understanding-q-q-plots/)). In the Shapiro-Wilk normality test, the p-value is large for the first data set (>.9) but very small for the second data set (<.01). This will lead you to reject the null hypothesis for the second.
null
CC BY-SA 3.0
null
2010-09-28T09:13:07.200
2018-03-13T23:12:56.417
2018-03-13T23:12:56.417
155460
8
null
3140
1
null
null
2
6583
I am currently working on matlab and I am new to it. I would like to know how do i average the results from the folds (or otherwise combined) to produce a single estimation. My dataset has 2 class labels +1 and -1. and k=5. I use the knnclassify() method in matlab in order to perform cross validation. Kindly help!
Knnclassify() matlab
CC BY-SA 2.5
null
2010-09-28T13:28:00.863
2011-01-04T19:11:00.380
2010-09-28T13:30:42.950
5
null
[ "cross-validation", "matlab" ]
3141
2
null
3136
5
null
In addition to the Shapiro-Wilk test of the stats package, the nortest package (available on CRAN) provides other normality tests.
null
CC BY-SA 2.5
null
2010-09-28T14:30:29.593
2010-09-28T14:30:29.593
null
null
1355
null
3142
2
null
3140
6
null
The knnclassify() function uses the k-nearest neighbours classification algorithm to perform classification. The crossvalind() function is a separate one to divide a data set into folds to perform cross-validation. Both are part of the Bioinformatics toolbox. I suggest that you read the documentation associated with them, so that you can be sure you are using them correctly.

In general, however, with a single n-fold cross-validation (CV) run, you don't really need to average results from folds. You just collect the results from the individual folds. For example, with 5-fold CV, your dataset is divided into 5 separate folds, used as follows:

- Train with Folds 1..4, Test on Fold 5
- Train with Folds 1..3+5, Test on Fold 4
- Train with Folds 1+2+4+5, Test on Fold 3

and so on. So all of the data is used for testing exactly once. You can just combine the test results in a single list, and count how many it got right and wrong to compute the overall accuracy.
null
CC BY-SA 2.5
null
2010-09-28T15:19:52.463
2010-09-28T15:19:52.463
null
null
1436
null
3143
1
3144
null
9
17467
I'm quite new to R and I have a following problem: I have a simple 2-factor linear model: ``` # factor1 has 8 categorical values, factor2 has 6 categories Rate ~ factor1 + factor2 model1 <- lm(Rate~factor1+factor2, data=myData) ``` And want to put constraints SUM of factor1 coefficients = 0, the same for factor2. None of the manuals gives any clue how to do this. I found a link to similar problem here but it is different and I couldnt figure out how to modify it...
Linear model with constraints
CC BY-SA 3.0
null
2010-09-28T17:03:17.213
2018-05-25T15:34:20.163
2011-08-09T22:34:02.213
null
1439
[ "r", "regression", "contrasts" ]
3144
2
null
3143
8
null
You can do this using contrasts: ``` options(contrasts=c('contr.sum', 'contr.sum')) ``` See `?contr.sum` for more information. UPDATE: A little googling reveals a page which might be a little clearer: - Samuel E. Buttrey, Setting and Keeping Contrasts
null
CC BY-SA 3.0
null
2010-09-28T17:47:40.680
2013-02-25T19:08:27.867
2013-02-25T19:08:27.867
919
251
null
3145
2
null
3143
22
null
There is a trick to be had here. For simplicity, suppose you are trying to build a model of the form $$y = b_1 x_1 + b_2 x_2 + b_3 x_3,$$ subject to $b_1 + b_2 + b_3 = 0$. Simply re-express $b_3$ as $b_3 = - b_1 - b_2$, which is to say you are trying to build a model of the form $$y = b_1 x_1 + b_2 x_2 - (b_1 + b_2) x_3 = b_1 (x_1 - x_3) + b_2 (x_2 - x_3).$$ So create new variables $\tilde{x}_1 = x_1 - x_3,$ and $\tilde{x}_2 = x_2 - x_3$, and perform your regression using these transformed variables as the independent variables. You should be able to apply this trick independently to the catgories of `factor1` and `factor2`. (I have assumed the data is given as 0/1 indicators of membership in the individual categories.)
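A quick sketch of the trick with lm() on made-up indicator data (the variable names and true coefficients are arbitrary):

```
set.seed(1)
g  <- sample(1:3, 300, replace = TRUE)
x1 <- as.numeric(g == 1); x2 <- as.numeric(g == 2); x3 <- as.numeric(g == 3)
y  <- 2*x1 - 0.5*x2 - 1.5*x3 + rnorm(300)   # true coefficients sum to zero

d   <- data.frame(y = y, x1.t = x1 - x3, x2.t = x2 - x3)
fit <- lm(y ~ x1.t + x2.t - 1, data = d)    # no intercept, to mirror the model above
b   <- coef(fit)
c(b, b3 = -sum(b))                          # b3 is recovered from the constraint
```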
null
CC BY-SA 2.5
null
2010-09-28T17:48:38.293
2010-09-28T17:48:38.293
null
null
795
null
3147
1
null
null
6
105
I have developed two differing methods in MATLAB which aim to analyse a pop song and then automatically create a 30 second audio thumbnail (a preview clip) containing part of the chorus section. Both methods have varying results:

- The first method can create a thumbnail for each track, managing to find a chorus section in 40 out of 50 tested songs
- The second method only managed to work on 30 out of the 50 songs, and it found the chorus section 21 times out of the 30.

Obviously I know which method is superior, but I need to describe and explain the results in a report which requires the demonstration of proper statistical testing. Other academic papers have previously used an f-test to do this, but because their methods are vastly superior, their aims usually involve the detection of chorus onset times with 100% accuracy. My aim is more relaxed, as I am just looking for the generated thumbnails to contain any part of the chorus, regardless of onset. Can anyone suggest some objective tests that I could possibly explore with regards to my project? This is my first time conducting an investigation like this so my experience/knowledge is incredibly low. Thank you!

edit: Can I possibly conduct an f-test?

Recall = Number of thumbnails produced / Total number of tested songs
Precision = Number of choruses detected / Total number of thumbnails

Would this work or am I completely wrong?
How should I objectively test my program results?
CC BY-SA 2.5
null
2010-09-28T19:02:38.067
2010-09-30T19:31:33.913
2010-09-30T19:31:33.913
1441
1441
[ "model-comparison" ]
3148
2
null
3147
1
null
You can count the recognition of a chorus track as a 'Success' and the lack of recognition as a 'Failure'. Thus, you have the following data:

Method 1: Proportion of successes (say, $p_1$) = $\frac{40}{50}$

Method 2: Proportion of successes (say, $p_2$) = $\frac{21}{50}$

It seems that method 2 fails completely for 20 music tracks, and hence I am assuming that they should be counted as failures.

Let $\pi_1$ and $\pi_2$ be the true proportions of successes for the two methods. You wish to assess whether method 1 is superior to method 2. Thus, you would take as the

Null hypothesis: $\pi_1 = \pi_2$

Your alternative hypothesis is $\pi_1 \ne \pi_2$. (Note: your alternative hypothesis could also be framed as the one-sided $\pi_1 > \pi_2$, which would impact how you would do the testing, but that is a nuance that you probably need not worry about.)

You then attempt to see to what extent the data are consistent with the null hypothesis. The way to test the null hypothesis is to use a two-proportion z-test (see the 7th row of [Common Test Statistics](http://en.wikipedia.org/wiki/Statistical_hypothesis_testing#Common_test_statistics) on the wiki, titled "Two-proportion z-test, pooled for d0 = 0"; the symbols used in the table are explained at the bottom of the table). If the calculated z value as per the formula is greater than 1.96 or less than -1.96, then you would reject the null in favor of the alternative hypothesis.
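In R, prop.test() carries out (the chi-squared form of) this pooled two-proportion comparison directly, which saves the hand calculation:

```
prop.test(x = c(40, 21), n = c(50, 50), correct = FALSE)  # the pooled z-test, squared
prop.test(x = c(40, 21), n = c(50, 50))                   # with continuity correction
```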
null
CC BY-SA 2.5
null
2010-09-28T19:29:16.157
2010-09-28T19:29:16.157
null
null
null
null
3149
2
null
3048
1
null
If you do PCA, my advice would be to briefly screen more components than just the first two. Once you have the `prcomp` object `pc` (see suncoolsu's post), you can call `plot(pc)` to see the amount of variance attributed to the different principal components. You can also easily visualise several of them (usually the first three or four) with `pairs(pc$x[,1:3])` or, using `lattice`, `splom(pc$x[,1:3])`.
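A self-contained sketch on a built-in dataset (any data frame of numeric variables would do in place of `USArrests`):

```
library(lattice)

# PCA on an example dataset, variables scaled to unit variance
pc <- prcomp(USArrests, scale. = TRUE)

plot(pc)               # variance attributed to each component (scree plot)
summary(pc)            # proportion of variance explained
pairs(pc$x[, 1:3])     # scatterplot matrix of the first three PCs
splom(pc$x[, 1:3])     # same idea, using lattice
```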
null
CC BY-SA 2.5
null
2010-09-28T20:24:47.093
2010-09-28T20:24:47.093
null
null
1355
null
3150
2
null
3121
0
null
```
# For each sample size n from 1 to 500, draw numTrials samples (without
# replacement) from 1:500, record the sample median, then plot the
# percentage of trials whose median falls within 5% of the true median.
numbers <- 1:500
numTrials <- 1000

means <- matrix(nrow = numTrials, ncol = length(numbers))
for (n in 1:length(numbers)) {
  for (t in 1:numTrials) {
    vals <- sample(numbers, n)
    means[t, n] <- median(vals)
  }
}

# True median and a +/- 5% band around it
realAvg <- median(numbers)
uplim <- realAvg + 0.05*realAvg
downlim <- realAvg - 0.05*realAvg

# Proportion of trials at each sample size whose median lies inside the band
res <- unlist(apply(means, 2, function(x){
  length(which(x > downlim & x < uplim))/length(x)
}))

plot(1:length(numbers), res*100, type="l", xlab="Numbers picked",
     ylab="% trials with median within limits")
abline(h=95, col="red")

# Sample sizes for which at least 95% of trials fall within the band
print(which(res >= 0.95))
```
null
CC BY-SA 2.5
null
2010-09-28T21:04:50.340
2010-09-28T21:11:05.323
2010-09-28T21:11:05.323
null
null
null
3154
2
null
3136
14
null
No test will show you that your data have a normal distribution - a test can only show you when the data are sufficiently inconsistent with a normal that you would reject the null. But counts are not normal in any case: they are non-negative integers - what's the probability that an observation from a normal distribution will take a value that isn't an integer? (... that's an event of probability 1). Why would you test for normality in this case? It's obviously untrue. [In some cases it may not necessarily matter that you can tell your data aren't actually normal. Real data are never (or almost never) going to be actually drawn from a normal distribution.] If you really need to do a test, the Shapiro-Wilk test (`?shapiro.test`) is a good general test of normality, and one that's widely used.
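For illustration, a small sketch of what running such a test on count-like data looks like (the Poisson counts here are simulated, purely for the example):

```
set.seed(1)

counts <- rpois(200, lambda = 3)   # simulated counts, clearly not normal
shapiro.test(counts)               # small p-value: normality rejected

x <- rnorm(200)                    # for contrast, genuinely normal data
shapiro.test(x)                    # typically a large p-value: no evidence against normality
```

A quantile-quantile plot (`qqnorm(counts); qqline(counts)`) is often more informative than the test itself, since it shows how the data deviate from normality.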
null
CC BY-SA 3.0
null
2010-09-28T22:12:45.667
2017-04-04T01:47:44.910
2017-04-04T01:47:44.910
805
805
null
3156
1
3157
null
2
407
My mean is `3.454992884900722e+008`, and my confidence intervals from three distributions (when compared to the given mean) are:

CI for dist. 1: 1.0e+008 * [4.084733001497999 4.097677503988565]

CI for dist. 2: 1.0e+008 * [5.424396063219890 5.586301025525149]

CI for dist. 3: 1.0e+008 * [2.429145282593182 2.838897116739112]

EDIT: More information follows:

For Dist. 1: `p = 8.094614835195452e-130 and h = 1`

For Dist. 2: `p = 2.824626709966993e-072 and h = 1`

For Dist. 3: `p = 3.054667629953656e-012 and h = 1`

Can you explain which of the three distributions should be picked when compared to the mean, on the basis of the CI, and why? I hope my question is clear. Feel free to edit/suggest anything regarding the question. Thanks!
Question on confidence interval
CC BY-SA 2.5
null
2010-09-28T23:04:13.890
2010-09-30T10:58:23.187
2010-09-28T23:25:18.567
null
null
[ "confidence-interval", "multiple-comparisons", "mean" ]
3157
2
null
3156
1
null
I am afraid there is something you are missing. The documentation says that p is the p-value - the probability, if the null hypothesis holds, of observing a result at least as extreme as the one you observed - and h = 1 means you are rejecting the null in favor of the alternative hypothesis.

Here is a good quote from Bert Gunter, borrowed from the R mailing list:

> If you are willing to trust p-values which are as low as yours, I have a bridge to sell.

Also, given the magnitude of your mean, it is better that you look at the data, see what distribution your data follow, and check whether the assumptions of a one-sample t-test hold. I have a feeling that you probably can't use the t-test here; I say this because your p-values are unbelievably small. (I understand your means are large, etc., but the t-test still can't be applied without verifying its assumptions.)

Thanks, S.
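If your analysis is in R, a minimal sketch of the kind of check being suggested (the vector `x` and the hypothesised mean `mu0` are hypothetical placeholders - substitute your own data and value):

```
# Look at the shape of the data before trusting a one-sample t-test
hist(x, breaks = 30)
qqnorm(x); qqline(x)     # heavy skew or long tails show up as curvature here

# The test itself, against a hypothesised mean mu0
t.test(x, mu = mu0)
```

With large samples the t-test is fairly robust to non-normality, but extremely small p-values usually say more about a huge sample size or a badly mismatched null value than about a practically meaningful difference.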
null
CC BY-SA 2.5
null
2010-09-29T00:45:26.520
2010-09-29T00:45:26.520
null
null
1307
null
3158
1
3159
null
23
15385
Does this type of chart have a name? More importantly, is there any visualization library I can use to produce it? ![alt text](https://i.stack.imgur.com/jDQjY.png) [http://www.nytimes.com/interactive/2007/12/15/us/politics/DEBATE.html](http://www.nytimes.com/interactive/2007/12/15/us/politics/DEBATE.html)
What is this type of circular-link visualization called?
CC BY-SA 2.5
null
2010-09-29T05:22:35.593
2022-04-27T13:37:30.303
2010-10-01T09:44:27.850
8
1106
[ "data-visualization" ]
3159
2
null
3158
13
null
Take a look at [Circos](http://mkweb.bcgsc.ca/circos/?):

> Circos is a software package for visualizing data and information. It visualizes data in a circular layout - this makes Circos ideal for exploring relationships between objects or positions.

The FlowingData blog also had a post on this that you might find interesting:

- Visual Representation of Tabular Information – How to Fix the Uncommunicative Table
null
CC BY-SA 2.5
null
2010-09-29T05:45:42.893
2010-09-29T05:45:42.893
null
null
251
null
3165
1
3170
null
3
5423
Let's say we created a plot with labels:

```
plot(1, xlab = "text", ylab = "text times three")  # text too small
```

In this plot there is still a lot of free space in which to enlarge the text. We could do so by using cex.lab, like this:

```
par(cex.lab = 2)
plot(1, xlab = "text", ylab = "text times three")
```

Questions:

- Is there a way to know the largest cex.lab possible without the text running over the borders?
- Is it possible to do that separately for each component of the axis (so as to fit cex.lab differently for the xlab vs. the ylab)?
How to calculate cex.axis automatically so as to fit the maximal width possible (in R)?
CC BY-SA 2.5
null
2010-09-29T11:47:47.373
2010-09-30T21:28:23.743
2010-09-30T21:28:23.743
930
253
[ "r", "data-visualization" ]
3166
2
null
3165
2
null
I can't think of an answer to Q1, but Q2 can be done via separate calls to `title()`. Here's an example:

```
x <- 1:10
y <- runif(10)
plot(y ~ x, ann = FALSE)
title(ylab = "my Y axis label", cex.lab = 2)
title(xlab = "my X axis label", cex.lab = 3)
```

But I don't see the value in doing this: won't the user be drawn to the fact that the labels are drawn at different sizes? If they are (I would be), then the user will not be focussing on what the plot is designed to show.
null
CC BY-SA 2.5
null
2010-09-29T12:42:50.793
2010-09-29T12:42:50.793
null
null
1390
null
3167
1
3168
null
5
492
I have to incorporate a number of line plots in a LaTeX document and I want very high quality. Are there any websites that show some templates or default plot styles that I can use? I am not good at designing this kind of plot style myself, so I am looking for some good basic settings for the tool I end up using. About the tools: I think I would like to use TikZ or gnuplot, as I want to build the TeX document and the plots in a single run. My data is stored in text files anyway.
Where can I find good resources for making publication-quality line plots?
CC BY-SA 2.5
null
2010-09-29T12:56:27.610
2010-09-29T20:33:02.683
null
null
190
[ "data-visualization" ]
3168
2
null
3167
6
null
See [this related question](https://stats.stackexchange.com/questions/257/what-is-the-easiest-way-to-create-publication-quality-plots-under-linux/). All the same advice applies in your case. Just to highlight a few points:

- Using R is a good way to go, especially with the ggplot2 package. This is both flexible and produces very high quality output. There are plenty of examples on the ggplot2 website and across the web (including on this website).
- To improve the quality of your plots, you should consider using a different device driver than the default and choosing a high-quality output format (e.g. SVG). The Cairo package is one good option: you simply open a Cairo device before plotting and the output is redirected to it. Cairo can be used with any plotting software, not just with R.
- In terms of putting your plots together in a LaTeX publication, that's the role that Sweave plays. It makes combining plots with your paper a trivial operation (and has the added benefit of leaving you with something that is reproducible and understandable). Use cacheSweave if you have long-running computations.

If you want more specific help on a particular plot, I advise giving more detail.

Edit: If you want a specific example, it is best that you provide more specifics. For more examples of using ggplot, you can also refer to [the LearnR blog](http://learnr.wordpress.com/). Here's an example combining ggplot with Cairo for high quality output:

```
library(Cairo)
library(ggplot2)

CairoPDF("plot.pdf", 6, 6, bg="transparent")
qplot(factor(cyl), wt, geom=c("boxplot", "jitter"), color=am, data=mtcars)
dev.off()
```

You can look at the documentation for these packages by using the R help functions.
null
CC BY-SA 2.5
null
2010-09-29T13:40:40.327
2010-09-29T15:23:59.080
2017-04-13T12:44:52.277
-1
5
null
3169
2
null
3167
2
null
You are probably aware of [TeXample.net](http://www.texample.net/tikz/). The TeX package [pgfplots](http://www.ctan.org/tex-archive/help/Catalogue/entries/pgfplots.html) might also be of interest to you: it provides a pretty complete manual and allows you to invoke gnuplot directly. Personally, I use [Sweave](http://www.stat.uni-muenchen.de/~leisch/Sweave/) to embed [R](http://www.r-project.org/) code, plots and tables (see package [xtable](http://cran.r-project.org/web/packages/xtable/index.html)) into LaTeX documents. It is very handy, especially if you need to run additional processing or statistical tests on your input data. There are many resources about R and its plotting capabilities: the [R Graph gallery](http://addictedtor.free.fr/graphiques/), the [ggplot2](http://had.co.nz/ggplot2/) website and book, the [R Graphics](http://www.stat.auckland.ac.nz/~paul/RGraphics/rgraphics.html) and [Lattice](http://lmdvr.r-forge.r-project.org/figures/figures.html) books, as well as many other general tutorials on R that demonstrate how to plot your data.
null
CC BY-SA 2.5
null
2010-09-29T13:50:25.693
2010-09-29T20:33:02.683
2010-09-29T20:33:02.683
930
1355
null
3170
2
null
3165
4
null
The following uses calls to par(), strheight() and strwidth() to fit the labels within the borders of the plot area and the edges of the margin. The two constraints don't rescale together in the same way, so this gives you a framework for redrawing your graph if you change the overall dimensions. If you want something different, this will be good to build on. If you want to take up more of the margin area, you need to look at the parameters for the number of lines in a margin, lineheight, etc. par() is what you need for what you want, but your question is vague, so this is one interpretation of it.

```
sx <- 'my x-label'
sy <- 'my y-label'
plot(rnorm(4), ann = FALSE)

# work out the maximum cex allowed by the string widths
maxcexXh <- par('pin')[1] / strwidth(sx, 'inches')
maxcexYh <- par('pin')[2] / strwidth(sy, 'inches')

# work out the maximum cex allowed by the string heights
maxcexXv <- (par('mai')[1] / par('mar')[1]) / strheight(sx, 'inches')
maxcexYv <- (par('mai')[2] / par('mar')[2]) / strheight(sy, 'inches')

maxcexX <- min(maxcexXh, maxcexXv)
maxcexY <- min(maxcexYh, maxcexYv)

title(xlab = sx, cex.lab = maxcexX)
title(ylab = sy, cex.lab = maxcexY)
```

This is likely staying within the height of the string on your computer; if you made the text much longer, it would stay within the width. My guess is that after you've done this you'll realize it's not a good idea.
null
CC BY-SA 2.5
null
2010-09-29T13:55:03.430
2010-09-29T19:16:43.450
2010-09-29T19:16:43.450
601
601
null
3171
1
null
null
3
2194
We have data from random polling within a group of people, sampled over a long period of time. As the pool is relatively small and anonymity is crucial there is a high likelihood that we have duplicate samples (though we can't tell for certain because we'd expect responses to vary over time). Does this somehow invalidate the data, or mean that certain operations on it will not be meaningful? Or is it OK to proceed as normal so long as I state this likelihood? --- I'm happy to provide more detail if needed - just say what would help in a comment. Also I'm making this CW. Please feel free to edit the question if there are other relevant implications of duplicate data that would be worth specifying.
Does a high likelihood of duplicate samples invalidate my data or certain operations on it?
CC BY-SA 2.5
null
2010-09-29T15:55:04.150
2015-12-02T03:14:19.143
null
null
1343
[ "statistical-significance", "polling" ]
3173
1
3174
null
7
3340
There are occasions when I would like to fit log-linear models in which the independence assumption between observations is violated; typically, I have multiple observations from each subject. Is there a mixed-effects extension of log-linear models, as there is for linear and generalized linear models? I'm interested in whether such a thing exists in principle, but also in whether it exists concretely in an R library.
Mixed effects log-linear models
CC BY-SA 2.5
null
2010-09-29T18:22:58.343
2020-11-05T10:11:48.823
null
null
287
[ "categorical-data", "mixed-model", "log-linear" ]
3174
2
null
3173
11
null
Log-linear (Poisson) models are part of the generalized linear model family. Look at the [lme4](http://cran.r-project.org/web/packages/lme4/index.html) package, which allows for mixed-effects modeling with `family=poisson()`. Here is an example of use:

```
> data(homerun, package="Zelig")
> with(homerun, table(homeruns, month))
        month
homeruns April August July June March May September
       0    36     36   40   26     1  33        30
       1    13     17   11   21     1  14        14
       2     0      3    3    3     0   3         6
       3     1      0    0    1     0   1         0
> library(lme4)
> mod <- glmer(homeruns ~ player + (player - 1 | month), data=homerun, family=poisson())
> summary(mod)
Generalized linear mixed model fit by the Laplace approximation
Formula: homeruns ~ player + (player - 1 | month)
   Data: homerun
   AIC   BIC logLik deviance
 305.8 324.6 -147.9    295.8
Random effects:
 Groups Name          Variance   Std.Dev.   Corr
 month  playerMcGwire 7.9688e-10 2.8229e-05
        playerSosa    6.6633e-02 2.5813e-01 0.000
Number of obs: 314, groups: month, 7

Fixed effects:
            Estimate Std. Error z value Pr(>|z|)
(Intercept)  -0.7949     0.1195  -6.651 2.91e-11 ***
playerSosa   -0.1252     0.2020  -0.620    0.535
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Correlation of Fixed Effects:
           (Intr)
playerSosa -0.592
```

The scale parameter (useful to check for possible overdispersion) is available through the following slot:

```
summary(mod)@sigma
```

(The equivalent for a usual GLM would be `summary(glm(...))$dispersion`.) More information about mixed-effects modeling as implemented in `lme4` can be found on R-forge, in the [Mixed-effects models project](http://lme4.R-forge.R-project.org/), or in the [GLMM FAQ](https://bbolker.github.io/mixedmodels-misc/glmmFAQ.html), as suggested by @fabians. The [gamm4](http://cran.r-project.org/web/packages/gamm4/index.html) package may also be of interest, as it allows one to fit generalized additive mixed models.
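As a side note, a rough overdispersion check that is often used with Poisson mixed models (a sketch - the exact accessors depend on your lme4 version; these work for the current merMod class):

```
# Ratio of the Pearson chi-square statistic to the residual degrees of freedom;
# values much larger than 1 suggest overdispersion
rp <- residuals(mod, type = "pearson")
sum(rp^2) / df.residual(mod)
```

If you do see strong overdispersion, a negative binomial model (`glmer.nb` in recent lme4 versions) or an observation-level random effect are common remedies.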
null
CC BY-SA 4.0
null
2010-09-29T18:26:28.917
2020-11-05T10:11:48.823
2020-11-05T10:11:48.823
930
930
null