Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2941 | 2 | null | 2938 | 1 | null | I would say that the more or less canonical way to compare two distributions would be a chi-squared test. The statistic is not normalized, though, and it depends on how you choose the bins. The last point can of course be seen as a feature, not a bug: choosing bins appropriately allows you to look more closely for similarity in the tails than in the middle of the distributions, for instance.
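For concreteness, a rough R sketch of this idea (the simulated samples and the decile-based bins are placeholders; `chisq.test` here acts as a test of homogeneity on the binned counts):
```
set.seed(1)
x <- rnorm(200)                  # first sample
y <- rnorm(200, mean = 0.2)      # second sample
breaks <- quantile(c(x, y), probs = seq(0, 1, by = 0.1))   # common bins
tab <- rbind(table(cut(x, breaks, include.lowest = TRUE)),
             table(cut(y, breaks, include.lowest = TRUE)))
chisq.test(tab)                  # compares the two binned distributions
```
Choosing wider bins in the middle and narrower bins in the tails would implement the "feature" mentioned above.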
| null | CC BY-SA 2.5 | null | 2010-09-21T07:46:52.397 | 2010-09-21T07:46:52.397 | null | null | 1352 | null |
2942 | 2 | null | 2917 | 7 | null | I can only agree with John. Furthermore, perhaps [this](http://eskes.psychiatry.dal.ca/Files/2003_-_Saville_-_Basic_statistics_and_the_inconsistency_of_m.pdf) paper by David Saville helps you with some formulas to recalculate variability measures from LSDs et al.:
Saville D.J. (2003). [Basic statistics and the inconsistency of multiple comparison procedures.](http://eskes.psychiatry.dal.ca/Files/2003_-_Saville_-_Basic_statistics_and_the_inconsistency_of_m.pdf) Canadian Journal of Experimental Psychology, 57, 167–175
UPDATE:
If you are looking for more formulas to convert between various effect sizes, books on meta-analysis should provide a lot of these. However, I am not an expert in this area and can't recommend one.
But I remember that the book by Rosenthal and Rosnow once helped with some formulas:
[Essentials of Behavioral Research: Methods and Data Analysis](http://rads.stackoverflow.com/amzn/click/0073531960)
Furthermore, I have heard a lot of good things about the formulas in this book by Rosenthal, Rosnow & Rubin (although I have never used it):
[Contrasts and Effect Sizes in Behavioral Research: A Correlational Approach](http://rads.stackoverflow.com/amzn/click/0521659809) (You should definitely give it a try if a nearby library has it).
If this is not enough, perhaps ask another question on literature for converting effect sizes for meta-analyses. Perhaps someone more into meta-analysis has more grounded recommendations.
| null | CC BY-SA 2.5 | null | 2010-09-21T08:55:35.920 | 2010-09-23T16:56:50.490 | 2010-09-23T16:56:50.490 | 442 | 442 | null |
2944 | 2 | null | 2860 | 3 | null | Following the comment exchange with Ebony (see Whuber's answer), I gather that in Ebony's application, $p$ is much larger than $n$, which is itself very large. In this case the complexity of computing the eigendecomposition is on the order of $O(n^3)$. Two solutions spring to mind:
- Partial decomposition: assuming $p$ is very large, it could be the case that the full eigendecomposition is not needed. If only the $k$ largest eigenvalues (and corresponding eigenvectors) are needed, presumably they could be obtained with complexity near $O(nk^2)$. Would such an algorithm be a solution to your problem?
- Full decomposition: in this case it may be better to draw $J$ random sub-samples of your observations, of size $n_0$ suitably smaller than $n$, and compute $J$ PCA decompositions. That would in turn give you $J$ values of each eigenvalue/eigenvector, which could be used to establish the sampling distribution of their population values (and their means would be a good estimator of the population eigenvalues/eigenvectors). Given the $n^3$ complexity, this could be made much faster (by appropriately choosing $n_0$). A second benefit is that this procedure can be run in parallel across $m$ cores/computers, yielding an overall complexity of $O(Jm^{-1}n_0^3)$; a rough sketch of this subsampling idea follows below.
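A rough R sketch of the subsampling idea (all dimensions below are made-up placeholders, and eigenvector sign alignment across subsamples is ignored for brevity):
```
set.seed(1)
n <- 5000; p <- 50                    # stand-ins for "very large" n and p
X <- matrix(rnorm(n * p), n, p)
J <- 20; n0 <- 500; k <- 3            # subsamples, subsample size, components kept
eig <- replicate(J, {
  idx <- sample(n, n0)
  prcomp(X[idx, ])$sdev[1:k]^2        # leading eigenvalues of the subsample covariance
})
rowMeans(eig)                         # averaged estimates of the k largest eigenvalues
```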
| null | CC BY-SA 2.5 | null | 2010-09-21T11:45:56.080 | 2010-09-22T00:35:28.130 | 2010-09-22T00:35:28.130 | 603 | 603 | null |
2945 | 2 | null | 2828 | 1 | null | From Yaroslav's comments to Henrik's answer:
but cross-validation seems to just postpone the task of assessing complexity. If you use data to pick your parameters and your model as in cross-validation, the relevant question becomes how to estimate the amount of data needed for this "meta"-fitter to perform well
I wonder whether this is not in itself informative. You perform several $k$-fold CVs with varying $k$ (say along a grid) and see which model performs better as $k$ increases. More specifically, I wonder whether any differentiation between the two models in their $CV(k)$ performance as a function of $k$ can be taken as evidence that this model (the one whose relative performance decreases less when $k$ increases) is the less complex one.
You could even give a 'significance' flavor to this, since the result of the procedure is directly in the units of the difference in out-of-sample forecasting error.
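A hedged R sketch of what such a comparison might look like, using `cv.glm` from the boot package; the simulated data, the two candidate models, and the grid of $k$ values are all illustrative assumptions:
```
library(boot)
set.seed(1)
dat <- data.frame(x = runif(100))
dat$y <- sin(2 * pi * dat$x) + rnorm(100, sd = 0.3)
m_simple  <- glm(y ~ x, data = dat)              # low-complexity candidate
m_complex <- glm(y ~ poly(x, 5), data = dat)     # higher-complexity candidate
ks <- c(2, 5, 10, 20)
sapply(ks, function(k) c(simple  = cv.glm(dat, m_simple,  K = k)$delta[1],
                         complex = cv.glm(dat, m_complex, K = k)$delta[1]))
```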
| null | CC BY-SA 2.5 | null | 2010-09-21T12:08:33.850 | 2010-09-21T12:35:49.923 | 2010-09-21T12:35:49.923 | 603 | 603 | null |
2946 | 2 | null | 2932 | 4 | null | Interesting reference. Its value for me lies in questioning the ability of measure theoretic probability to capture an "intuition" about probability (whatever that might mean) and going on to propose an intriguing distinction; namely, between a set of measure zero having a measure zero neighborhood and a set of measure zero all of whose proper neighborhoods have positive measure.
It is not apparent that separable metric spaces are the "right" way to capture this idea, though, as the comment by Matt Heath points out. It sounds like we only need a predefined subcollection of measurable sets (not necessarily even satisfying the axioms of a topology). Such a collection is conveniently obtained in a separable metric space but there are other ways to create such collections, too. Thus it appears that the idea presented here illuminates the connection between abstract measure theory and using random variables in models, but the use of metric spaces may be a bit of a red herring.
| null | CC BY-SA 3.0 | null | 2010-09-21T14:28:05.750 | 2012-01-08T17:18:07.957 | 2012-01-08T17:18:07.957 | 919 | 919 | null |
2947 | 2 | null | 2938 | 2 | null | I [recently](http://www.thinkingaboutthinking.org/wp-content/uploads/2010/05/Lawrence_BRM_in_press.pdf) used the correlation between the empirical CDF and the fitted CDF to quantify goodness-of-fit, and I wonder if this approach might also be useful in the current case, which as I understand it involves comparing two empirical data sets. Interpolation might be necessary if there are different numbers of observations between the sets.
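A minimal R sketch of that idea for two empirical samples; evaluating both ECDFs on a common grid plays the role of the interpolation mentioned above (the simulated samples are placeholders):
```
set.seed(1)
a <- rnorm(150)                      # first data set
b <- rnorm(120, sd = 1.2)            # second data set, different size
grid <- sort(unique(c(a, b)))        # common evaluation grid
cor(ecdf(a)(grid), ecdf(b)(grid))    # values near 1 suggest similar distributions
```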
| null | CC BY-SA 2.5 | null | 2010-09-21T15:15:12.333 | 2010-09-21T15:15:12.333 | null | null | 364 | null |
2948 | 1 | 2951 | null | 47 | 28131 | I'm wondering if someone could suggest good starting points when it comes to performing community detection/graph partitioning/clustering on a graph that has weighted, undirected edges. The graph in question has approximately 3 million edges and each edge expresses the degree of similarity between the two vertices it connects. In particular, in this dataset vertices are individuals and edges are a measure of the similarity of their observed behavior.
In the past I followed a suggestion I got here on stats.stackexchange.com and used igraph's implementation of Newman's modularity clustering, and was satisfied with the results, but that was on an unweighted dataset.
Are there any specific algorithms I should be looking at?
| How to do community detection in a weighted social network/graph? | CC BY-SA 3.0 | null | 2010-09-21T15:50:45.697 | 2018-01-15T12:14:55.440 | 2016-05-18T16:29:25.253 | 7290 | 1007 | [
"clustering",
"data-visualization",
"networks",
"partitioning",
"modularity"
] |
2950 | 1 | null | null | 3 | 3178 | I am trying to determine significant differences between groups of data using a k-nearest neighbor randomization test in R. This test basically looks for similarity amongst homogeneous groups and separates them out using random clustering. In the literature, this test is called a "K-nearest neighbor (kNN) randomization test"; however, I'm not certain if it is called by other names elsewhere.
For my specific data, I have isotopic ratios given for various prey items. I have already grouped together ecologically similar prey item types, and now want to see if those groups differ from one another in their isotopic signature. This is where the kNN test would come in.
Thanks for all of your answers - I'm new to this site, so I'll address your inquiries as applicable in the comments section.
| kNN randomization test in R? | CC BY-SA 2.5 | null | 2010-09-21T17:19:53.943 | 2012-05-16T06:19:59.067 | 2011-01-18T00:37:31.553 | null | null | [
"r",
"k-nearest-neighbour"
] |
2951 | 2 | null | 2948 | 23 | null | The igraph implementation of Newman's modularity clustering (the fastgreedy function) can be used with weighted edges as well. Just add a weight attribute to the edges and analyse as usual. In my experience, it runs even faster with weights, as there are fewer ties.
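A minimal sketch of this in R, assuming (as stated above) that the edge `weight` attribute is picked up automatically; the toy graph and uniform weights are placeholders for your similarity data:
```
library(igraph)
set.seed(1)
g <- erdos.renyi.game(100, 0.05)     # toy graph; replace with your similarity graph
E(g)$weight <- runif(ecount(g))      # similarity scores stored as edge weights
fc <- fastgreedy.community(g)
max(fc$modularity)                   # modularity of the best partition found
```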
| null | CC BY-SA 2.5 | null | 2010-09-21T17:20:48.360 | 2010-09-21T17:20:48.360 | null | null | 1387 | null |
2952 | 2 | null | 2948 | 10 | null | I know that [Gephi](http://gephi.org/) can process undirected weighted graphs, but I seem to remember the graph has to be stored in [GDF](http://gephi.org/users/supported-graph-formats/gdf-format/), which is pretty close to CSV, or Ucinet [DL](http://gephi.org/users/supported-graph-formats/ucinet-dl-format/). Be aware that it's still an alpha release.
Now, about clustering your graph, Gephi seems to lack clustering pipelines, except for the MCL algorithm that is now available in the latest version. There was a [Google Code Project](http://code.google.com/p/google-summer-of-code-2009-gephi/) in 2009, [Gephi Network Statistics](http://web.ecs.syr.edu/~pjmcswee/gephi.pdf) (featuring e.g. Newman’s modularity metric), but I don't know if something has been released in this direction. Anyway, it seems to allow some kind of modularity/clustering computations, but see also [Social Network Analysis using R and Gephi](http://www.rcasts.com/2010/04/social-network-analysis-using-r-and.html) and [Data preparation for Social Network Analysis using R and Gephi](http://www.r-bloggers.com/data-preparation-for-social-network-analysis-using-r-and-gephi/) (Many thanks to @Tal).
If you are used to Python, it is worth trying [NetworkX](http://networkx.lanl.gov/) (Here is an example of a [weighted graph](http://networkx.lanl.gov/examples/drawing/weighted_graph.html) with the corresponding code). Then you have many ways to carry out your analysis.
You should also look at [INSNA - Social Network Analysis Software](http://www.insna.org/software/index.html) or Tim Evans's webpage about [Complex Networks and Complexity](http://155.198.210.128/~time/networks/).
| null | CC BY-SA 2.5 | null | 2010-09-21T17:45:48.903 | 2010-09-21T21:07:25.300 | 2010-09-21T21:07:25.300 | 930 | 930 | null |
2953 | 2 | null | 2950 | 3 | null | kmeans() is one option (you can find such functions by typing `??` followed by the name of what you want, as in `??kmean`). In general, a good trick is to first look it up on an R-specific search [engine](http://rseek.org/)
| null | CC BY-SA 2.5 | null | 2010-09-21T18:13:57.677 | 2010-09-21T18:44:13.107 | 2010-09-21T18:44:13.107 | 603 | 603 | null |
2954 | 2 | null | 2950 | 3 | null | You appear to have confused "cluster analysis" with "classification". The former is where you don't know the groupings and wish to determine them from the training data to hand. Classification is where you know the groups and want to predict them.
There are a few packages in R that do this. For example, look at the results of this [R Site Search](http://search.r-project.org/cgi-bin/namazu.cgi?query=knn&max=100&result=normal&sort=score&idxname=functions&idxname=vignettes&idxname=views) for suitable packages.
Alternatively, perhaps you are looking for a multivariate analysis of variance? In which case `lm()` and `aov()` would be worth looking at for a start. This presumes you want to "model" your data as a function of the group variable?
| null | CC BY-SA 2.5 | null | 2010-09-21T21:34:04.403 | 2010-09-21T22:20:21.663 | 2010-09-21T22:20:21.663 | 1390 | 1390 | null |
2955 | 2 | null | 2950 | 3 | null | I am not sure I understand your question, since you talk about k-means, which is basically an unsupervised method (i.e. where classes are not known a priori), while at the same time you are saying that you have already identified groups of individuals. So I would suggest looking at classification methods, or other supervised methods where class membership is known and the objective is to find a weighted combination of your variables that minimizes your classification error rate (this is just an example). For instance, [LDA](http://en.wikipedia.org/wiki/Linear_discriminant_analysis) does a good job (see the CRAN task view on [Multivariate Statistics](http://cran.r-project.org/web/views/Multivariate.html)), but look also at the machine learning community (widely represented on stats.stackexchange) for other methods.
Now, since you also talked of k-nearest neighbor, I wonder if you are confusing k-means with kNN. In this case, the corresponding R function is `knn()` in the [class](http://cran.r-project.org/web/packages/class/index.html) package (it includes cross-validation), or see the [kknn](http://cran.r-project.org/web/packages/kknn/index.html) package.
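For illustration, a small hedged sketch of `knn()` from the class package on simulated two-group data (the data and the choice of k = 3 are placeholders):
```
library(class)
set.seed(1)
train <- rbind(matrix(rnorm(40, mean = 0), ncol = 2),
               matrix(rnorm(40, mean = 2), ncol = 2))
cl <- factor(rep(c("A", "B"), each = 20))      # known group labels
test <- matrix(rnorm(20, mean = 1), ncol = 2)  # new observations to classify
knn(train, test, cl, k = 3)                    # predicted group for each test row
```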
| null | CC BY-SA 2.5 | null | 2010-09-21T21:45:12.007 | 2010-09-21T22:05:16.047 | 2010-09-21T22:05:16.047 | 930 | 930 | null |
2956 | 1 | null | null | 18 | 616 | I'm the author of the [ez package](http://cran.r-project.org/package=ez) for R, and I'm working on an update to include automatic computation of likelihood ratios (LRs) in the output of ANOVAs. The idea is to provide an LR for each effect that is analogous to the test of that effect that the ANOVA achieves. For example, the LR for a main effect represents the comparison of a null model to a model that includes the main effect; the LR for an interaction represents the comparison of a model that includes both component main effects versus a model that includes both main effects and their interaction, etc.
Now, my understanding of LR computation comes from [Glover & Dixon](http://www.ncbi.nlm.nih.gov/pubmed/15732688) ([PDF](http://www.thinkingaboutthinking.org/wp-content/uploads/2010/09/Glover-Dixon-2004-Likelihood-ratios-a-simple-and-flexible-statistic-for-empirical-psychologists.pdf)), which covers basic computations as well as corrections for complexity, and the appendix to [Bortolussi & Dixon](http://rads.stackoverflow.com/amzn/click/0521009138) ([appendix PDF](http://www.thinkingaboutthinking.org/wp-content/uploads/2010/09/Dixon-Unknown-Evaluating-Evidence-Appendix-to-Psychonarratology.pdf)), which covers computations involving repeated-measures variables. To test my understanding, I developed [this spreadsheet](https://spreadsheets.google.com/ccc?key=0Ap2N_aeyRMGHdHJxUnVNeEl5VGtvY1RVLVc5UjU4Vmc&hl=en), which takes the dfs & SSs from an example ANOVA (generated from a 2*2*3*4 design using fake data) and steps through the computation of the LR for each effect.
I would really appreciate it if someone with a little more confidence with such computation could take a look and make sure I did everything correctly. For those that prefer abstract code, [here is the R code](http://gist.github.com/590789) implementing the update to ezANOVA() (see esp. lines 15-95).
| Have I computed these likelihood ratios correctly? | CC BY-SA 2.5 | null | 2010-09-21T23:40:39.317 | 2018-08-27T16:09:42.907 | 2018-08-27T16:09:42.907 | 11887 | 364 | [
"r",
"anova",
"likelihood-ratio"
] |
2957 | 1 | 7716 | null | 20 | 3140 | The Gauss-Markov theorem tells us that the OLS estimator is the best linear unbiased estimator for the linear regression model.
But suppose I don't care about linearity and unbiasedness. Then is there some other (possibly nonlinear/biased) estimator for the linear regression model which is the most efficient under the Gauss-Markov assumptions or some other general set of assumptions?
There is of course one standard result: OLS itself is the best unbiased estimator if in addition to the Gauss-Markov assumptions we also assume that the errors are normally distributed. For some other particular distribution of errors I could compute the corresponding maximum-likelihood estimator.
But I was wondering if there is some estimator which is better-than-OLS in some relatively general set of circumstances?
| OLS is BLUE. But what if I don't care about unbiasedness and linearity? | CC BY-SA 2.5 | null | 2010-09-22T01:15:49.873 | 2022-09-17T16:31:59.187 | 2010-10-19T07:20:15.037 | 449 | 1393 | [
"regression",
"unbiased-estimator"
] |
2958 | 1 | null | null | 2 | 776 | I know the value for the 16% quantile, so I know the additive deviation for the given distribution. How do I find the deviation of the log of the given distribution on a multiplicative scale?
| How do you calculate the standard deviation on a multiplicative scale for a distribution that has been transformed logarithmically? | CC BY-SA 2.5 | null | 2010-09-22T01:21:15.247 | 2010-09-22T08:22:25.940 | null | null | null | [
"distributions",
"standard-deviation",
"logarithm"
] |
2959 | 2 | null | 2957 | 9 | null | I don't know if you are OK with the Bayes Estimate? If yes, then depending on the Loss function you can obtain different Bayes Estimates. A theorem by Blackwell states that Bayes Estimates are never unbiased. A decision-theoretic argument states that every admissible rule (i.e., for every other rule against which it is compared, there is a value of the parameter for which the risk of the present rule is (strictly) less than that of the rule against which it's being compared) is a (generalized) Bayes rule.
James-Stein Estimators are another class of estimators (which can be derived by Bayesian methods asymptotically) which are better than OLS in many cases.
OLS can be inadmissible in many situations and James-Stein Estimator is an example. (also called Stein's paradox).
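To make the last point concrete, a hedged R sketch of Stein's paradox in its classical normal-means form (not a regression example; the dimension, true means, and known variance are all assumptions):
```
set.seed(1)
p <- 10; mu <- rnorm(p); sigma <- 1
mse <- replicate(5000, {
  x <- rnorm(p, mu, sigma)                              # one observation per coordinate
  js <- max(0, 1 - (p - 2) * sigma^2 / sum(x^2)) * x    # positive-part James-Stein
  c(mle = sum((x - mu)^2), js = sum((js - mu)^2))
})
rowMeans(mse)    # total squared error: James-Stein is typically smaller
```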
| null | CC BY-SA 2.5 | null | 2010-09-22T01:47:20.280 | 2011-02-28T16:24:13.567 | 2011-02-28T16:24:13.567 | 1307 | 1307 | null |
2960 | 2 | null | 2925 | 0 | null | An easy way to generate symmetric Bernoulli trials is to flip a coin twice. If the first toss is H and the second is T, say X = 1. If it's the other way round, say X = 0. If the two tosses match [2H or 2T], discard the outcome and continue. No matter what the bias of the coin, X will be symmetric Bernoulli.
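A small R sketch of this trick; the bias of 0.3 is an arbitrary assumption, and `rbinom` stands in for the physical coin:
```
flip <- function(p = 0.3) rbinom(1, 1, p)      # biased coin, P(heads) = 0.3
symmetric_bernoulli <- function(p = 0.3) {
  repeat {
    a <- flip(p); b <- flip(p)
    if (a != b) return(a)    # HT -> 1, TH -> 0; matching tosses are discarded
  }
}
mean(replicate(10000, symmetric_bernoulli()))  # close to 0.5 despite the biased coin
```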
| null | CC BY-SA 2.5 | null | 2010-09-22T03:42:41.153 | 2010-09-22T03:42:41.153 | null | null | 1112 | null |
2961 | 2 | null | 2957 | 5 | null | There is a nice review paper by [Kay and Eldar](http://webee.technion.ac.il/Sites/People/YoninaEldar/Download/67j-04490210.pdf) on biased estimation for the purpose of finding estimators with minimum mean square error.
| null | CC BY-SA 2.5 | null | 2010-09-22T04:00:30.520 | 2010-09-22T04:00:30.520 | null | null | 352 | null |
2962 | 1 | 125649 | null | 11 | 12982 | The statistics book I am reading recommends omega squared to measure the effects of my experiments. I have already proven using a split plot design (mix of within-subjects and between-subjects design) that my within-subjects factors are statistically significant with p<0.001 and F=17.
Now I'm looking to see how big the difference is... is there an implementation of omega squared somewhere for R (or Python? I know... one can dream ;) Searching on the internet for R-related stuff is a pain in the *; I don't know how I manage to find stuff with C.
thanks!
| Omega squared for measure of effect in R? | CC BY-SA 2.5 | null | 2010-09-22T04:38:22.253 | 2019-01-27T17:08:35.430 | 2011-01-14T19:31:12.340 | 449 | 1320 | [
"r",
"anova",
"effect-size",
"split-plot"
] |
2963 | 2 | null | 2925 | 7 | null | Just as another source of verifiable randomness: random.org generates random numbers from atmospheric noise. They publish [a daily file (most days) of random bits](http://www.random.org/files/); the first digit of each day's file might prove suitably verifiable to your parties.
---
Update 2013-11-12: Access to these files is now restricted, but it looks like you can email random.org's operators to gain access:
> Note: Access to the pregenerated files is currently restricted due to bandwidth considerations. Let us know ([email protected]) if you require access.
| null | CC BY-SA 3.0 | null | 2010-09-22T05:25:07.513 | 2013-11-12T19:18:29.360 | 2013-11-12T19:18:29.360 | 71 | 71 | null |
2965 | 2 | null | 2958 | 2 | null | I assume you are referring to something like the estimated coefficients in a logistic regression. These are the log-odds. The estimates usually have a standard error and symmetrical confidence interval.
For example, let's say an estimated log odds is 2 with an SE of 0.5 and a 95% CI of 1.02 to 2.98. The odds ratio is calculated as exp(2) = 7.4. To estimate a "balanced" 95% confidence interval for the odds ratio, exponentiate both ends of the 95% CI on the linear scale, viz. exp(1.02) to exp(2.98) gives a 95% CI of 2.8 to 19.7. You could do the same with both ends of the SD or SE interval: exp(1.5) to exp(2.5) gives 4.5 to 12.2, which we might describe as a 68% confidence interval about the mean odds ratio of 7.4.
Note this is not the same as the standard deviation of a "transformed" distribution, ([eg the log-normal](http://en.wikipedia.org/wiki/Log-normal_distribution)), which is defined on the new scale and is symmetrical on that scale even if the distribution is skew.
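As a quick R check of the arithmetic above (the estimate and SE are taken from the example):
```
est <- 2; se <- 0.5
exp(est)                          # odds ratio: about 7.4
exp(est + c(-1, 1) * 1.96 * se)   # 95% CI: about 2.8 to 19.7
exp(est + c(-1, 1) * se)          # "68%" interval: about 4.5 to 12.2
```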
| null | CC BY-SA 2.5 | null | 2010-09-22T08:22:25.940 | 2010-09-22T08:22:25.940 | null | null | 521 | null |
2966 | 1 | 2967 | null | 9 | 214 | Context
- you have 200 observations of an individual's running time for the 100 metres measured once a day for 200 days.
- Assume the individual was not a runner before commencement of practice
- Based on the observed data and the 199 other observations, you want to estimate the latent time it would take the individual to run if they (a) applied maximal effort; and (b) had a reasonably good run for them (i.e., no major problems with the run; but still a typical run). Let's call this latent potential.
Of course, the actual data would not measure latent potential directly. The data would be noisy:
- Times would vary from run to run
- On some days the individual would be particularly slow because of one or more possible problems (e.g., tripping at the start, getting a cramp half way through, not putting in much effort). Such problems would result in massive outliers
- On some days the individual would be slower than you'd expect, perhaps because of more minor issues.
- In general, with practice the runner would be expected to get faster in latent potential.
- In rare cases, it is possible for the runner to get slower in latent potential (e.g., injury)
The implications of this:
- The occasional slow time might provide minimal information on what the individual is capable of.
- A fast time for the individual suggests that the individual is capable of such a fast time, but a small amount of this fast time might be good fortune on the day (e.g., the right wind, a little luck on the start).
The question:
Thus, how could one estimate latent potential at each of the 200 time points based on the available data and a few assumptions about the nature of running times?
Initial Thoughts: I imagine there would be some form of Bayesian approach that combined the available information and assumptions to form an estimate, but I'm not sure where to look for such models. I'm also not quite clear how the effectiveness of such a model would be evaluated.
| Estimating latent performance potential based on a sequence of observations | CC BY-SA 2.5 | null | 2010-09-22T10:32:51.633 | 2021-03-21T17:39:32.187 | 2015-08-31T19:19:04.880 | 11887 | 183 | [
"time-series",
"bayesian",
"latent-variable",
"isotonic"
] |
2967 | 2 | null | 2966 | 7 | null | You need to perform an isotonic (i.e. monotone non-decreasing) nonparametric regression (see page 6 of [this](https://www.semanticscholar.org/paper/%E2%80%98monoProc%E2%80%99-Version-1.0-5-Strictly-monotone-and-in-R-Scheder/196e7585c8a525c6feaac27147b29bebc0f9b43b) document for an example), then use $\hat{E}(y|x)+ \delta \hat{\sigma}(y|x)$ with $\delta>0$ as the upper potential. There are many packages that will do that in R. I like this [one](https://www.rdocumentation.org/packages/stats/versions/3.6.2/topics/isoreg) for its simplicity.
Isotonic nonparametric regression is simply your regular scatterplot smoother with the added prior that more $x$ cannot decrease the smoothed $y$ (e.g. drug dose vs. effect).
From the first comment below, your design includes a $k$-vector dummy variable $z$ (controlling for injury and running style) and a continuous variable $x$ (days), assuming that $y$ (latent performance) is given by $E(y|x,z)=m(x)+\alpha z+\delta$, where $m(x)$ is a monotone scatterplot smoother, $\delta>0$ is known and $\alpha\in \mathbb{R}^k$. This type of model can be estimated by isotonic GAM (see [this](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.70.5142&rep=rep1&type=pdf) paper, implemented [here](https://www.rdocumentation.org/packages/bisoreg/versions/1.5/topics/bisoreg)).
Edit: I changed the link to the paper (the old link was pointing to a derivative of the method by the same author).
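A minimal sketch of the basic isotonic fit plus an upper band with `isoreg()`; the simulated performance data, the crude global residual scale, and the choice of $\delta = 1$ are all illustrative assumptions:
```
set.seed(1)
x <- 1:200                                           # e.g. day of practice
y <- 10 * (1 - exp(-x / 60)) + rnorm(200, sd = 0.8)  # noisy, increasing performance
fit <- isoreg(x, y)                 # monotone non-decreasing estimate of E(y|x)
sigma_hat <- sd(y - fit$yf)         # crude global residual scale
upper_potential <- fit$yf + 1 * sigma_hat            # E-hat + delta * sigma-hat
plot(x, y); lines(x, fit$yf, lwd = 2); lines(x, upper_potential, lty = 2)
```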
| null | CC BY-SA 4.0 | null | 2010-09-22T11:01:39.503 | 2021-03-21T17:39:32.187 | 2021-03-21T17:39:32.187 | 603 | 603 | null |
2968 | 2 | null | 2925 | 6 | null | Many countries have state lottery which is regularly audited, and whose results are announced online: e.g. [UK national lottery](https://www.national-lottery.co.uk/player/p/results/lotto.ftl). You just need to construct an appropriate function which maps this output space to your desired output.
A continuous distribution would be more tricky, but you could obtain a discrete approximation: in the UK case there are ${}^{49}C_6 \times 43$ = 601 304 088 equally likely outcomes, which, depending on context, could give sufficient granularity.
| null | CC BY-SA 2.5 | null | 2010-09-22T11:53:01.753 | 2010-09-22T11:53:01.753 | null | null | 495 | null |
2969 | 2 | null | 2962 | 1 | null | I'd suggest that generalized eta squared is considered ([ref](http://www.ncbi.nlm.nih.gov/pubmed/14664681), [ref](http://brm.psychonomic-journals.org/content/37/3/379.short)) a more appropriate measure of effect size. It is included in the ANOVA output in the [ez package](http://cran.r-project.org/package=ez) for R.
| null | CC BY-SA 2.5 | null | 2010-09-22T12:32:55.730 | 2010-09-22T12:32:55.730 | null | null | 364 | null |
2970 | 2 | null | 2966 | 2 | null | Just a guess.
First I would explore transformations of the data, such as converting time to speed or acceleration. Then I would consider the log of that, since it obviously won't be negative.
Then, since you are interested in the asymptote, I would try fitting (by least squares) a simple exponential to the transformed data, with time t being the x axis, and log-transformed speed (or acceleration) being the y axis. See how that works in predicting new measurements as time increases.
A possible alternative to an exponential function would be a Michaelis-Menten type of hyperbola.
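A hedged sketch of such an exponential fit with `nls()`; the simulated data, the three-parameter form $a - b\,e^{-t/\tau}$, and the starting values are all assumptions:
```
set.seed(1)
t <- 1:200
speed <- 8 - 3 * exp(-t / 40) + rnorm(200, sd = 0.2)   # simulated (transformed) data
fit <- nls(speed ~ a - b * exp(-t / tau),
           start = list(a = 7, b = 2, tau = 30))
coef(fit)["a"]   # estimated asymptote, i.e. the long-run level
```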
Actually, I would strongly consider a mixed-effect population approach first (as with NONMEM), because each individual may not show enough information to evaluate different models.
If you want to go Bayesian, you could use WinBugs, and provide any prior distribution you want to the parameters of the exponential function. The book I found useful is Gilks, Richardson, Spiegelhalter, "Markov Chain Monte Carlo in Practice", Chapman & Hall, 1996.
| null | CC BY-SA 2.5 | null | 2010-09-22T12:36:38.767 | 2010-09-22T13:28:08.330 | 2010-09-22T13:28:08.330 | 1270 | 1270 | null |
2971 | 1 | 2985 | null | 17 | 1551 | If you fit a nonlinear function to a set of points (assuming there is only one ordinate for each abscissa) the result can either be:
- a very complex function with small residuals
- a very simple function with large residuals
Cross validation is commonly used to find the "best" compromise between these two extremes. But what does "best" mean? Is it "most likely"? How would you even start to prove what the most likely solution is?
My inner voice is telling me that CV is finding some sort of minimum energy solution. This makes me think of entropy, which I vaguely know occurs in both stats and physics.
It seems to me that the "best" fit is generated by minimising the sum of functions of complexity and error ie
```
minimising m where m = c(Complexity) + e(Error)
```
Does this make any sense? What would the functions c and e be?
Please can you explain using non mathematical language, because I will not understand much maths.
| What is the definition of "best" as used in the term "best fit" and cross validation? | CC BY-SA 2.5 | null | 2010-09-22T14:11:12.697 | 2013-02-11T12:23:57.870 | 2010-09-23T10:54:12.720 | null | 1134 | [
"model-selection",
"cross-validation"
] |
2972 | 1 | 2989 | null | 7 | 1503 | A little while back, J.M. [suggested](https://stats.stackexchange.com/questions/2746/how-to-efficiently-generate-positive-semi-definite-correlation-matrices/2786#2786) using the [Stewart](http://www.jstor.org/stable/2156882) algorithm for generating $n$ by $n$ pseudo random orthogonal matrices in $O(n^2)$ time. He further noted that this methodology is implemented in Nick Higham's matrix computation toolbox ([matlab](http://www.maths.manchester.ac.uk/~higham/mctoolbox/)). Now this package contains a bunch of .m files. Could anyone direct me to which one implements the pseudorandom orthogonal matrix generator? Alternatively, would anyone know of a R implementation of this algorithm?
| Pseudo-random orthogonal matrix generation | CC BY-SA 2.5 | null | 2010-09-22T14:25:01.887 | 2010-09-23T10:45:55.277 | 2017-04-13T12:44:26.710 | -1 | 603 | [
"random-generation",
"matrix"
] |
2973 | 2 | null | 665 | 3 | null | Statistics is the pursuit of truth in the face of uncertainty. Probability is the tool that allows us to quantify uncertainty.
(I have provided another, longer, answer that assumed that what was being asked was something along the lines of "how would you explain it to your grandmother?")
| null | CC BY-SA 2.5 | null | 2010-09-22T14:31:43.833 | 2010-09-22T14:31:43.833 | null | null | 666 | null |
2975 | 1 | 2978 | null | 5 | 2290 | How do I detrend or normalize multiple series of data so that I can inter-compare between the series?
---
Specifics below may not be appropriate for this forum. Please let me know and I can remove or re-phrase, but I think it might be helpful to fully understand the generic question above.
I have a data set that I would like help analyzing. I think this question belongs here and not in at [https://gis.stackexchange.com/](https://gis.stackexchange.com/)
Specifically, I have the following situation: Each series is collected from an airplane flying a path, and has a variable number of (value, lat,lon,time) tuples. I have several of these flightpaths, each at a different time, and flying different paths (sometimes crossing, sometimes not). Flights happened months apart, at different times of day, and due to natural phenomena, the data (thermal in this case) varies.
Part of the region flown over by multiple flights may or may not have an anomalous temperature signature. This is what I want to investigate. I am seeking an algorithm to detrend or normalize all of the flights so that I can increase my SNR and determine if there is a temperature anomaly over a sub-region.
| Normalizing or detrending groups of samples | CC BY-SA 2.5 | null | 2010-09-22T16:35:45.250 | 2010-09-26T07:45:00.947 | 2017-04-13T12:33:47.693 | -1 | 957 | [
"time-series",
"data-visualization",
"variance",
"normalization"
] |
2976 | 1 | 2998 | null | 31 | 31330 | Questions:
- I have a large correlation matrix. Instead of clustering individual correlations, I want to cluster variables based on their correlations to each other, ie if variable A and variable B have similar correlations to variables C to Z, then A and B should be part of the same cluster. A good real life example of this is different asset classes - intra asset-class correlations are higher than inter-asset class correlations.
- I am also considering clustering variables in terms of the strength of the relationship between them, e.g. when the correlation between variables A and B is close to 0, they act more or less independently. If suddenly some underlying conditions change and a strong correlation arises (positive or negative), we can think of these two variables as belonging to the same cluster. So instead of looking for positive correlation, one would look for relationship versus no relationship. I guess an analogy could be a cluster of positively and negatively charged particles. If the charge falls to 0, the particle drifts away from the cluster. However, both positive and negative charges attract particles to relevant clusters.
I apologise if some of this isn't very clear. Please let me know, I will clarify specific details.
| Clustering variables based on correlations between them | CC BY-SA 2.5 | null | 2010-09-22T17:01:37.580 | 2017-04-06T14:04:02.507 | 2015-11-24T10:38:37.860 | 28666 | 1250 | [
"correlation",
"clustering",
"correlation-matrix"
] |
2977 | 2 | null | 2971 | 9 | null | I will offer a brief intuitive answer (at a fairly abstract level) till a better answer is offered by someone else:
First, note that complex functions/models achieve better fit (i.e., have lower residuals) as they exploit some local features (think noise) of the dataset that are not present globally (think systematic patterns).
Second, when performing cross validation we split the data into two sets: the training set and the validation set.
Thus, when we perform cross validation, a complex model may not predict very well because by definition a complex model will exploit the local features of the training set. However, the local features of the training set could be very different compared to the local features of the validation set, resulting in poor predictive performance. Therefore, we have a tendency to select the model that captures the global features of the training and the validation datasets.
In summary, cross validation protects against overfitting by selecting the model that captures the global patterns of the dataset and by avoiding models that exploit some local feature of a dataset.
| null | CC BY-SA 2.5 | null | 2010-09-22T17:04:06.030 | 2010-09-22T17:19:31.487 | 2010-09-22T17:19:31.487 | null | null | null |
2978 | 2 | null | 2975 | 5 | null | Multi-level modelling where your data are grouped by flight as a random variable sounds like a good analysis method for this problem. In R the code might be
```
library(lme4)  # load the package
lmer(temp ~ region + (1 | flight))  # temp, region, flight assumed available in your data/workspace
```
This is doable in a variety of statistics packages. If region is simply coded as in-region or outside-of-region, then a logistic form should be used.
To directly address your question about normalizing, you might like
```
temp - (mean_temp_for_flight - mean_temp)
```
This zeros the temperatures at the overall mean corrected for the individual flight mean. So, if a flight had a mean temp of 20 over the region, and the mean temp of the region is 18, and your sample is 22, then the normalized value would be
```
22 - (20 - 18) = 20
```
Essentially, the by-flight variability is eliminated.
| null | CC BY-SA 2.5 | null | 2010-09-22T17:21:32.487 | 2010-09-26T07:45:00.947 | 2010-09-26T07:45:00.947 | 601 | 601 | null |
2979 | 2 | null | 2904 | 0 | null | You could avoid the problem altogether by simply estimating
W = alphaH * H + alphaM * M + alphaL * L + X * beta
using 2sls. The fact that H, M, and L are discrete doesn't violate any of the assumptions of 2sls. Of course, using maximum likelihood will produce more efficient estimates, but it relies on more assumptions. If nothing else, the 2sls estimates should provide good starting values for your maximization algorithm.
For maximizing the likelihood, you should try changing your simulation method to make the likelihood function smooth. I think a slight variant of the Geweke-Hajivassiliou-Keane (often just GHK) simulator would work.
| null | CC BY-SA 2.5 | null | 2010-09-22T17:41:07.970 | 2010-09-22T17:41:07.970 | null | null | 1229 | null |
2980 | 2 | null | 2925 | 1 | null | I didn't quite understand what you meant by "on the basis of an external event." But you can certainly flip a fair coin in a manner that a remote user can cryptographically verify.
Consider this algorithm:
- Bob picks a uniformly random boolean value, TRUE or FALSE. He also chooses a large random number. He sends Alice the SHA-256 hash of the boolean value concatenated with the number. (E.g. he sends the hash of "TRUE|12345678".) Since Alice doesn't know the random number and the hash is one-way, Alice doesn't know the boolean value.
- Alice flips a coin and sends Bob the value -- TRUE or FALSE.
- Bob reveals the random number, and thus his own boolean value. Alice verifies that the boolean value and the random number indeed hash to the value she received earlier.
- This final output of the coin flip is the exclusive-or of Alice's boolean value and Bob's boolean value.
With this algorithm, no party can cheat without the cooperation of the counterparty. If either party plays fairly, the output will be a uniform random boolean value.
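A minimal R sketch of this commit-reveal scheme, assuming the digest package for SHA-256; the nonce construction and variable names are purely illustrative (a real implementation would need a cryptographically strong nonce):
```
library(digest)
# Bob commits to his value:
bob_value <- TRUE
bob_nonce <- paste(sample(0:9, 32, replace = TRUE), collapse = "")  # toy nonce
commitment <- digest(paste(bob_value, bob_nonce, sep = "|"), algo = "sha256")
# ... Bob sends 'commitment'; Alice replies with her own flip ...
alice_value <- TRUE
# Bob reveals value and nonce; Alice verifies before accepting:
stopifnot(digest(paste(bob_value, bob_nonce, sep = "|"), algo = "sha256") == commitment)
xor(bob_value, alice_value)   # final coin flip
```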
(EDIT) I now understand the problem to mean you have no internal source of nondeterminism at all, so all randomness has to come from an external source and the algorithm has to be deterministic. In that case, we can still use cryptography to help.
How about taking the SHA-256 of the PDF version of The New York Times every day, or the SHA-256 of the volume and closing price of all the stocks in the Dow Jones Industrial Average in alphabetical order by ticker symbol, or really the secure hash of anything that can be mutually observed and that you can't influence. If you want just one bit, take the first bit of the SHA-256.
If you want a normal distribution, you could take the whole thing in two parts (128 bits, then 128 bits) as two uniform deviates and use the Box-Muller transform to get two normal deviates.
| null | CC BY-SA 2.5 | null | 2010-09-22T18:26:26.657 | 2010-09-23T02:54:38.517 | 2010-09-23T02:54:38.517 | 1122 | 1122 | null |
2981 | 1 | 2986 | null | 7 | 3553 | With the help of several people in this community I have been wetting my feet in clustering some social network data using [igraph's implementation of modularity-based clustering](http://igraph.sourceforge.net/doc/R/fastgreedy.community.html).
I am having some trouble interpreting the output of this routine and how to use it to generate lists of members of each community detected.
This routine outputs a two-column matrix and a list of modularity values. From [the docs](http://igraph.sourceforge.net/doc/R/fastgreedy.community.html):
> merges: A matrix with two column, this represents a dendogram and contains all the merges the algorithm performed. Each line is one merge and it is given by the ids of the two communities merged. The community ids are integer numbers starting from zero and the communities between zero and the number of vertices (N) minus one belong to individual vertices. The first line of the matrix gives the first merge, this merge creates community N, the number of vertices, the second merge creates community N+1, etc.
>
> modularity: A numeric vector containing the modularity value of the community structure after performing every merge.
Working with this explanation and looking at the example at the bottom of the man page, I think the communities in this graph are
```
1st community: 0 1 2 3 4
2nd community: 10 11 12 13 14
3rd community: 5 6 7 8 9
```
Can someone who has used this method before confirm whether the right approach produces this result? I have basically (i) ignored the last two merges and (ii) gone over each row in the 'merges' matrix, combining each pair of vertices into a set while watching out for vertex values that are larger than the number of vertices (and therefore refer to another row in the 'merges' matrix).
| Interpreting output of igraph's fastgreedy.community clustering method | CC BY-SA 3.0 | null | 2010-09-22T18:49:28.433 | 2016-05-20T16:38:16.397 | 2020-06-11T14:32:37.003 | -1 | 1007 | [
"clustering",
"networks",
"partitioning",
"igraph",
"modularity"
] |
2982 | 1 | 3005 | null | 12 | 3582 | I am trying to summarize what I have understood so far in penalized multivariate analysis with high-dimensional data sets, and I am still struggling to get a proper definition of soft-thresholding vs. Lasso (or $L_1$) penalization.
More precisely, I used sparse PLS regression to analyze 2-block data structure including genomic data ([single nucleotide polymorphisms](http://en.wikipedia.org/wiki/Single-nucleotide_polymorphism), where we consider the frequency of the minor allele in the range {0,1,2}, considered as a numerical variable) and continuous phenotypes (scores quantifying personality traits or cerebral asymmetry, also treated as continuous variables). The idea was to isolate the most influential predictors (here, the genetic variations on the DNA sequence) to explain inter-individual phenotypic variations.
I initially used the [mixOmics](http://cran.r-project.org/web/packages/mixOmics/index.html) R package (formerly `integrOmics`) which features penalized [PLS](http://en.wikipedia.org/wiki/Partial_least_squares_regression) regression and regularized [CCA](http://en.wikipedia.org/wiki/Canonical_correlation). Looking at the R code, we found that the "sparsity" in the predictors is simply induced by selecting the top $k$ variables with highest loadings (in absolute value) on the $i$th component, $i=1,\dots, k$ (the algorithm is iterative and computes variable loadings on $k$ components, deflating the predictor block at each iteration; see [Sparse PLS: Variable Selection when Integrating Omics data](http://hal.archives-ouvertes.fr/docs/00/30/02/04/PDF/sPLS.pdf) for an overview).
On the contrary, the [spls](http://cran.r-project.org/web/packages/spls/index.html) package co-authored by S. Keleş (see [Sparse Partial Least Squares Regression for Simultaneous Dimension Reduction and Variable Selection](http://www.stat.wisc.edu/~keles/Papers/SPLS_Nov07.pdf), for a more formal description of the approach undertaken by these authors) implements $L_1$-penalization for variable penalization.
It is not obvious to me whether there is a strict "bijection", so to say, between iterative feature selection based on soft-thresholding and $L_1$ regularization. So my question is: Is there any mathematical connection between the two?
References
- Chun, H. and Keleş, S. (2010), Sparse partial least squares for simultaneous dimension reduction and variable selection. Journal of the Royal Statistical Society: Series B, 72, 3–25.
- Le Cao, K.-A., Rossouw, D., Robert-Granie, C., and Besse, P. (2008), A Sparse PLS for Variable Selection when Integrating Omics Data. Statistical Applications in Genetics and Molecular Biology, 7, Article 35.
| Soft-thresholding vs. Lasso penalization | CC BY-SA 2.5 | null | 2010-09-22T20:53:20.303 | 2010-12-21T15:30:40.507 | 2010-12-21T15:30:40.507 | 930 | 930 | [
"multivariate-analysis",
"lasso",
"feature-selection",
"genetics"
] |
2983 | 2 | null | 2971 | 1 | null | The error function is the error of your model (function) on the training data. The complexity is some norm (e.g., squared l2 norm) of the function you are trying to learn. Minimizing the complexity term essentially favors smooth functions, which do well not just on the training data but also on the test data. If you represent your function by a set of coefficients (say, if you are doing linear regression), penalizing the complexity by the squared norm would lead to small coefficient values in your function (penalizing other norms leads to different notions of complexity control).
| null | CC BY-SA 2.5 | null | 2010-09-22T20:59:33.923 | 2010-09-22T21:05:22.323 | 2010-09-22T21:05:22.323 | 881 | 881 | null |
2984 | 2 | null | 2971 | 1 | null | From an optimization point of view, the problem (with $(p,q)\geq 1,\;\lambda>0$),
$(1)\;\underset{\beta|\lambda,x,y}{Arg\min.}||y-m(x,\beta)||_p+\lambda||\beta||_q$
is equivalent to
$(2)\;\underset{\beta|\lambda,x,y}{Arg\min.}||y-m(x,\beta)||_p$
$s.t.$ $||\beta||_q\leq\lambda$
This formulation simply incorporates into the objective function the prior information that $||\beta||_q\leq\lambda$. If this prior turns out to be true, then it can be shown (for $q=1,2$) that incorporating it into the objective function minimizes the risk associated with $\hat{\beta}$ (i.e., very informally, improves the accuracy of $\hat{\beta}$).
$\lambda$ is a so-called meta-parameter (or latent parameter) that is not being optimized over (in which case the solution would trivially reduce to $\lambda=\infty$), but rather reflects information not contained in the sample $(x,y)$ used to solve $(1)-(2)$ (for example, other studies or expert opinion). Cross validation is an attempt at constructing a data-induced prior (i.e. slicing the dataset so that part of it is used to infer reasonable values of $\lambda$ and part of it is used to estimate $\hat{\beta}|\lambda$).
As to your subquestion (why $e()=||y-m(x,\beta)||_p$): this is because for $p=1$ ($p=2$) this measure of distance between the model and the observations has (easily) derivable asymptotic properties (strong convergence to meaningful population counterparts of $m()$).
| null | CC BY-SA 2.5 | null | 2010-09-22T21:47:06.203 | 2010-09-23T00:56:12.477 | 2010-09-23T00:56:12.477 | 603 | 603 | null |
2985 | 2 | null | 2971 | 7 | null | I think this is an excellent question. I am going to paraphrase it just to be sure I have got it right:
> It would seem that there are lots of ways to choose the complexity penalty function $c$ and error penalty function $e$. Which choice is 'best'? What should best even mean?
I think the answer (if there is one) will take you way beyond just cross-validation. I like how this question (and the topic in general) ties nicely to [Occam's Razor](http://en.wikipedia.org/wiki/Occam%27s_razor) and the general concept of [parsimony](http://en.wikipedia.org/wiki/Parsimony) that is fundamental to science. I am by no means an expert in this area but I find this question hugely interesting. The best text I know on these sorts of question is [Universal Artificial Intelligence](http://www.hutter1.net/ai/uaibook.htm) by Marcus Hutter (don't ask me any questions about it though, I haven't read most of it). I went to a talk by Hutter and couple of years ago and was very impressed.
You are right in thinking that there is a minimum entropy argument in there somewhere (used for the complexity penalty function $c$ in some manner). Hutter advocates the use of [Kolmogorov complexity](http://en.wikipedia.org/wiki/Kolmogorov_complexity) instead of entropy. Also, Hutter's definition of 'best' (as far as I remember) is (informally) the model that best predicts the future (i.e. best predicts the data that will be observed in the future). I can't remember how he formalises this notion.
| null | CC BY-SA 2.5 | null | 2010-09-22T22:11:42.100 | 2010-09-22T23:28:21.123 | 2010-09-22T23:28:21.123 | 352 | 352 | null |
2986 | 2 | null | 2981 | 3 | null | The function which is used for this purpose is:
`community.to.membership(graph, merges, steps, membership=TRUE, csize=TRUE)`
This can be used to extract memberships based on the `fastgreedy.community` function results.
You have to provide the number of steps, i.e. how many merges should be performed. The optimal number of steps (merges) is the one which produces the maximal modularity.
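A hedged sketch of putting this together, based on the igraph API current at the time of this thread (in later igraph versions the membership is available directly from the returned communities object); the toy graph is a placeholder:
```
library(igraph)
set.seed(1)
g <- erdos.renyi.game(60, 0.1)                 # replace with your graph
fc <- fastgreedy.community(g)
best_step <- which.max(fc$modularity) - 1      # merges are counted from zero
memb <- community.to.membership(g, fc$merges, steps = best_step)
memb$membership                                # community id for each vertex
memb$csize                                     # community sizes
```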
| null | CC BY-SA 2.5 | null | 2010-09-22T23:05:26.657 | 2010-09-22T23:05:26.657 | null | null | 1396 | null |
2987 | 2 | null | 7 | 2 | null | This is probably the most complete list you'll find: [Some Datasets Available on the Web](http://www.datawrangling.com/some-datasets-available-on-the-web)
| null | CC BY-SA 2.5 | null | 2010-09-22T23:57:02.360 | 2010-09-22T23:57:02.360 | null | null | 635 | null |
2988 | 1 | null | null | 12 | 8192 | How does one calculate the sample size needed for a study in which a cohort of subjects will have a single continuous variable measured at the time of a surgery and then two years later they will be classified as functional outcome or impaired outcome.
We would like to see if that measurement could have predicted the bad outcome. At some point we may want to derive a cut point in the continuous variable above which we would try to intervene to diminish the probability of the impaired outcome.
Any ideas? Any R implementation.
| Sample size calculation for univariate logistic regression | CC BY-SA 2.5 | null | 2010-09-23T00:12:51.567 | 2010-12-03T11:40:53.517 | null | null | 104 | [
"logistic",
"sample-size"
] |
2989 | 2 | null | 2972 | 9 | null | It's in the Test Matrix Toolbox, not the Matrix Computation Toolbox. The M-file `qmult.m` (premultiplication by a Haar-distributed pseudorandom orthogonal matrix) can be found [here](http://www.netlib.org/toms/694) or [here](http://people.sc.fsu.edu/~jburkardt/m_src/test_matrix/qmult.m).
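If an R workaround is acceptable: not Stewart's $O(n^2)$ algorithm, but a simple $O(n^3)$ alternative that also yields a Haar-distributed orthogonal matrix is the QR decomposition of a Gaussian matrix with a sign correction. A hedged sketch (this is not a port of `qmult.m`):
```
rand_orth <- function(n) {
  qrd <- qr(matrix(rnorm(n * n), n, n))
  Q <- qr.Q(qrd)
  Q %*% diag(sign(diag(qr.R(qrd))))   # sign fix so the distribution is Haar
}
Q <- rand_orth(5)
max(abs(crossprod(Q) - diag(5)))      # ~ 1e-15, confirming orthogonality
```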
| null | CC BY-SA 2.5 | null | 2010-09-23T00:22:16.190 | 2010-09-23T00:22:16.190 | null | null | 830 | null |
2998 | 2 | null | 2976 | 16 | null | Here's a simple example in R using the `bfi` dataset: bfi is a dataset of 25 personality test items organised around 5 factors.
```
library(psych)
data(bfi)
x <- bfi
```
A hierarchical cluster analysis using the Euclidean distance between variables, based on the absolute correlations between variables, can be obtained like so:
```
plot(hclust(dist(abs(cor(na.omit(x))))))
```

The dendrogram shows how items generally cluster with other items according to theorised groupings (e.g., N (Neuroticism) items group together). It also shows how some items within clusters are more similar (e.g., C5 and C1 might be more similar than C5 with C3). It also suggests that the N cluster is less similar to other clusters.
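As a possible follow-up (a hedged sketch, reusing `x` from above), the tree can be cut into a fixed number of clusters with `cutree()`; the choice of 5 clusters here simply mirrors the five putative factors:
```
hc <- hclust(dist(abs(cor(na.omit(x)))))
cutree(hc, k = 5)   # named vector assigning each of the 25 items to one of 5 clusters
```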
Alternatively you could do a standard factor analysis like so:
```
factanal(na.omit(x), 5, rotation = "Promax")
Uniquenesses:
A1 A2 A3 A4 A5 C1 C2 C3 C4 C5 E1 E2 E3 E4 E5 N1
0.848 0.630 0.642 0.829 0.442 0.566 0.635 0.572 0.504 0.603 0.541 0.457 0.541 0.420 0.549 0.272
N2 N3 N4 N5 O1 O2 O3 O4 O5
0.321 0.526 0.514 0.675 0.625 0.804 0.544 0.630 0.814
Loadings:
Factor1 Factor2 Factor3 Factor4 Factor5
A1 0.242 -0.154 -0.253 -0.164
A2 0.570
A3 -0.100 0.522 0.114
A4 0.137 0.351 -0.158
A5 -0.145 0.691
C1 0.630 0.184
C2 0.131 0.120 0.603
C3 0.154 0.638
C4 0.167 -0.656
C5 0.149 -0.571 0.125
E1 0.618 0.125 -0.210 -0.120
E2 0.665 -0.204
E3 -0.404 0.332 0.289
E4 -0.506 0.555 -0.155
E5 0.175 -0.525 0.234 0.228
N1 0.879 -0.150
N2 0.875 -0.152
N3 0.658
N4 0.406 0.342 -0.148 0.196
N5 0.471 0.253 0.140 -0.101
O1 -0.108 0.595
O2 -0.145 0.421 0.125 0.199
O3 -0.204 0.605
O4 0.244 0.548
O5 0.139 0.177 -0.441
Factor1 Factor2 Factor3 Factor4 Factor5
SS loadings 2.610 2.138 2.075 1.899 1.570
Proportion Var 0.104 0.086 0.083 0.076 0.063
Cumulative Var 0.104 0.190 0.273 0.349 0.412
Test of the hypothesis that 5 factors are sufficient.
The chi square statistic is 767.57 on 185 degrees of freedom.
The p-value is 5.93e-72
```
| null | CC BY-SA 2.5 | null | 2010-09-23T01:38:24.977 | 2010-10-29T11:20:13.590 | 2010-10-29T11:20:13.590 | 183 | 183 | null |
2999 | 2 | null | 2988 | 7 | null | Sample size calculations for logistic regression are complex. I won't attempt to summarise them here. Reasonably accessible solutions to this problem are found in:
[Hsieh FY. Sample size tables for logistic regression. Statistics in Medicine. 1989 Jul;8(7):795-802.](http://onlinelibrary.wiley.com/doi/10.1002/sim.4780080704/abstract)
[Hsieh FY, et al. A simple method of sample size calculation for linear and logistic regression. Statistics in Medicine. 1998 Jul 30;17(14):1623-34.](http://onlinelibrary.wiley.com/doi/10.1002/%28SICI%291097-0258%2819980730%2917:14%3C1623%3a%3aAID-SIM871%3E3.0.CO;2-S/abstract)
An accessible discussion of the issues with example calculations can be found in the last chapter (Section 8.5 pp 339-347) of Hosmer & Lemeshow's [Applied Logistic Regression](http://rads.stackoverflow.com/amzn/click/0471356328).
| null | CC BY-SA 2.5 | null | 2010-09-23T01:56:37.513 | 2010-09-23T01:56:37.513 | null | null | 521 | null |
3000 | 2 | null | 2988 | 3 | null | A simple question about sample size is: how large a sample is needed to get a 95% confidence interval no longer than 2d for the [unknown] mean of the data distribution? Another variant is: how large a sample is needed to have power 0.9 at $\theta = 1$ when testing $H_0: \theta = 0$? You don't seem to specify any criterion for choosing a sample size.
Actually, it sounds as though your study will be conducted in a sequential fashion. In that case, it may pay to make that an explicit part of the experiment. Sequential sampling can often be more efficient than a fixed sample-size experiment [fewer observations needed, on average].
Farrel: I'm adding this in reply to your comment.
To get at a sample size, one usually specifies some sort of precision criterion for an estimate [such as the length of a CI] OR power at a specified alternative of a test to be carried out on the data. You seem to have mentioned both of these criteria. There is nothing wrong with that, in principle: you just have to then do two sample size calculations - one to achieve the desired estimation precision - and another to get the desired power at the stated alternative. Then the larger of the two sample sizes is what is required. [BTW - other than saying 80% power - you don't seem to have mentioned what test you plan to perform - or the alternative at which you want the 80% power.]
As for using sequential analysis: if subjects are enrolled in the study all at the same time, then a fixed sample size makes sense. But if the subjects are few and far between, it may take a year or two [or more] to get the required number enrolled. Thus the trial could go on for three or four years [or more]. In that case, a sequential scheme offers the possibility of stopping sooner than that - if the effect[s] you are looking for become statistically significant earlier in the trial.
| null | CC BY-SA 2.5 | null | 2010-09-23T02:13:36.460 | 2010-09-26T01:42:35.190 | 2010-09-26T01:42:35.190 | 1112 | 1112 | null |
3001 | 1 | 3002 | null | 7 | 339 | So let's say you have a distribution where X is the 16% quantile. Then you take the log of all the values of the distribution. Would log(X) still be the 16% quantile in the log distribution?
| Log graph question | CC BY-SA 2.5 | null | 2010-09-23T03:57:03.107 | 2010-11-02T13:51:35.310 | 2010-11-02T13:51:35.310 | 8 | 1395 | [
"standard-deviation",
"logarithm",
"quantiles",
"self-study"
] |
3002 | 2 | null | 3001 | 10 | null | Yes. Quantiles can be transformed under any monotonically increasing transformation.
To see this, suppose $Y$ is the random variable and $q_{0.16}$ is the 16% quantile. Then
$$
\text{Pr}(Y\le q_{0.16}) = \text{Pr}(\log(Y)\le\log(q_{0.16})) = 0.16.
$$
Generally, if $f$ is monotonic and increasing then
$$
\text{Pr}(Y\le q_{\alpha}) = \text{Pr}(f(Y)\le f(q_{\alpha})) = \alpha.
$$
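A quick numerical illustration in R (the lognormal sample is an arbitrary positive example; the two quantities agree up to sample-quantile interpolation):
```
set.seed(1)
y <- rlnorm(10000)
log(quantile(y, 0.16))   # log of the 16% quantile on the original scale ...
quantile(log(y), 0.16)   # ... matches the 16% quantile of log(y)
```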
| null | CC BY-SA 2.5 | null | 2010-09-23T04:30:57.217 | 2010-09-24T03:06:00.800 | 2010-09-24T03:06:00.800 | 159 | 159 | null |
3003 | 2 | null | 7 | 2 | null | Peter Skomoroch maintains a list of datasets at [http://www.datawrangling.com/some-datasets-available-on-the-web](http://www.datawrangling.com/some-datasets-available-on-the-web). Many of the links provided are to places that list datasets.
| null | CC BY-SA 2.5 | null | 2010-09-23T06:10:48.023 | 2010-09-23T06:10:48.023 | null | null | 1392 | null |
3004 | 2 | null | 2971 | 3 | null | A lot of people have given excellent answers; here is my $0.02.
There are two ways to look at "best model", or "model selection", speaking statistically:
1. An explanation that is as simple as possible, but no simpler (Attrib. Einstein)
   - This is also called Occam's Razor, as explanation applies here.
   - Have a concept of True model or a model which approximates the truth
   - Explanation is like doing scientific research
2. Prediction is the interest, similar to engineering development.
   - Prediction is the aim, and all that matters is that the model works
   - Model choice should be based on quality of predictions
   - Cf: Ein-Dor, P. & Feldmesser, J. (1987) Attributes of the performance of central processing units: a relative performance prediction model. Communications of the ACM 30, 308–317.
Widespread (mis)conception:
Model Choice is equivalent to choosing the best model
For explanation we ought to be alert to the possibility of there being several (roughly) equally good explanatory models. Simplicity helps both with communicating the concepts embodied in the model and with what psychologists call generalization, the ability to 'work' in scenarios very different from those in which the model was studied. So there is a premium on few models.
For prediction: (Dr Ripley's) good analogy is that of choosing between expert
opinions: if you have access to a large panel of experts, how would you
use their opinions?
Cross Validation takes care of the prediction aspect. For details about CV please refer to this presentation by Dr. B. D. Ripley [Dr. Brian D. Ripley's presentation on model selection](http://www.stats.ox.ac.uk/~ripley/Nelder80.pdf)
Citation: Please note that everything in this answer is from the presentation cited above. I am a big fan of this presentation and I like it. Other opinions may vary. The title of the presentation is: "Selecting Amongst Large Classes of Models" and was given at Symposium in Honour of John Nelder's 80th birthday, Imperial College, 29/30 March 2004, by Dr. Brian D. Ripley.
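For concreteness, here is a minimal k-fold cross-validation sketch in base R (purely illustrative and not taken from the presentation; the data frame `dat` with response `y` and predictors `x1`, `x2` is hypothetical):
```
set.seed(1)
k <- 10
folds <- sample(rep(1:k, length.out = nrow(dat)))     # random fold assignment
cv_mse <- function(form) {
  mean(sapply(1:k, function(i) {
    fit <- lm(form, data = dat[folds != i, ])         # fit on k-1 folds
    mean((dat$y[folds == i] - predict(fit, dat[folds == i, ]))^2)   # error on the held-out fold
  }))
}
cv_mse(y ~ x1)        # simpler candidate model
cv_mse(y ~ x1 + x2)   # richer candidate; for prediction, prefer the lower CV error
```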
| null | CC BY-SA 2.5 | null | 2010-09-23T07:39:23.560 | 2010-09-23T07:39:23.560 | null | null | 1307 | null |
3005 | 2 | null | 2982 | 2 | null | What i'll say holds for regression, but should be true for PLS also. So it's not a bijection because depeding on how much you enforce the constrained in the $l1$, you will have a variety of 'answers' while the second solution admits only $p$ possible answers (where $p$ is the number of variables) <-> there are more solutions in the $l1$ formulation than in the 'truncation' formulation.
| null | CC BY-SA 2.5 | null | 2010-09-23T07:51:46.680 | 2010-09-23T23:33:14.197 | 2010-09-23T23:33:14.197 | 603 | 603 | null |
3006 | 1 | 3010 | null | 9 | 1290 | An anonymous reader posted [the following question on my blog](http://jeromyanglim.blogspot.com/2009/11/tips-for-writing-up-research-in.html?showComment=1285227037363#c3803430820186070755).
Context:
The reader wanted to run a factor analysis on scales from a questionnaire - but the data was from paired husbands and wives.
Question:
- Can factor analysis be run on dyadic data? If so, how?
- Would the independence assumption hold for factor analysis?
| Factor analysis of dyadic data | CC BY-SA 2.5 | null | 2010-09-23T07:58:18.980 | 2020-07-13T03:26:44.700 | 2020-07-13T03:26:44.700 | 11887 | 183 | [
"independence",
"factor-analysis",
"dyadic-data"
] |
3008 | 2 | null | 2988 | 7 | null | I usually find it easier and faster to run a simulation. Papers take a long time to read, to understand and finally come to the conclusion that they don't apply in the special case one is interested in.
Therefore, I would just pick a number of subjects, simulate the covariate you are interested in (distributed as you believe it will be), simulate good/bad outcomes based on the functional form you posit (threshold effects of the covariate? nonlinearity?) with the minimum (clinically) significant effect size you would like to detect, run the result through your analysis and see whether the effect is found at your alpha. Rerun this 10,000 times and look whether you found the effect in 80% of the simulations (or whatever other power you need). Adjust the number of subjects, repeat until you have a power you are happy with.
This has the advantage of being very general, so you are not confined to a specific functional form or a specific number or distribution of covariates. You can include dropouts, see chl's comment above, either at random or influenced by covariate or outcome. You basically code the analysis you are going to do on the final sample beforehand, which sometimes helps focus my thinking on the study design. And it is easily done in R (vectorize!).
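For instance, a minimal sketch of this approach in R (everything here -- the covariate distribution, the baseline log-odds of -1, the effect size of 0.5 and the logistic model -- is an assumption to be replaced by your own choices):
```
set.seed(42)
power_sim <- function(n, beta = 0.5, alpha = 0.05, nsim = 2000) {
  mean(replicate(nsim, {
    x <- rnorm(n)                                # covariate, distributed as you believe it will be
    y <- rbinom(n, 1, plogis(-1 + beta * x))     # good/bad outcome under the posited model
    fit <- glm(y ~ x, family = binomial)
    summary(fit)$coefficients["x", "Pr(>|z|)"] < alpha   # effect found at your alpha?
  }))
}
power_sim(200)   # adjust n until this reaches the power you need (e.g. 0.80)
```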
| null | CC BY-SA 2.5 | null | 2010-09-23T08:16:37.353 | 2010-10-02T20:05:22.870 | 2010-10-02T20:05:22.870 | 1352 | 1352 | null |
3009 | 2 | null | 3006 | 6 | null | Yes, s/he can run a factor analysis on dyadic data. I would start with Kenny et al.'s (2006) "[Dyadic Data Analysis](http://davidakenny.net/kkc/kkc.htm)". It is a great and extremly helpful book!
Another option is "Modeling Dyadic and Interdependent Data in the Developmental and Behavioral Sciences" (Card et al. 2008).
(If your anonymous reader is able to read German, s/he might be interested in this presentation: "[Dyadische Datenanalyse: Lineare Strukturgleichungsmodelle](http://www.mzes.uni-mannheim.de/projekte/pairfam/spring07/Dyadische%20Datenanalyse%20mittels%20SEM.pdf)".)
| null | CC BY-SA 2.5 | null | 2010-09-23T09:07:55.583 | 2010-09-23T09:17:29.330 | 2010-09-23T09:17:29.330 | 307 | 307 | null |
3010 | 2 | null | 3006 | 7 | null | Structural equation models are better suited for this kind of data, e.g. by introducing an extra factor for couple which allows to account for the dependence structure (paired responses). David A. Kenny reviewed the main points for [analysis dyadic data](http://davidakenny.net/dyad.htm); although it doesn't focus on questionnaire analysis, it may help.
A couple of references :
- Olsen, JA and Kenny, DA (2006). Structural Equation Modeling With Interchangeable Dyads. Psychological Methods, 11(2), 127–141.
- McMahon, JM, Pouget, ER, and Tortu, S (2006). A guide for multilevel modeling of dyadic data with binary outcomes using SAS PROC NLMIXED. Comput Stat Data Anal., 50(12), 3663–3680.
- Thompson, L and Walker, AJ (1982). The Dyad as the Unit of Analysis: Conceptual and Methodological Issues. Journal of Marriage and the Family, 889-900.
- Newsom, JT (2002). A multilevel structural equation model for dyadic data. Structural Equation Modeling, 9(3), 441-447.
- González, J, Tuerlinckx, F, and De Boeck, P (2009). Analyzing structural relations in multivariate dyadic binary data. Applied Multivariate Research, 13, 77-92.
- Gill, PS (2005). Bayesian Analysis of Dyadic Data.
For a more thorough description of models for dyadic data (although not restricted to item analysis), I would suggest
- Kenny, DA, Kashy, DA, and Cook, WL (2006). Dyadic Data Analysis. Guilford Press.
- Card, NA, Selig, JP, and Little, TD (2008). Modeling Dyadic and Interdependent Data in the Developmental and Behavioral Sciences. Mahwah, NJ: Lawrence Erlbaum Associates.
| null | CC BY-SA 2.5 | null | 2010-09-23T09:34:29.073 | 2010-09-23T09:42:17.033 | 2010-09-23T09:42:17.033 | 930 | 930 | null |
3011 | 2 | null | 2971 | 5 | null | In a general machine-learning view the answer is fairly simple: we want to build model that will have the highest accuracy when predicting new data (unseen during training). Because we cannot directly test this (we don't have data from the future) we do Monte Carlo simulation of such a test -- and this is basically the idea underneath cross validation.
There may be some issues about what is accuracy (for instance a business client can state that overshoot costs 5€ per unit and undershoot 0.01€ per unit, so it is better to build a less accurate but more undershooting model), but in general it is fairly intuitive per cent of true answers in classification and widely used explained variance in regression.
| null | CC BY-SA 2.5 | null | 2010-09-23T10:31:04.193 | 2010-09-23T10:31:04.193 | null | null | null | null |
3012 | 5 | null | null | 0 | null | null | CC BY-SA 2.5 | null | 2010-09-23T10:59:13.660 | 2010-09-23T10:59:13.660 | 2010-09-23T10:59:13.660 | null | null | null |
|
3013 | 4 | null | null | 0 | null | Model selection is a problem of judging which model from some set performs best. Popular methods include $R^2$, AIC and BIC criteria, test sets, and cross-validation. To some extent, feature selection is a subproblem of model selection. | null | CC BY-SA 3.0 | null | 2010-09-23T10:59:13.660 | 2011-06-15T03:58:08.313 | 2011-06-15T03:58:08.313 | 919 | null | null |
3014 | 5 | null | null | 0 | null | Tag Usage
Clustered-standard-errors and/or cluster-samples should be tagged as such; do not use the "clustering" tag for them. Both these methodologies take clusters as given, rather than discovered.
Overview
Clustering, or cluster analysis, is a statistical technique for uncovering groups of units in multivariate data. It is separate from classification (clustering could be called "classification without a teacher"), as there are no units with known labels, and even the number of clusters is usually unknown and needs to be estimated. Clustering is a key challenge of data mining, in particular when done in large databases.
Although there are many clustering techniques, they fall into several broad classes: hierarchical clustering (in which a hierarchy is built from each unit representing its own cluster up to the whole sample forming one single cluster), centroid-based clustering (in which units are put into the cluster with the nearest centroid), distribution- or model-based clustering (in which clusters are assumed to follow a specific distribution, such as the multivariate Gaussian), and density-based clustering (in which clusters are obtained as the areas of highest estimated density).
References
Consult the following questions for resources on clustering:
- Recommended books or articles as introduction to Cluster Analysis?
- Books about incremental data clustering
- Differences between clustering and segmentation
| null | CC BY-SA 4.0 | null | 2010-09-23T11:06:19.013 | 2020-12-08T14:26:18.850 | 2020-12-08T14:26:18.850 | 11887 | null | null |
3015 | 4 | null | null | 0 | null | Cluster analysis is the task of partitioning data into subsets of objects according to their mutual "similarity," without using preexisting knowledge such as class labels. [Clustered-standard-errors and/or cluster-samples should be tagged as such; do NOT use the "clustering" tag for them.] | null | CC BY-SA 3.0 | null | 2010-09-23T11:06:19.013 | 2016-03-09T10:33:39.543 | 2016-03-09T10:33:39.543 | 3277 | null | null |
3016 | 5 | null | null | 0 | null | Overview
Time series are data observed over time (either in continuous time or at discrete time periods).
Time series analysis includes trend identification, temporal pattern recognition, spectral analysis, and forecasting future values based on the past.
The salient characteristic of methods of time series analysis (as opposed to more general methods to analyze relationships among data) is accounting for the possibility of serial correlation (also known as autocorrelation and temporal correlation) among the data. Positive serial correlation means successive observations in time tend to be close to one another, whereas negative serial correlation means successive observations tend to oscillate between extremes. Time series analysis also differs from analyses of more general stochastic processes by focusing on the inherent direction of time, creating a potential asymmetry between past and future.
References
The following threads contain a list of references on time series:
- Books for self-studying time series analysis?
The following journals are dedicated to researching time series:
- Journal of Time Series Analysis
- Journal of Time Series Econometrics
| null | CC BY-SA 4.0 | null | 2010-09-23T11:14:30.340 | 2020-11-02T13:57:35.530 | 2020-11-02T13:57:35.530 | 53690 | null | null |
3017 | 4 | null | null | 0 | null | Time series are data observed over time (either in continuous time or at discrete time periods). | null | CC BY-SA 2.5 | null | 2010-09-23T11:14:30.340 | 2011-03-11T19:39:55.637 | 2011-03-11T19:39:55.637 | 919 | null | null |
3018 | 5 | null | null | 0 | null | [Hypothesis testing](https://en.wikipedia.org/wiki/Statistical_hypothesis_testing) assesses whether data are inconsistent with a given hypothesis rather than being an effect of random fluctuations.
| null | CC BY-SA 3.0 | null | 2010-09-23T11:28:36.817 | 2017-09-27T18:47:47.197 | 2017-09-27T18:47:47.197 | 7290 | null | null |
3019 | 4 | null | null | 0 | null | Hypothesis testing assesses whether data are inconsistent with a given hypothesis rather than being an effect of random fluctuations. | null | CC BY-SA 3.0 | null | 2010-09-23T11:28:36.817 | 2017-09-27T18:47:47.197 | 2017-09-27T18:47:47.197 | 7290 | null | null |
3022 | 2 | null | 3001 | 1 | null | Yes.
When you say that "X is the 16% quantile", what it means is that 16% of the sample have a lower value than X. The log of any number smaller than X is smaller than log(X) and the log of any number greater than X is greater than log(X), so the ordering is not changed.
| null | CC BY-SA 2.5 | null | 2010-09-23T18:45:20.550 | 2010-09-23T18:45:20.550 | null | null | 666 | null |
3023 | 2 | null | 2686 | 1 | null | A simple solution that does not require the acquisition of specialized knowledge is to use [control charts](http://en.wikipedia.org/wiki/Control_chart). They're ridiculously easy to create and make it easy to tell special cause variation (such as when you are out of town) from common cause variation (such as when you have an actual low-productivity month), which seems to be the kind of information you want.
They also preserve the data. Since you say you'll use the charts for many different purposes, I advise against performing any transformations in the data.
[Here is a gentle introduction](http://rads.stackoverflow.com/amzn/click/0945320531). If you decide that you like control charts, you may want to [dive deeper](http://rads.stackoverflow.com/amzn/click/0945320612) into the subject. The benefits to your business will be huge. Control charts are reputed to have been a major contributor to [the post-war Japanese economic boom](http://en.wikipedia.org/wiki/W._Edwards_Deming).
There is even an [R package](http://cran.r-project.org/web/packages/qcc/).
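A minimal sketch of what that looks like (the monthly figures are made up; `xbar.one` is the chart type for individual observations):
```
library(qcc)
productivity <- c(52, 48, 55, 50, 47, 53, 30, 51, 49, 54)   # hypothetical monthly values
qcc(productivity, type = "xbar.one")   # points beyond the control limits flag
                                       # special-cause months (e.g. being out of town)
```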
| null | CC BY-SA 2.5 | null | 2010-09-23T19:16:36.727 | 2010-09-23T19:16:36.727 | null | null | 666 | null |
3024 | 1 | 3027 | null | 36 | 60577 | I understand that for certain datasets such as voting it performs better. Why is Poisson regression used over ordinary linear regression or logistic regression? What is the mathematical motivation for it?
| Why is Poisson regression used for count data? | CC BY-SA 2.5 | null | 2010-09-23T19:38:40.190 | 2022-08-22T00:39:09.940 | 2013-10-04T02:20:09.050 | 7290 | 1392 | [
"count-data",
"poisson-regression"
] |
3025 | 2 | null | 2971 | 4 | null | Great discussion here, but I think of cross-validation in a different way from the answers thus far (mbq and I are on the same page I think). So, I'll put in my two cents at the risk of muddying the waters...
Cross-validation is a statistical technique for assessing the variability and bias, due to sampling error, in a model's ability to fit and predict data. Thus, "best" would be the model which provides the lowest generalization error, which would be in units of variability and bias. Techniques such as Bayesian and Bootstrap Model Averaging can be used to update a model in an algorithmic way based upon results from the cross validation effort.
[This FAQ](http://www.faqs.org/faqs/ai-faq/neural-nets/part3/section-12.html) provides good information for more context of what informs my opinion.
| null | CC BY-SA 2.5 | null | 2010-09-23T20:19:30.067 | 2010-09-23T20:19:30.067 | null | null | 1080 | null |
3026 | 2 | null | 3024 | 2 | null | My understanding is primarily because counts are always positive and discrete, the Poisson can summarize such data with one parameter. The main catch being that the variance equals the mean.
| null | CC BY-SA 2.5 | null | 2010-09-23T20:28:48.930 | 2010-09-23T20:28:48.930 | null | null | null | null |
3027 | 2 | null | 3024 | 59 | null | [Poisson distributed](http://en.wikipedia.org/wiki/Poisson_distribution) data is intrinsically integer-valued, which makes sense for count data. Ordinary Least Squares (OLS, which you call "linear regression") assumes that true values are [normally distributed](http://en.wikipedia.org/wiki/Normal_distribution) around the expected value and can take any real value, positive or negative, integer or fractional, whatever. Finally, [logistic regression](http://en.wikipedia.org/wiki/Logistic_regression) only works for data that is 0-1-valued (TRUE-FALSE-valued), like "has a disease" versus "doesn't have the disease". Thus, the Poisson distribution makes the most sense for count data.
That said, a normal distribution is often a rather good approximation to a Poisson one for data with a mean above 30 or so. And in a regression framework, where you have predictors influencing the count, an OLS with its normal distribution may be easier to fit and would actually be more general, since the Poisson distribution and regression assume that the mean and the variance are equal, while OLS can deal with unequal means and variances - for a count data model with different means and variances, one could use a [negative binomial distribution](http://en.wikipedia.org/wiki/Negative_binomial_distribution), for instance.
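In R the three options look like this (the data frame `dat`, the count `votes` and the predictors `x1`, `x2` are hypothetical):
```
fit_pois <- glm(votes ~ x1 + x2, family = poisson, data = dat)   # Poisson regression
fit_ols  <- lm(votes ~ x1 + x2, data = dat)                      # OLS on the raw counts
library(MASS)
fit_nb   <- glm.nb(votes ~ x1 + x2, data = dat)                  # negative binomial, for counts
                                                                 # whose variance exceeds the mean
```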
| null | CC BY-SA 2.5 | null | 2010-09-23T20:42:45.613 | 2010-09-23T20:42:45.613 | null | null | 1352 | null |
3028 | 2 | null | 3024 | 26 | null | Essentially, it's because linear and logistic regression make the wrong kinds of assumptions about what count outcomes look like. Imagine your model as a very stupid robot that will relentlessly follow your orders, no matter how nonsensical those orders are; it completely lacks the ability to evaluate what you tell it. If you tell your robot that something like votes is distributed continuously from negative infinity to infinity, that's what it believes votes are like, and it might give you nonsensical predictions (Ross Perot will receive -10.469 votes in the upcoming election).
Conversely, the Poisson distribution is discrete and positive (or zero... zero counts as positive, yes?). At a very minimum, this will force your robot to give you answers that could actually happen in real life. They may or may not be good answers, but they will at least be drawn from the possible set of "number of votes cast".
Of course, the Poisson has its own problems: it assumes that the mean of the vote count variable will also be the same as its variance. I don't know if I've ever actually seen a non-contrived example where this was true. Fortunately, bright people have come up with other distributions that are also positive and discrete, but that add parameters to allow the variance to, er, vary (e.g., negative binomial regression).
| null | CC BY-SA 2.5 | null | 2010-09-23T20:52:15.760 | 2010-09-23T20:52:15.760 | null | null | 71 | null |
3029 | 2 | null | 3024 | 3 | null | Others have basically said the same thing I'm going to but I thought I'd add my take on it. It depends on what you're doing exactly but a lot of times we like to conceptualize the problem/data at hand. This is a slightly different approach compared to just building a model that predicts pretty well. If we are trying to conceptualize what's going on it makes sense to model count data using a non-negative distribution that only puts mass at integer values. We also have many results that essentially boil down to saying that under certain conditions count data really is distributed as a poisson. So if our goal is to conceptualize the problem it really makes sense to use a poisson as the response variable. Others have pointed out other reasons why it's a good idea but if you're really trying to conceptualize the problem and really understand how data that you see could be generated then using a poisson regression makes a lot of sense in some situations.
| null | CC BY-SA 2.5 | null | 2010-09-23T23:10:50.283 | 2010-09-23T23:10:50.283 | null | null | 1028 | null |
3030 | 2 | null | 3024 | 5 | null | Mathematically if you start with the simple assumption that the probability of an event occurring in a defined interval $T = 1$ is $\lambda$ you can show the expected number of events in the interval $T = t$ is is $\lambda.t$, the variance is also $\lambda.t$ and the [probability distribution ](http://www.umass.edu/wsp/statistics/lessons/poisson/derivation.html) is
$$p(N=n) = \frac{(\lambda.t)^{n}e^{-\lambda.t}}{n!}$$
Via this and the [maximum likelihood method](http://en.wikipedia.org/wiki/Poisson_distribution#Maximum_likelihood) & generalised linear models (or some other method) you arrive at [Poisson regression](http://en.wikipedia.org/wiki/Poisson_regression).
In simple terms Poisson Regression is the model that fits the assumptions of the underlying random process generating a small number of events at a rate (i.e. number per unit time) determined by other variables in the model.
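A quick numerical check of that pmf against R's built-in Poisson density (with the arbitrary choice $\lambda t = 2$):
```
lt <- 2; n <- 0:5
all.equal(dpois(n, lt), lt^n * exp(-lt) / factorial(n))   # TRUE
```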
| null | CC BY-SA 2.5 | null | 2010-09-23T23:28:22.660 | 2010-09-23T23:28:22.660 | null | null | 521 | null |
3031 | 1 | null | null | 3 | 1911 | I just loaded a csv file in R. When I ran the `summary` command for one of the columns, I got the following:
```
> Error: unexpected symbol in "summary k_low"
```
I'm pretty sure I know what the 'unexpected symbol' is: for observations in which we had no reliable data for the variable `k_low`, we simply entered a period, or a `.`
Is there a way to escape these observations in R? My boss uses STATA, which apparently escapes periods automatically, but I don't have STATA on my personal computer (and would prefer to use R in any case).
So basically: Is there a way to get R to bypass any 'symbols' and return a summary of only those observations for which numerical data was entered?
| How to escape symbolic value in R | CC BY-SA 2.5 | null | 2010-09-24T02:56:56.010 | 2010-09-24T13:58:47.937 | 2010-09-24T13:58:47.937 | 930 | 1410 | [
"r"
] |
3032 | 2 | null | 3031 | 6 | null | It looks like you just left out the parentheses. Try
```
summary(k_low)
```
| null | CC BY-SA 2.5 | null | 2010-09-24T03:03:32.427 | 2010-09-24T03:03:32.427 | null | null | 159 | null |
3034 | 2 | null | 3031 | 4 | null | Looks like Rob got it right but I'll illustrate how to fix the period problem.
```
> testdata <- c(1, 2, 3, ".")
> testdata
[1] "1" "2" "3" "."
> summary(testdata)
Length Class Mode
4 character character
> #That's not what we want....
> cleandata <- as.numeric(testdata)
Warning message:
NAs introduced by coercion
> cleandata
[1] 1 2 3 NA
> summary(cleandata)
Min. 1st Qu. Median Mean 3rd Qu. Max. NA's
1.0 1.5 2.0 2.0 2.5 3.0 1.0
```
Notice that because the testdata vector had a period in it, everything was converted to characters. When we tell R to convert to numeric, it doesn't know what to do with the periods, so it automatically converts them to NA, which is convenient for what we want to do. Then when you call summary on it, it ignores the NAs for the summary statistics but tells you how many NAs there are in the data.
| null | CC BY-SA 2.5 | null | 2010-09-24T03:13:31.783 | 2010-09-24T03:13:31.783 | null | null | 1028 | null |
3035 | 2 | null | 423 | 142 | null | I just came across this and loved it:

([http://xkcd.com/795/](http://xkcd.com/795/)).
| null | CC BY-SA 3.0 | null | 2010-09-24T04:09:04.750 | 2012-12-15T20:29:26.853 | 2012-12-15T20:29:26.853 | 919 | 253 | null |
3036 | 2 | null | 423 | 102 | null | this too:

| null | CC BY-SA 2.5 | null | 2010-09-24T04:13:25.630 | 2010-09-24T04:13:25.630 | null | null | 253 | null |
3037 | 2 | null | 3031 | 1 | null | Another point is that when you read in the data, it sound as if your "." is really a missing value.
So what you might wish to do when reading the data, is something like this:
k_low <- read.date(..., na.strings = ".")
| null | CC BY-SA 2.5 | null | 2010-09-24T04:16:19.980 | 2010-09-24T04:16:19.980 | null | null | 253 | null |
3038 | 1 | null | null | 41 | 23896 | Imagine you have a study with two groups (e.g., males and females) looking at a numeric dependent variable (e.g., intelligence test scores) and you have the hypothesis that there are no group differences.
Question:
- What is a good way to test whether there are no group differences?
- How would you determine the sample size needed to adequately test for no group differences?
Initial Thoughts:
- It would not be enough to do a standard t-test because a failure to reject the null hypothesis does not mean that the parameter of interest is equal or close to zero; this is particularly the case with small samples.
- I could look at the 95% confidence interval and check that all values are within a sufficiently small range; perhaps plus or minus 0.3 standard deviations.
| How to test hypothesis of no group differences? | CC BY-SA 3.0 | null | 2010-09-24T05:24:00.240 | 2016-10-21T14:44:52.830 | 2016-10-21T14:44:52.830 | 35989 | 183 | [
"hypothesis-testing",
"t-test",
"equivalence",
"tost"
] |
3039 | 2 | null | 3031 | 2 | null | When using [read.table, read.csv or read.delim](http://127.0.0.1:20298/library/utils/html/read.table.html) use:
```
read.table(file,..., na.strings = ".", ...)
```
na.strings - a character vector of strings which are to be interpreted as NA values.
Blank fields are also considered to be missing values in logical,
integer, numeric and complex fields.
| null | CC BY-SA 2.5 | null | 2010-09-24T05:37:32.793 | 2010-09-24T05:37:32.793 | null | null | 521 | null |
3040 | 2 | null | 3038 | 21 | null | I think you are asking about [testing for equivalence](http://web.archive.org/web/20120119090119/http://www.graphpad.com/library/BiostatsSpecial/article_182.htm). Essentially you need to decide how large a difference is acceptable for you to still conclude that the two groups are effectively equivalent. That decision defines the 95% (or other) confidence interval limits, and sample size calculations are made on this basis.
There is a [whole book](http://rads.stackoverflow.com/amzn/click/1584881607) on the topic.
A very common clinical "equivalent" of equivalence tests is a [non-inferiority test/ trial](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2701110/pdf/nihms112245.pdf). In this case you "prefer" one group over the other (an established treatment) and design your test to show that the new treatment is not inferior to the established treatment at some level of statistical evidence.
I think I need to credit [Harvey Motulsky](https://stats.stackexchange.com/users/25/harvey-motulsky) for the [GraphPad.com](http://www.graphpad.com/welcome.htm) site (under ["Library"](http://www.graphpad.com/index.cfm?cmd=library.index)).
| null | CC BY-SA 3.0 | null | 2010-09-24T05:50:39.257 | 2013-04-04T16:34:49.933 | 2017-04-13T12:44:41.967 | -1 | 521 | null |
3044 | 2 | null | 3038 | 13 | null | Following Thylacoleo's answer, I did a little research.
The [equivalence](http://cran.r-project.org/web/packages/equivalence/equivalence.pdf) package in R has the `tost()` function.
See Robinson and Froese (2004), "[Model validation using equivalence tests](http://research.eeescience.utoledo.edu/lees/papers_PDF/Robinson_2004_EcolModell.pdf)", for more info.
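For readers without the package, the two one-sided tests can also be rolled by hand; a sketch (the simulated scores and the 0.3 SD margin are placeholders for your own data and equivalence margin):
```
set.seed(1)
males   <- rnorm(100, mean = 100, sd = 15)    # hypothetical test scores
females <- rnorm(100, mean = 101, sd = 15)
delta   <- 0.3 * sd(c(males, females))        # equivalence margin on the raw scale
p_low   <- t.test(males, females, mu = -delta, alternative = "greater")$p.value
p_high  <- t.test(males, females, mu =  delta, alternative = "less")$p.value
max(p_low, p_high)   # < 0.05 => conclude the group difference lies within +/- delta
```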
| null | CC BY-SA 2.5 | null | 2010-09-24T08:25:03.020 | 2010-09-24T08:25:03.020 | null | null | 183 | null |
3045 | 2 | null | 2746 | 0 | null | A cheap and cheerful approach I've used for testing is to generate m N(0,1) n-vectors V[k] and then use P = d*I + Sum{ V[k]*V[k]'} as an nxn psd matrix. With m < n this will be singular for d=0, and for small d will have high condition number.
| null | CC BY-SA 2.5 | null | 2010-09-24T09:19:30.617 | 2010-09-24T09:19:30.617 | null | null | null | null |
3046 | 2 | null | 3038 | 5 | null | In the medical sciences, it is preferable to use a confidence interval approach as opposed to two one-sided tests (tost). I also recommend graphing the point estimates, CIs, and a priori-determined equivalence margins to make things very clear.
Your question would likely be addressed by such an approach.
The CONSORT guidelines for non-inferiority/equivalence studies are quite useful in this regard.
See Piaggio G, Elbourne DR, Altman DG, Pocock SJ, Evans SJ, and CONSORT Group. Reporting of noninferiority and equivalence randomized trials: an extension of the CONSORT statement. JAMA. 2006, Mar 8;295(10):1152-60. [(Link to full text.)](http://jama.ama-assn.org/cgi/content/full/295/10/1152)
| null | CC BY-SA 2.5 | null | 2010-09-24T09:43:08.590 | 2010-09-24T09:43:08.590 | null | null | 561 | null |
3047 | 2 | null | 3038 | 16 | null | Besides the already mentioned possibility of some kind of equivalence test, of which most of them, to the best of my knowledge, are mostly routed in the good old frequentist tradition, there is the possibility of conducting tests which really provide a quantification of evidence in favor of a null-hyptheses, namely bayesian tests.
An implementation of a bayesian t-test can be found here:
Wetzels, R., Raaijmakers, J. G. W., Jakab, E., & Wagenmakers, E.-J. (2009). [How to quantify support for and against the null hypothesis: A flexible WinBUGS implementation of a default Bayesian t-test.](http://www.ejwagenmakers.com/2009/WetzelsEtAl2009Ttest.pdf) Psychonomic Bulletin & Review, 16, 752-760.
There is also a tutorial on how to do all this in R:
[http://www.ruudwetzels.com/index.php?src=SDtest](http://www.ruudwetzels.com/index.php?src=SDtest)
---
An alternative (and perhaps more modern) approach to a Bayesian t-test is provided (with code) in this paper by Kruschke:
Kruschke, J. K. (2013). [Bayesian estimation supersedes the t test](http://www.indiana.edu/~kruschke/articles/Kruschke2012JEPG.pdf). Journal of Experimental Psychology: General, 142(2), 573–603. doi:10.1037/a0029146
---
All props for this answer (before the addition of Kruschke) should go to my colleague David Kellen. I stole his answer from [this question](https://stats.stackexchange.com/questions/305/when-conducting-a-t-test-why-would-one-prefer-to-assume-or-test-for-equal-varia/804#804).
| null | CC BY-SA 3.0 | null | 2010-09-24T11:39:12.360 | 2013-06-05T18:31:14.217 | 2017-04-13T12:44:37.420 | -1 | 442 | null |
3048 | 1 | 3066 | null | 29 | 12538 | I have a matrix where a(i,j) tells me how many times individual i viewed page j. There are 27K individuals and 95K pages. I would like to have a handful of "dimensions" or "aspects" in the space of pages which would correspond to sets of pages which are often viewed together. My ultimate goal is to then be able to compute how often individual i has viewed pages that fall in dimension 1, dimension 2, etc.
I have read the R documentation on [principal component analysis](http://en.wikibooks.org/wiki/Data_Mining_Algorithms_In_R/Dimensionality_Reduction/Principal_Component_Analysis) and [singular value decomposition](http://en.wikibooks.org/wiki/Data_Mining_Algorithms_In_R/Dimensionality_Reduction/Singular_Value_Decomposition) and have executed these commands, but I am unsure how to proceed.
How can I use dimensionality reduction to do this? Or is this really a clustering problem and I should instead look into clustering algorithms?
Many thanks for any insight
~l
| How to do dimensionality reduction in R | CC BY-SA 2.5 | null | 2010-09-24T11:44:24.637 | 2016-11-21T11:44:06.490 | 2010-09-24T13:19:19.210 | 930 | 1007 | [
"r",
"clustering",
"dimensionality-reduction"
] |
3049 | 2 | null | 3038 | 8 | null | There are a few papers I know of that could be helpful to you:
Tryon, W. W. (2001). [Evaluating statistical difference, equivalence, and indeterminacy using inferential confidence intervals: An integrated alternative method of conducting null hypothesis statistical tests.](http://dx.doi.org/10.1037//1082-989X.6.4.371) Psychological Methods, 6, 371-386. ([FREE PDF](http://www.unt.edu/rss/class/mike/5030/articles/tryonICI.pdf))
And a correction:
Tryon, W. W., & Lewis, C. (2008). [An Inferential Confidence Interval Method of Establishing Statistical Equivalence That Corrects Tryon’s (2001) Reduction Factor.](http://psycnet.apa.org/doi/10.1037/a0013158) Psychological Methods, 13, 272-278. ([FREE PDF](http://www.fordham.edu/academics/programs_at_fordham_/psychology/tryon/pdf/Tryon_%26_Lewis_2008_An_Inferential_Confidence_Interval_Method.pdf))
Furthermore:
Seaman, M. A. & Serlin, R. C. (1998). E[quivalence confidence intervals for two-group comparisons of means](http://psycnet.apa.org/doi/10.1037/1082-989X.3.4.403). Psychological Methods, Vol 3(4), 403-411.
| null | CC BY-SA 2.5 | null | 2010-09-24T11:50:32.963 | 2010-09-24T11:57:15.200 | 2010-09-24T11:57:15.200 | 183 | 442 | null |
3050 | 2 | null | 3048 | 4 | null | It is certainly a clustering problem. Check out Rs `cluster` package to get an overview of algorithm options (`pam` and `agnes` are the best options to start; they represent two main streams in clustering -- [centroids](http://en.wikipedia.org/wiki/K-means_clustering) and [hierarchical](http://en.wikipedia.org/wiki/Hierarchical_clustering)).
The main problem to use clustering on your data is to define a good similarity measure between pages; simple one is to use Manhattan distance; a bit more complex to count the number of common viewers and normalize it with, let's say, mean of number of viewers of the first and second page -- this should silence popularity effects.
EDIT: Ok, now I've saw the data size... it will probably make R explode, since it needs one triangle of $(\text{number of pages})\times(\text{number of pages})$ matrix to store distances. Check out this [report](http://ldc.usb.ve/~mcuriel/Cursos/WC/Transfer.pdf) for possible solutions.
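A tiny sketch of the centroid-based route on a subsample small enough for the distance matrix to fit in memory (the individuals-by-pages matrix `views`, the subsample size and k = 10 clusters are all assumptions):
```
library(cluster)
idx <- sample(ncol(views), 2000)                     # work on a subsample of pages
d   <- dist(t(views[, idx]), method = "manhattan")   # page-by-page dissimilarities
fit <- pam(d, k = 10)                                # k-medoids on the dissimilarity object
table(fit$clustering)                                # cluster sizes
```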
| null | CC BY-SA 3.0 | null | 2010-09-24T12:37:53.703 | 2016-11-21T11:44:06.490 | 2016-11-21T11:44:06.490 | 20833 | null | null |
3051 | 1 | 3054 | null | 25 | 54995 | I have a vector of values that I would like to report the average in windows along a smaller slide.
For example, for a vector of the following values:
```
4, 5, 7, 3, 9, 8
```
A window size of 3 and a slide of 2 would do the following:
```
(4+5+7)/3 = 5.33
(7+3+9)/3 = 6.33
(9+8)/3 = 5.67
```
And return a vector of these values:
```
5.33, 6.33, 5.67
```
Is there a simple function that will do this for me? If it also returned the indices of the window starts that would be an added bonus. In this example that would be 1,3,5
| Mean of a sliding window in R | CC BY-SA 2.5 | null | 2010-09-24T14:41:31.997 | 2021-01-04T13:01:05.197 | 2010-09-24T17:17:56.337 | null | 1024 | [
"r"
] |
3052 | 1 | 3065 | null | 11 | 5479 | I'm examining some genomic coverage data which is basically a long list (a few million values) of integers, each saying how well (or "deep") this position in the genome is covered.
I would like to look for "valleys" in this data, that is, regions which are significantly "lower" than their surrounding environment.
Note that the size of the valleys I'm looking for may range from 50 bases to a few thousand.
What kind of paradigms would you recommend using to find those valleys?
UPDATE
Some graphical examples for the data:


UPDATE 2
Defining what is a valley is of course one of the questions I'm struggling with. These are obvious ones for me:


but there are some more complex situations. In general, there are 3 criteria I consider:
1. The (average? maximal?) coverage in the window with respect to the global average.
2. The (...) coverage in the window with respect to its immediate surrounding.
3. How large is the window: if I see very low coverage for a short span it is interesting, if I see very low coverage for a long span it's also interesting, if I see mildly low coverage for a short span it's not really interesting, but if I see mildly low coverage for a long span - it is. So it's a combination of the length of the span and its coverage. The longer it is, the higher I let the coverage be and still consider it a valley.
Thanks,
Dave
| How to look for valleys in a graph? | CC BY-SA 2.5 | null | 2010-09-24T15:09:52.740 | 2010-09-26T20:15:09.507 | 2010-09-24T18:02:47.390 | 634 | 634 | [
"r",
"distributions",
"statistical-significance",
"data-visualization"
] |
3053 | 2 | null | 3051 | 5 | null | This simple line of code does the thing:
```
((c(x,0,0) + c(0,x,0) + c(0,0,x))/3)[3:(length(x)-1)]
```
if `x` is the vector in question.
| null | CC BY-SA 2.5 | null | 2010-09-24T15:27:13.803 | 2010-09-24T17:16:59.560 | 2010-09-24T17:16:59.560 | null | 1414 | null |
3054 | 2 | null | 3051 | 30 | null | Function `rollapply` in package zoo gets you close:
```
> require(zoo)
> TS <- zoo(c(4, 5, 7, 3, 9, 8))
> rollapply(TS, width = 3, by = 2, FUN = mean, align = "left")
1 3
5.333333 6.333333
```
It just won't compute the last value for you as it doesn't contain 3 observations. Maybe this will be sufficient for your real problem? Also, note that the returned object has the indices you want as the `names` of the returned vector.
Your example is making an assumption that there is an unobserved 0 in the last window. It might be more useful or realistic to pad with an `NA` to represent the missing information and tell `mean` to handle missing values. In this case we will have (8+9)/2 as our final windowed value.
```
> TS <- zoo(c(4, 5, 7, 3, 9, 8, NA))
> rollapply(TS, width = 3, by = 2, FUN = mean, na.rm = TRUE, align = "left")
1 3 5
5.333333 6.333333 8.500000
```
| null | CC BY-SA 2.5 | null | 2010-09-24T15:36:42.200 | 2010-09-25T09:03:09.933 | 2010-09-25T09:03:09.933 | 1390 | 1390 | null |
3055 | 2 | null | 3052 | 2 | null | There are many options for this, but one good one: you can use the `msExtrema` function in the [msProcess package](http://cran.r-project.org/web/packages/msProcess/index.html).
Edit:
In financial performance analysis, this kind of analysis is often performed using a "drawdown" concept. The `PerformanceAnalytics` package has some [useful functions to find these valleys](http://braverock.com/brian/R/PerformanceAnalytics/html/findDrawdowns.html). You could use the same algorithm here if you treat your observations as a time series.
Here are some examples of how you might be able to apply this to your data (the "dates" are irrelevant here and just used for ordering); the first argument to `zoo` would be your data:
```
library(PerformanceAnalytics)
x <- zoo(cumsum(rnorm(50)), as.Date(1:50))
findDrawdowns(x)
table.Drawdowns(x)
chart.Drawdown(x)
```
| null | CC BY-SA 2.5 | null | 2010-09-24T15:43:45.877 | 2010-09-24T16:36:44.277 | 2010-09-24T16:36:44.277 | 5 | 5 | null |
3056 | 2 | null | 2909 | 2 | null | You might start by looking at the [drawdown distribution functions in fBasics](http://help.rmetrics.org/fBasics/html/stats-maxdd.html). So you could easily simulate the brownian motion with drift and apply these functions as a start.
| null | CC BY-SA 2.5 | null | 2010-09-24T16:03:15.173 | 2010-10-28T16:01:23.113 | 2010-10-28T16:01:23.113 | 5 | 5 | null |
3057 | 2 | null | 3052 | 4 | null | I'm completely ignorant of these data, but assuming the data are ordered (not in time, but by position?) it makes sense to make use of time series methods. There are lots of methods for identifying temporal clusters in data. Generally they are used to find high values but can be used for low values grouped together. I'm thinking here of scan statistics, cumulative sum statistics (and others) used to detect disease outbreaks in count data. Examples of these methods are in the surveillance package and the DCluster package.
| null | CC BY-SA 2.5 | null | 2010-09-24T16:32:47.110 | 2010-09-24T16:32:47.110 | null | null | null | null |
3058 | 2 | null | 3051 | 1 | null | This will get you the window means and the index of the first value of the window:
```
#The data
x <- c(4, 5, 7, 3, 9, 8)
#Set window size and slide
win.size <- 3
slide <- 2
#Set up the table of results
results <- data.frame(index = numeric(), win.mean = numeric())
#i indexes the first value of the window (the sill?)
i <- 1
#j indexes the row of the results to be added next
j <- 1
while(i < length(x)) {
#This mean preserves the denominator of 3
win.mean <- sum(x[i:(i+2)], na.rm = TRUE)/win.size
#Insert the results
results[j, ] <- c(i, win.mean)
#Increment the indices for the next pass
i <- i + slide
j <- j + 1
}
```
Various caveats apply: haven't tested this against anything but your sample data; I believe that appending to data frames like this can get really slow if you have lots of values (because it'll copy the data.frame each time); etc. But it does produce what you asked for.
| null | CC BY-SA 2.5 | null | 2010-09-24T16:40:33.070 | 2010-09-24T16:40:33.070 | null | null | 71 | null |
3059 | 1 | null | null | 7 | 480 | I'm investigating the effect of a continuous variable A on a measurement variable M stratified by another factor variable C in an observational dataset.
Due to heteroscedasticity I decided to use a bootstrapped regression analysis. However, looking at the data, the background variables are not evenly distributed when I dichotomise A (present or not). I've just finished running another analysis where I do the same thing after having matched the dataset on confounders (using CEM in R).
Now the problem is which analysis to trust: the bootstrapped regression approach on the entire dataset or the bootstrapped version of the matched data? Under one of the factors in C the results diverge.
Any ideas how this can be analyzed?
| Bootstrapped regression with total data or bootstrap with matched data? | CC BY-SA 2.5 | null | 2010-09-24T17:04:59.187 | 2010-09-27T00:25:43.633 | 2010-09-27T00:25:43.633 | 159 | 1291 | [
"bootstrap"
] |