Columns (one field per line in each record below): Id, PostTypeId, AcceptedAnswerId, ParentId, Score, ViewCount, Body, Title, ContentLicense, FavoriteCount, CreationDate, LastActivityDate, LastEditDate, LastEditorUserId, OwnerUserId, Tags
3707
1
3714
null
5
1532
I plotted below the standardized results of: - RAND() - RAND() * RAND() - ... - RAND() * RAND() * RAND() * RAND() * RAND() * RAND() It seems that the results are approaching zero. Is that because you're multiplying a bunch of numbers close to zero together, or is there another explanation? ![alt text](https://i.stack.imgur.com/5asaM.jpg) EDIT There is no real motivation behind this. Working on a spreadsheet earlier today, I was in a situation where I was multiplying a constant * Rand() * Rand() ... each Rand() corresponded to a different variable (column), so I wanted to find out what goes on if you multiply a bunch of random numbers together.
Difference between Excel's RAND(), RAND()*RAND(), etc
CC BY-SA 2.5
null
2010-10-18T15:10:23.190
2010-10-19T10:10:46.127
2010-10-18T17:43:41.720
449
59
[ "probability", "uniform-distribution", "monte-carlo", "excel", "central-limit-theorem" ]
3708
1
3720
null
7
1494
I have a data set of 100 geographic regions for which the unemployment rate has been observed over the last 9 years. Now, I want to simulate/forecast from these data the next year's unemployment rate for all 100 regions. What would be a suitable approach? I was thinking of using an auto-regressive mixed-effect model, but I fear that I do not have enough data... Regards
Forecasting unemployment rate
CC BY-SA 2.5
null
2010-10-18T15:12:10.027
2010-10-19T15:23:18.643
2010-10-18T15:36:51.290
449
1443
[ "time-series", "markov-chain-montecarlo", "mixed-model", "forecasting" ]
3710
2
null
3707
5
null
"In Excel, the Rand function returns a random number that is greater than or equal to 0 and less than 1. The Rand function returns a new random number each time your spreadsheet recalculates." -http://www.techonthenet.com/excel/formulas/rand.php Because RAND() is always less than one and greater than zero, multiplying it by itself will make it smaller. As you do that over and over, you will get closer to zero. If you want something that gives you a random number between 0 and a, you can do a*RAND() instead.
null
CC BY-SA 2.5
null
2010-10-18T15:18:11.330
2010-10-18T15:18:11.330
null
null
1118
null
3711
2
null
3707
3
null
I am not sure why your graph has values from -2 to 4, but for what it is worth, here is the answer to the text of your question: Suppose that $U \sim U[0,1]$. Then the cdf of $U$ is given by $F(u) = u$ for $u \in (0,1)$ (0 below, 1 above). When you multiply different iid realizations of the random draws you are essentially computing the following: $Y = U^n$ where $n$ is the number of times you are multiplying the random draws. Thus, the corresponding cdf is: $F(y) = P(Y \le y) = P(U^n \le y) = P(U \le y^{1/n}) = y^{1/n}$ for $y \in (0,1)$ (0 below, 1 above). The above cdf of $Y$ converges to a [dirac-delta](http://en.wikipedia.org/wiki/Dirac_delta_function) function at $Y=0$ as $n \rightarrow \infty$. Thus, $E(Y) \rightarrow 0$ as $n \rightarrow \infty$. The above convergence is also related to [first-order stochastic dominance](http://en.wikipedia.org/wiki/Stochastic_dominance#First-order_stochastic_dominance) in the following sense: Suppose that $n_1 > n_2$. Then, it is the case that: $F(y|n_1) \ge F(y|n_2)$ Intuitively, in visual terms, as $n$ increases the cdf of $Y$ shifts upward: the pdf associated with $Y$ concentrates at the lower end of the interval $[0,1]$, and asymptotically all of the mass concentrates at 0, which explains the observed behavior. General Case @whuber's comment to this answer gives the solution when $Y$ is the product of $n$ independent, different random variables drawn from [0,1].
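A quick simulation (my own addition, not part of the original answer) that checks the $F(y)=y^{1/n}$ result and the drift of the mean toward zero:
```
set.seed(1)
n <- 6                      # number of factors in the product
u <- runif(1e5)             # U ~ U[0,1]
y <- u^n                    # Y = U^n
ys <- seq(0, 1, by = 0.01)
# Empirical cdf of Y versus the theoretical cdf F(y) = y^(1/n)
plot(ys, ecdf(y)(ys), type = "l", xlab = "y", ylab = "F(y)")
lines(ys, ys^(1/n), col = "red", lty = 2)   # theoretical cdf
mean(y)                     # close to 1/(n+1), heading to 0 as n grows
```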
null
CC BY-SA 2.5
null
2010-10-18T15:29:33.363
2010-10-18T19:01:39.267
2010-10-18T19:01:39.267
null
null
null
3712
2
null
3704
4
null
The impact of treatment depends upon quintile status. After running the 'regress' command, type 'lincom treat + treatXquin2' to get the impact of treatment for those in quintile 2. This gives a confidence interval and standard error for this estimate. Note that, as is typical in Stata, using the names of the variables in a post-estimation step refers to the coefficients on those variables, not the variables themselves.
null
CC BY-SA 2.5
null
2010-10-18T15:47:22.743
2010-10-18T22:08:54.650
2010-10-18T22:08:54.650
401
401
null
3713
1
3716
null
83
50779
When using cluster analysis on a data set to group similar cases, one needs to choose among a large number of clustering methods and measures of distance. Sometimes, one choice might influence the other, but there are many possible combinations of methods. Does anyone have any recommendations on how to choose among the various clustering algorithms / methods and distance measures? How is this related to the nature of the variables (e.g., categorical or numerical) and the clustering problem? Is there an optimal technique?
Choosing a clustering method
CC BY-SA 3.0
null
2010-10-18T15:58:40.990
2017-08-26T18:46:30.470
2017-03-21T13:05:38.060
7290
485
[ "clustering", "distance-functions", "methodology" ]
3714
2
null
3707
6
null
Standardization is good, but it's not the right standardization for this situation. It helps to see that multiplying values of RAND() is the same as adding their logarithms (followed by a subsequent exponentiation). Because the different calls to RAND() are supposed to be independent, those logarithms are still independently distributed. As a simple calculation shows, their common distribution actually has a mean and variance. (In fact, its negative is an exponential distribution.) The Central Limit Theorem applies. It says that the logs, suitably standardized, converge to a normal distribution. We conclude that these products--standardized to have a constant geometric mean and constant geometric variance--are converging to the exponential of a normally distributed variable: that is, a [lognormal distribution](http://en.wikipedia.org/wiki/Log-normal_distribution).
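A minimal R sketch (my own illustration, not whuber's) of the argument: the log of the product is a sum of iid terms, so it is approximately normal, which makes the product itself approximately lognormal.
```
set.seed(1)
k <- 30                                   # number of uniform factors
prod_u <- replicate(1e4, prod(runif(k)))  # product of k independent U(0,1) draws
log_p <- log(prod_u)                      # sum of k iid logs: the CLT applies
# -log(U) is Exponential(1), so log_p has mean -k and variance k
qqnorm(scale(log_p)); qqline(scale(log_p))   # roughly normal => product ~ lognormal
```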
null
CC BY-SA 2.5
null
2010-10-18T16:00:46.523
2010-10-18T16:06:05.273
2010-10-18T16:06:05.273
919
919
null
3715
1
null
null
6
627
I'd like to use Bayes' Theorem on data obtained through a small random sample, and I want to use Agresti-Coull (or any other alternative technique) to know how big the uncertainty is. Here is Bayes' Theorem: $P(A|B) = \frac{P(B|A)\cdot P(A)}{P(B)}$ Now, all the data I have on this system is obtained from small random samples, so there's a large uncertainty involved with all three variables, $P(B|A)$, $P(A)$ and $P(B)$. I've been using Agresti-Coull to obtain both the value and the uncertainty for each of these three variables. (I represent the `number+-uncertainty` as a [ufloat](http://packages.python.org/uncertainties/#an-easy-to-use-calculator) object using the [uncertainties](http://packages.python.org/uncertainties/) package.) But using Agresti-Coull three times separately for these three variables is a problem; they are dependent on each other, so I've been getting impossible results. For example, if you let $P(B)$'s uncertainty pull it downward, and the respective uncertainties of $P(B|A)$ and $P(A)$ pull them upwards, you get a total probability bigger than one. Is there a way to do an Agresti-Coull-style approximation on the whole Bayes expression instead of doing it on the three pieces separately?
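For reference, a minimal R sketch of the Agresti-Coull interval for a single proportion (the helper name and inputs are illustrative); it does not resolve the dependence problem asked about, it just shows the ingredient being applied three times:
```
# Agresti-Coull interval for one binomial proportion
agresti_coull <- function(x, n, conf = 0.95) {
  z <- qnorm(1 - (1 - conf) / 2)
  n_tilde <- n + z^2
  p_tilde <- (x + z^2 / 2) / n_tilde
  half <- z * sqrt(p_tilde * (1 - p_tilde) / n_tilde)
  c(estimate = p_tilde, lower = p_tilde - half, upper = p_tilde + half)
}
agresti_coull(x = 3, n = 20)   # e.g. 3 "successes" in a sample of 20
```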
Bayes' Theorem and Agresti-Coull: Will it blend?
CC BY-SA 2.5
null
2010-10-18T17:19:09.747
2022-05-07T23:15:21.417
2010-10-18T17:57:01.283
449
5793
[ "bayesian", "python", "approximation", "uncertainty" ]
3716
2
null
3713
47
null
There is no definitive answer to your question, as even within the same method the choice of the distance used to represent individuals' (dis)similarity may yield different results, e.g. when using euclidean vs. squared euclidean distance in hierarchical clustering. As another example, for binary data, you can choose the Jaccard index as a measure of similarity and proceed with classical hierarchical clustering; but there are alternative approaches, like the Mona ([Monothetic Analysis](http://stat.ethz.ch/R-manual/R-devel/library/cluster/html/mona.html)) algorithm which only considers one variable at a time, while other hierarchical approaches (e.g. classical HC, Agnes, Diana) use all variables at each step. The k-means approach has been extended in various ways, including partitioning around medoids (PAM) or representative objects rather than centroids (Kaufman and Rousseeuw, 1990), or fuzzy clustering (Chung and Lee, 1992). For instance, the main difference between k-means and PAM is that PAM minimizes a sum of dissimilarities rather than a sum of squared euclidean distances; fuzzy clustering allows one to consider "partial membership" (we associate to each observation a weight reflecting class membership). And for methods relying on a probabilistic framework, or so-called model-based clustering (or [latent profile analysis](http://spitswww.uvt.nl/~vermunt/ermss2004f.pdf) for the psychometricians), there is a great package: [Mclust](http://www.stat.washington.edu/mclust/). So, definitely, you need to consider how to define the resemblance of individuals as well as the method for linking individuals together (recursive or iterative clustering, strict or fuzzy class membership, unsupervised or semi-supervised approach, etc.). Usually, to assess cluster stability, it is interesting to compare several algorithms which basically "share" some similarity (e.g. k-means and hierarchical clustering, because euclidean distance works for both). For assessing the concordance between two cluster solutions, some pointers were suggested in response to this question, [Where to cut a dendrogram?](https://stats.stackexchange.com/questions/3685/where-to-cut-a-dendrogram) (see also the cross-references for other links on this website). If you are using R, you will see that several packages are already available in the Task View on Cluster Analysis, and several packages include vignettes that explain specific methods or provide case studies. [Cluster Analysis: Basic Concepts and Algorithms](http://www-users.cs.umn.edu/~kumar/dmbook/ch8.pdf) provides a good overview of several techniques used in cluster analysis. As for a good recent book with R illustrations, I would recommend chapter 12 of Izenman, Modern Multivariate Statistical Techniques (Springer, 2008). A couple of other standard references are given below: - Cormack, R., 1971. A review of classification. Journal of the Royal Statistical Society, A 134, 321–367. - Everitt, B., 1974. Cluster analysis. London: Heinemann Educ. Books. - Gordon, A., 1987. A review of hierarchical classification. Journal of the Royal Statistical Society, A 150, 119–137. - Gordon, A., 1999. Classification, 2nd Edition. Chapman and Hall. - Kaufman, L., Rousseeuw, P., 1990. Finding Groups in Data: An Introduction to Cluster Analysis. New York, Wiley.
null
CC BY-SA 2.5
null
2010-10-18T17:24:05.970
2010-11-09T16:38:14.217
2017-04-13T12:44:51.060
-1
930
null
3717
2
null
3708
3
null
Given the nature of your data I would suggest you investigate the use of [exponential smoothing](http://en.wikipedia.org/wiki/Exponential_smoothing) as well as fitting ARIMA-type models, especially due to the temporal constraints within your data. Although I wouldn't doubt spatial dependencies exist, I would be a bit skeptical about their usefulness in forecasting (in what I would imagine are fairly large areas), especially since any spatial dependency will likely already be captured (at least to a certain extent) in previous observations in the series. Where the spatial dependencies may be helpful is if you have small-area estimation problems, and you can use the spatial dependency in your data to help smooth out your estimations in those noisy geographic regions. This may not be a problem though since you have aggregated data for a full year. You shouldn't take my word for it though, and should investigate the economics literature on the subject and assess various forecasting methods yourself. It's quite possible other variables are useful predictors of future unemployment in similar panel settings. Edit: First I'd like to clarify that I did not mean that the OP should simply prefer some type of exponential smoothing over other techniques. I think the OP should assess the performance of various forecasting methods using a hold-out sample of 1 or 2 time periods. I do not know the literature for forecasting unemployment, but I have not seen any method so obviously superior that others should be dismissed outright in any context. Kwak mentions a key point I did not consider initially (and Stephan's comment makes the same point very succinctly as well). The panel nature of the data allows one to estimate an auto-regressive component in the model much more easily than in a single time series. So I would follow his suggestion and consider the A/B estimator a good bet to provide the best forecast accuracy. I'm still sticking with my initial suggestion though that I am skeptical of the usefulness of the spatial dependence, and one should assess a model's predictive accuracy with and without the spatial component. In terms of prediction it is not simply whether some sort of spatial auto-correlation exists, it is whether that spatial auto-correlation is useful in predicting future values independent of past observations in the series. To simplify my reasoning, let: $R_{t}$ correspond to a geographic region $R$ at time $t$; $R_{t-1}$ correspond to a geographic region $R$ at the previous time period; $W_{t-1}$ correspond to however one wants to define the spatial relationship for the neighbors of $R_{t}$ at the previous time period. In this case $R$ is some attribute and $W$ is that same attribute in the neighbors of $R$ (i.e. an endogenous spatial lag). In pretty much all cases of lattice areal data, we have a relationship between $R$ and $W$. Two general explanations for this relationship are: 1) The general social process theory. This is when there are processes that affect $R$ and $W$ simultaneously and result in similar values with some sort of spatial organization. The support of the data does not distinguish between the forces that shape attributes in a broader scope than the areal units encompass. (I imagine there is a better name for this, so if someone could help me out.) 2) The spatial externalities theory. This is when some attribute of $W$ directly affects an attribute of $R$. Srikant's example of job diffusion is an example of this.
In the context of forecasting, the general social process model may not be all that helpful. In this case, $R_{t-1}$ and $W_{t-1}$ are reflective of the same external shocks, and so $W_{t-1}$ is less likely to have exogenous power to predict $R_{t}$ independent of $R_{t-1}$. In the spatial externalities case, IMO, I would expect $W_{t-1}$ to have a greater potential to forecast $R_{t}$ independent of $R_{t-1}$ in the short run, because $R_{t-1}$ and $W_{t-1}$ can be reflective of different external shocks to the system. This is my opinion though, and you typically can't distinguish between the general social process model and the spatial externalities model through empirical means in a cross-sectional design (they are probably both occurring to a certain extent in many contexts). Hence I would attempt to validate its usefulness before simply incorporating it into the forecast. Better knowledge of the literature and social processes would definitely be helpful here to guide your model building. In criminology the externalities model makes sense only in a very limited set of circumstances (but I imagine it is more likely in economics data). Models of spatial hedonic housing prices often show very strong spatial effects, and in that context I would expect the spatial component to have a strong ability to forecast housing prices. (I like Luc Anselin's explanation of these two different processes better than mine in [this](http://dx.doi.org/10.1007/978-3-7908-2070-6_2) paper, PDF [here](https://www.ace.uiuc.edu/reap/SPARC/2007-01_AnselinLozano.pdf).) Often how we define $W$ is a further problem in this setting. Most conceptions of $W$ are very simplistic and probably aren't entirely reflective of real geographic processes. Here kwak's suggestion of adding a random component to the $W$ effect for each $R$ makes a lot of sense. An example would be that we would expect New York City to influence its neighbors, but we wouldn't expect NYC's neighbors to have all that much influence on NYC. This still doesn't solve either how to decide what is a neighbor or how to best represent the effects of neighbors. What kwak suggests is essentially a local version of Geary's C (spatial differences); local Moran's I (spatial averages) is a common approach as well. I'm still a little surprised at the negative responses to my suggestion to use simpler smoothing methods (even if they are meant for univariate time series). Am I naive to think exponential smoothing or some other type of moving-window technique would perform comparably enough to more complicated procedures to be worth assessing? I would be more worried if the series were such that we would expect seasonal components, but that is not the case here.
null
CC BY-SA 2.5
null
2010-10-18T18:36:53.263
2010-10-19T15:23:18.643
2010-10-19T15:23:18.643
1036
1036
null
3718
2
null
3707
1
null
There is no mysterious reason. If you multiply a bunch of numbers between 0 and 1, the result will necessarily be close to 0. The average result for RAND()*RAND()*RAND()*RAND()*RAND()*RAND() should be something close to (0.5^6), that is, 0.015625. Be careful using Excel's RAND() function, though. It's not the best random number generator in the world.
null
CC BY-SA 2.5
null
2010-10-18T19:09:55.590
2010-10-18T19:18:41.650
2010-10-18T19:18:41.650
666
666
null
3719
1
3721
null
2
1832
How is it possible to calculate Pearson/Spearman correlation of more than two judges in MATLAB? Thanks.
Is it possible to calculate Pearson/Spearman correlation of more than two judges in MATLAB?
CC BY-SA 2.5
null
2010-10-18T19:10:41.067
2011-05-08T17:46:41.447
2010-10-18T19:13:44.903
449
1564
[ "correlation", "matlab" ]
3720
2
null
3708
6
null
The Arellano-Bond estimator has been designed for precisely this type of problem. You will find a short non-technical paper with examples [here](http://www.cemmap.ac.uk/wps/cwp0209.pdf). In a nutshell, it combines the information embedded in the large number of cross-sections to make up for the small number of points in each series. This estimator is widely used and implemented: it is available in the default gretl [package](http://gretl.sourceforge.net/), but also in Stata via the [XTABOND2](http://ideas.repec.org/c/boc/bocode/s435901.html) package and in R too, via the plm [package](http://cran.r-project.org/web/packages/plm/vignettes/plm.pdf) (you should easily find a large number of papers using it). EDIT: Given that spatial correlation may indeed be informative (see Andy's post), I would advise adding a variable: $s_{it} = u_{it} - \bar{u}_{-it}$ where $u_{it}$ is (possibly the $\log()$ of) the unemployment rate of region $i$ at time $t$ and $\bar{u}_{-it}$ its average value among the $k$ geographical neighbors of region $i$ (excluding region $i$). I would advise trying different values of $k$ until small changes in $k$ do not affect the estimation end-result/conclusions. Then, for efficient and consistent estimation of $\beta_s$ (the coefficient associated with the variable $s$) I would use OLS for the main effect and allow for a random component in the error terms to account for inter-regional heterogeneity in $\beta_s$; thereby leveraging the fact that the R package plm allows combining GMM (i.e. Arellano-Bond) and random-effect coefficients. Concerning Andy W's remark: you could read [these](http://www.econbrowser.com/archives/2008/04/regional_propag.html) [two](http://www.econbrowser.com/archives/2008/03/recessions_at_t.html) documents for a non-technical summary. The full paper version is [here](http://ideas.repec.org/p/fip/fedlwp/2009-013.html). Note the reliance on both a large number of cross-sections and the time dimension. PS: Thanks @Srikant. I think I get it now :)
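As a rough illustration of the EDIT above, here is a minimal R sketch (not from the original answer) of constructing $s_{it}$ from a long-format panel; the column names and the `neighbours` list are assumptions for illustration only.
```
# panel: one row per region-year, with columns region, year, u (unemployment rate)
# neighbours: a named list giving, for each region, its k neighbouring regions
make_s <- function(panel, neighbours) {
  panel$s <- NA_real_
  for (i in seq_len(nrow(panel))) {
    nb <- neighbours[[panel$region[i]]]
    same_year <- panel$year == panel$year[i] & panel$region %in% nb
    panel$s[i] <- panel$u[i] - mean(panel$u[same_year])   # u_it minus neighbour average
  }
  panel
}
```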
null
CC BY-SA 2.5
null
2010-10-18T19:19:01.650
2010-10-19T15:03:26.300
2010-10-19T15:03:26.300
603
603
null
3721
2
null
3719
3
null
Store the evaluations of the judges in a matrix X where each column represents one judge and the rows are the corresponding evaluations. Then `corr(X)` gives the required pairwise correlation matrix. From Matlab's help: > RHO = corr(X) returns a p-by-p matrix containing the pairwise linear correlation coefficient between each pair of columns in the n-by-p matrix X. Is this what you want?
null
CC BY-SA 2.5
null
2010-10-18T19:37:48.260
2010-10-18T19:37:48.260
null
null
null
null
3722
2
null
3715
0
null
[Brown, Cai, and DasGupta, AS, 2002](https://projecteuclid.org/journals/annals-of-statistics/volume-30/issue-1/Confidence-Intervals-for-a-binomial-proportion-and-asymptotic-expansions/10.1214/aos/1015362189.full) [Brown, Cai, and DasGupta, Stat Sci, 2001](https://projecteuclid.org/journals/statistical-science/volume-16/issue-2/Interval-Estimation-for-a-Binomial-Proportion/10.1214/ss/1009213286.full) I don't know if I understand you correctly, but to my knowledge the above two papers are the most cited recent ones when it comes to confidence intervals and estimation for binomial proportions. Sorry if this is not what you wanted.
null
CC BY-SA 4.0
null
2010-10-18T19:53:58.210
2022-05-07T23:15:21.417
2022-05-07T23:15:21.417
79696
1307
null
3723
2
null
3715
3
null
[Error propagation](http://en.wikipedia.org/wiki/Error_propagation) won't work in the way handled by the uncertainties package. As you note, they're dependent, so you have to take the covariances into account. You can obtain the variance of your distribution P(B|A) using the [Delta Method](http://en.wikipedia.org/wiki/Delta_method) and use that to obtain a confidence interval. With Bayesian inference, you might find it simpler to use a [credible interval](http://en.wikipedia.org/wiki/Credible_interval). The following slides do a good job of explaining how to obtain this: - Bayesian analysis of one, two, and n-parameter models - A Brief Tutorial on Bayesian Thinking
null
CC BY-SA 2.5
null
2010-10-18T19:57:08.563
2010-10-18T19:57:08.563
null
null
251
null
3724
1
3725
null
9
7424
According to Microsoft Excel Help: > VAR uses the following formula: where x is the sample mean AVERAGE(number1,number2,…) and n is the sample size. Shouldn't it be n, rather than n - 1, in the denominator?
Microsoft Excel formula for variance
CC BY-SA 2.5
null
2010-10-18T20:54:08.583
2010-10-23T06:00:07.847
2020-06-11T14:32:37.003
-1
1618
[ "variance", "excel", "unbiased-estimator" ]
3725
2
null
3724
9
null
Use VARP for the variance you want ("population variance"). VAR is the unbiased estimator for a normally distributed population.
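For comparison, the same distinction in R (a small illustrative sketch): `var()` uses the $n-1$ denominator, and rescaling gives the population version.
```
x <- c(2, 4, 4, 4, 5, 5, 7, 9)
n <- length(x)
var(x)                    # sample variance, denominator n - 1 (Excel's VAR)
var(x) * (n - 1) / n      # population variance, denominator n (Excel's VARP)
mean((x - mean(x))^2)     # the same population variance, computed directly
```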
null
CC BY-SA 2.5
null
2010-10-18T21:01:57.917
2010-10-18T21:01:57.917
null
null
919
null
3726
2
null
1224
3
null
Are you calculating your chi-square statistic by squaring the difference between the logHRs and dividing by the variance of this difference? If so, that sounds absolutely fine to me. With only two centres, perhaps I wouldn't think of this as a test for (between-centre) heterogeneity - I might call it a test for interaction with centre, or a test for a between-centre difference in the effect (i.e. in the hazard ratio in this case). That's just terminology though. One thing that probably wouldn't be appropriate would be to estimate the between-centre variance, as with only two centres you really don't have enough information for a meaningful estimate. Another possible mistake to avoid: if the result of the test is 'not statistically significant', do not drop the centre effects, i.e. do still stratify by centre. Even if the HRs are the same in the two centres there could still be confounding by centre, i.e. the pooled HR could still be different from both. And even if that's not a 'significant' difference, avoiding any possibility of confounding by centre characteristics is always worth the loss of 1 degree of freedom. (In any case, it's usually a mistake to decide the method of analysis based on the results of statistical tests, an approach sometimes called 'data snooping' - see [this earlier topic](https://stats.stackexchange.com/q/499/449)). (Note that I'm not suggesting the questioner was going to do either of those things, I'm just taking the opportunity to be particularly pedagogical.)
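A minimal R sketch of the test described in the first sentence, with invented log hazard ratios and standard errors purely for illustration:
```
# Illustrative numbers only: log hazard ratios and their standard errors by centre
logHR <- c(centre1 = log(1.8), centre2 = log(1.3))
se    <- c(centre1 = 0.25,     centre2 = 0.30)
chisq <- (logHR[1] - logHR[2])^2 / (se[1]^2 + se[2]^2)   # variance of the difference
pchisq(chisq, df = 1, lower.tail = FALSE)                # p-value on 1 df
```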
null
CC BY-SA 2.5
null
2010-10-18T21:44:50.017
2010-10-19T06:44:37.850
2017-04-13T12:44:52.660
-1
449
null
3727
1
3789
null
11
284
What's the best way to approximate $Pr[n \leq X \leq m]$ for two given integers $m,n$ when you know the mean $\mu$, variance $\sigma^2$, skewness $\gamma_1$ and excess kurtosis $\gamma_2$ of a discrete distribution $X$, and it is clear from the (non-zero) measures of shape $\gamma_1$ and $\gamma_2$ that a normal approximation is not appropriate? Ordinarily, I would use a normal approximation with integer correction... $Pr[(n - \text{½})\leq X \leq (m + \text{½})] = Pr[\frac{(n - \text{½})-\mu}{\sigma}\leq Z \leq \frac{(m + \text{½})-\mu}{\sigma}] = \Phi(\frac{(m + \text{½})-\mu}{\sigma}) - \Phi(\frac{(n - \text{½})-\mu}{\sigma})$ ...if the skewness and excess kurtosis were (closer to) 0, but that's not the case here. I have to perform multiple approximations for different discrete distributions with different values of $\gamma_1$ and $\gamma_2$. So I'm interested in finding out if there is an established procedure that uses $\gamma_1$ and $\gamma_2$ to select a better approximation than the normal approximation.
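For reference, the ordinary normal approximation with continuity correction written out in R (a better approximation of the kind being asked about would replace the `pnorm` calls); the example values are arbitrary:
```
# Normal approximation with integer (continuity) correction, as in the question
approx_prob <- function(n, m, mu, sigma) {
  pnorm((m + 0.5 - mu) / sigma) - pnorm((n - 0.5 - mu) / sigma)
}
approx_prob(n = 3, m = 7, mu = 5, sigma = 1.6)   # purely illustrative values
```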
Approximating $Pr[n \leq X \leq m]$ for a discrete distribution
CC BY-SA 2.5
null
2010-10-18T22:19:27.650
2017-06-06T00:56:11.673
2017-06-06T00:56:11.673
11887
null
[ "probability", "distributions", "moments", "approximation", "saddlepoint-approximation" ]
3728
2
null
499
10
null
Variable selection techniques, in general (whether stepwise, backward, forward, all subsets, AIC, etc.), capitalize on chance or random patterns in the sample data that do not exist in the population. The technical term for this is over-fitting and it is especially problematic with small datasets, though it is not exclusive to them. By using a procedure that selects variables based on best fit, all of the random variation that looks like fit in this particular sample contributes to estimates and standard errors. This is a problem for both prediction and interpretation of the model. Specifically, r-squared is too high and parameter estimates are biased (they are too far from 0), standard errors for parameters are too small (and thus p-values and intervals around parameters are too small/narrow). The best line of defense against these problems is to build models thoughtfully and include the predictors that make sense based on theory, logic, and previous knowledge. If a variable selection procedure is necessary, you should select a method that penalizes the parameter estimates (shrinkage methods) by adjusting the parameters and standard errors to account for over-fitting. Some common shrinkage methods are Ridge Regression, Least Angle Regression, or the lasso. In addition, cross-validation using a training dataset and a test dataset or model-averaging can be useful to test or reduce the effects of over-fitting. Harrell is a great source for a detailed discussion of these problems. Harrell (2001). "Regression Modeling Strategies."
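As a hedged illustration of the shrinkage idea (not part of the original answer), here is a minimal lasso fit with cross-validated penalty selection via the glmnet package, on simulated data:
```
library(glmnet)
set.seed(1)
x <- matrix(rnorm(100 * 20), nrow = 100)     # 20 candidate predictors
y <- x[, 1] - 0.5 * x[, 2] + rnorm(100)      # only two truly matter
cv <- cv.glmnet(x, y, alpha = 1)             # alpha = 1 => lasso penalty
coef(cv, s = "lambda.1se")                   # shrunken coefficients, many exactly zero
```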
null
CC BY-SA 2.5
null
2010-10-18T22:26:57.827
2010-10-18T22:26:57.827
null
null
485
null
3729
2
null
3727
2
null
You could try to use [skew normal distribution](http://en.wikipedia.org/wiki/Skew_normal_distribution) and see if excess kurtosis for your particular data sets is sufficiently close to the excess kurtosis of the distribution for given skewness. If it is, you can use the skew normal distribution cdf to estimate the probability. If not, you would have to come up with a transformation to the normal/skew pdf similar to the one used for the skew normal distribution, which would give you control over both skewness and excess kurtosis.
null
CC BY-SA 2.5
null
2010-10-19T00:21:55.433
2010-10-19T00:21:55.433
null
null
1448
null
3730
1
3733
null
151
277079
I get this question frequently enough in my statistics consulting work, that I thought I'd post it here. I have an answer, which is posted below, but I was keen to hear what others have to say. Question: If you have two variables that are not normally distributed, should you use Spearman's rho for the correlation?
Pearson's or Spearman's correlation with non-normal data
CC BY-SA 3.0
null
2010-10-19T01:14:16.967
2023-05-02T20:06:34.277
2017-02-15T01:37:14.343
7290
183
[ "correlation", "normality-assumption", "pearson-r", "spearman-rho" ]
3731
2
null
3730
14
null
Updated The question asks us to choose between Pearson's and Spearman's method when normality is questioned. Restricted to this concern, I think the following paper should inform anyone's decision: - On the Effects of Non-Normality on the Distribution of the Sample Product-Moment Correlation Coefficient (Kowalski, 1975) It's quite nice and provides a survey of the considerable literature, spanning decades, on this topic -- starting from Pearson's "mutilated and distorted surfaces" and robustness of distribution of $r$. At least part of the contradictory nature of the "facts" is that much of this work was done before the advent of computing power -- which complicated things because the type of non-normality had to be considered and was hard to examine without simulations. Kowalski's analysis concludes that the distribution of $r$ is not robust in the presence of non-normality and recommends alternative procedures. The entire paper is quite informative and recommended reading, but skip to the very short conclusion at the end of the paper for a summary. If asked to choose between one of Spearman and Pearson when normality is violated, the distribution free alternative is worth advocating, i.e. Spearman's method. --- Previously .. Spearman's correlation is a rank based correlation measure; it's non-parametric and does not rest upon an assumption of normality. The sampling distribution for Pearson's correlation does assume normality; in particular this means that although you can compute it, conclusions based on significance testing may not be sound. As Rob points out in the comments, with large sample this is not an issue. With small samples though, where normality is violated, Spearman's correlation should be preferred. Update Mulling over the comments and the answers, it seems to me that this boils down to the usual non-parametric vs. parametric tests debate. Much of the literature, e.g. in biostatistics, doesn't deal with large samples. I'm generally not cavalier with relying on asymptotics. Perhaps it's justified in this case, but that's not readily apparent to me.
null
CC BY-SA 2.5
null
2010-10-19T01:27:47.607
2010-10-21T02:05:03.130
2010-10-21T02:05:03.130
251
251
null
3732
2
null
499
4
null
Richard Berk has a recent article where he demonstrates through simulation the problems of such data snooping and statistical inference. As Rob [suggested](https://stats.stackexchange.com/questions/499/when-can-you-use-data-based-criteria-to-specify-a-regression-model/531#531) it is more problematic than simply correcting for multiple hypothesis tests. [Statistical Inference After Model Selection](http://dx.doi.org/10.1007/s10940-009-9077-7) by: Richard Berk, Lawrence Brown, Linda Zhao Journal of Quantitative Criminology, Vol. 26, No. 2. (1 June 2010), pp. 217-236. PDF version [here](http://www-stat.wharton.upenn.edu/~lzhao/papers/MyPublication/StatInfAfterMS_JQC_2010.pdf)
null
CC BY-SA 2.5
null
2010-10-19T01:50:06.413
2010-10-19T01:50:06.413
2017-04-13T12:44:40.807
-1
1036
null
3733
2
null
3730
108
null
Pearson's correlation is a measure of the linear relationship between two continuous random variables. It does not assume normality although it does assume finite variances and finite covariance. When the variables are bivariate normal, Pearson's correlation provides a complete description of the association. Spearman's correlation applies to ranks and so provides a measure of a monotonic relationship between two continuous random variables. It is also useful with ordinal data and is robust to outliers (unlike Pearson's correlation). The distribution of either correlation coefficient will depend on the underlying distribution, although both are asymptotically normal because of the central limit theorem.
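As a small illustration of the linear-vs-monotonic distinction (my own example, not the answerer's): for a monotonic but non-linear relationship, Spearman's correlation is exactly 1 while Pearson's is not.
```
set.seed(1)
x <- rnorm(200)
y <- exp(x)                        # monotonic but not linear in x
cor(x, y, method = "pearson")      # < 1: linearity is violated
cor(x, y, method = "spearman")     # = 1: the ranks are perfectly monotone
```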
null
CC BY-SA 2.5
null
2010-10-19T01:53:19.697
2010-10-19T07:48:02.297
2010-10-19T07:48:02.297
159
159
null
3734
1
null
null
200
49234
In several different contexts we invoke the [central limit theorem](http://en.wikipedia.org/wiki/Central_limit_theorem) to justify whatever statistical method we want to adopt (e.g., approximate the binomial distribution by a normal distribution). I understand the technical details as to why the theorem is true but it just now occurred to me that I do not really understand the intuition behind the central limit theorem. So, what is the intuition behind the central limit theorem? Layman explanations would be ideal. If some technical detail is needed please assume that I understand the concepts of a pdf, cdf, random variable etc but have no knowledge of convergence concepts, characteristic functions or anything to do with measure theory.
What intuitive explanation is there for the central limit theorem?
CC BY-SA 2.5
null
2010-10-19T02:14:32.050
2022-03-17T02:51:11.573
null
null
null
[ "intuition", "central-limit-theorem" ]
3735
2
null
3194
7
null
A chi-squared goodness of fit test aims to find all possible kinds of deviations from strict uniformity. This is reasonable with a d4 or a d6, but with a d20, you're probably more interested in checking that the probability that you roll under (or possibly exceed) each outcome is close to what it should be. What I am getting at is that there are some kinds of deviations from fairness that will heavily impact whatever you're using a d20 for and other kinds of deviations that hardly matter at all, and the chi-squared test will divide power between more interesting and less interesting alternatives. The consequence is that to have enough power to pick up even fairly moderate deviations from fairness, you need a huge number of rolls - far more than you would ever want to sit and generate. (Hint: come up with a few sets of non-uniform probabilities for your d20 that will most heavily impact the outcome that you're using the d20 for, and use simulation and chi-squared tests to find out what power you have against them for various numbers of rolls, so you get some idea of the number of rolls you will need.) There are a variety of ways of checking for "interesting" deviations (ones that are more likely to substantively affect typical uses of a d20). My recommendation is to do an ECDF test (a Kolmogorov-Smirnov/Anderson-Darling-type test - but you'll probably want to adjust for the conservativeness that results from the distribution being discrete - at least by lifting the nominal alpha level, but even better by just simulating the distribution to see how the distribution of the test statistic behaves for a d20). These can still pick up any kind of deviation, but they put relatively more weight on the more important kinds of deviation. An even more powerful approach is to construct a test statistic that is specifically sensitive to the most important alternatives to you, but it involves a bit more work. --- In [this answer](https://stats.stackexchange.com/a/58442/805) I suggest a graphical method for testing a die based on the size of the individual deviations. Like the chi-squared test this makes more sense for dice with few sides like d4 or d6.
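Following the hint above, a hedged sketch of such a power simulation in R (the "loaded" probabilities are invented for illustration):
```
set.seed(1)
fair   <- rep(1/20, 20)
loaded <- fair; loaded[20] <- loaded[20] + 0.02; loaded <- loaded / sum(loaded)  # slight bias toward 20
power_at <- function(n_rolls, probs, reps = 2000) {
  mean(replicate(reps, {
    rolls <- sample(1:20, n_rolls, replace = TRUE, prob = probs)
    chisq.test(table(factor(rolls, levels = 1:20)), p = fair)$p.value < 0.05
  }))
}
power_at(500, loaded)    # power with 500 rolls -- typically disappointing
power_at(5000, loaded)   # far more rolls are needed for decent power
```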
null
CC BY-SA 4.0
null
2010-10-19T02:53:05.170
2018-05-28T23:55:32.270
2018-05-28T23:55:32.270
805
805
null
3736
2
null
3734
21
null
Intuition is a tricky thing. It's even trickier with theory in our hands tied behind our back. The CLT is all about sums of tiny, independent disturbances. "Sums" in the sense of the sample mean, "tiny" in the sense of finite variance (of the population), and "disturbances" in the sense of plus/minus around a central (population) value. For me, the device that appeals most directly to intuition is the quincunx, or 'Galton box' - see the Wikipedia entry for 'bean machine'. The idea is to roll a tiny little ball down the face of a board adorned by a lattice of equally spaced pins. On its way down the ball diverts right and left (...randomly, independently) and collects at the bottom. Over time, we see a nice bell-shaped mound form right before our eyes. The CLT says the same thing. It is a mathematical description of this phenomenon (more precisely, the quincunx is physical evidence for the normal approximation to the binomial distribution). Loosely speaking, the CLT says that as long as our population is not overly misbehaved (that is, if the tails of the PDF are sufficiently thin), then the sample mean (properly scaled) behaves just like that little ball bouncing down the face of the quincunx: sometimes it falls off to the left, sometimes it falls off to the right, but most of the time it lands right around the middle, in a nice bell shape. The majesty of the CLT (to me) is that the shape of the underlying population is irrelevant. Shape only plays a role insofar as it dictates the length of time we need to wait (in the sense of sample size).
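If you'd rather see the pile-up than imagine it, here is a minimal R sketch of a quincunx (my own illustration): each ball makes independent left/right diversions and the final positions form a bell-shaped mound.
```
set.seed(1)
n_balls <- 10000; n_pins <- 15
# each ball diverts right (+1) or left (-1) independently at every pin
final_pos <- rowSums(matrix(sample(c(-1, 1), n_balls * n_pins, replace = TRUE),
                            nrow = n_balls))
hist(final_pos, breaks = 30, main = "Simulated quincunx", xlab = "final position")
```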
null
CC BY-SA 2.5
null
2010-10-19T03:35:38.610
2010-10-19T03:55:41.573
2010-10-19T03:55:41.573
null
null
null
3737
2
null
3734
1
null
I gave up on trying to come up with an intuitive version and came up with some simulations. I have one that presents a simulation of a Quincunx and some others that do things like show how even a skewed raw reaction time distribution will become normal if you collect enough RT's per subject. I think they help but they're new in my class this year and I haven't graded the first test yet. One thing that I thought was good was being able to show the law of large numbers as well. I could show how variable things are with small sample sizes and then show how they stabilize with large ones. I do a bunch of other large number demos as well. I can show the interaction in the Quincunx between the numbers of random processes and the numbers of samples. (turns out not being able to use a chalk or white board in my class may have been a blessing)
null
CC BY-SA 4.0
null
2010-10-19T04:18:09.647
2019-06-26T14:19:09.627
2019-06-26T14:19:09.627
601
601
null
3738
2
null
3730
47
null
From an applied perspective, I am more concerned with choosing an approach that summarises the relationship between two variables in a way that aligns with my research question. I think that determining a method for getting accurate standard errors and p-values is a question that should come second. Even if you chose not to rely on asymptotics, there's always the option to bootstrap or change distributional assumptions. As a general rule, I prefer Pearson's correlation because (a) it generally aligns more with my theoretical interests; (b) it enables more direct comparability of findings across studies, because most studies in my area report Pearson's correlation; and (c) in many settings there is minimal difference between Pearson and Spearman correlation coefficients. However, there are situations where I think Pearson's correlation on raw variables is misleading. - Outliers: Outliers can have great influence on Pearson's correlations. Many outliers in applied settings reflect measurement failures or other factors that the model is not intended to generalise to. One option is to remove such outliers. Univariate outliers do not exist with Spearman's rho because everything is converted to ranks. Thus, Spearman is more robust. - Highly skewed variables: When correlating skewed variables, particularly highly skewed variables, a log or some other transformation often makes the underlying relationship between the two variables clearer (e.g., brain size by body weight of animals). In such settings it may be that the raw metric is not the most meaningful metric anyway. Spearman's rho has a similar effect to transformation by converting both variables to ranks. From this perspective, Spearman's rho can be seen as a quick-and-dirty approach (or more positively, it is less subjective) whereby you don't have to think about optimal transformations. In both cases above, I would advise researchers to either consider adjustment strategies (e.g., transformations, outlier removal/adjustment) before applying Pearson's correlation or use Spearman's rho.
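A tiny illustration (invented numbers, not from the answer) of the outlier point above: a single wild observation moves Pearson's r a great deal while Spearman's rho changes far less.
```
set.seed(1)
x <- rnorm(50); y <- x + rnorm(50, sd = 0.5)
cor(x, y); cor(x, y, method = "spearman")       # both high and similar
x2 <- c(x, 10); y2 <- c(y, -10)                 # add one gross outlier
cor(x2, y2); cor(x2, y2, method = "spearman")   # Pearson drops sharply, Spearman changes little
```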
null
CC BY-SA 2.5
null
2010-10-19T06:08:06.590
2010-10-19T07:36:34.143
2010-10-19T07:36:34.143
183
183
null
3739
1
null
null
3
475
I have sampled a number of $x$ values from a normal distribution with mean 0 and sd 0.2. I then transformed these $x$ values to $y$ values using the formula $y = e^x/(e^x + 1)$. I know that the $y$ values will have a mean of 0.5 and all lie between 0 and 1. - Do these y values fit some common distribution? - Is there a way to figure this out in R? What I am looking for is if there is a simple distribution to describe the y values that would allow me to describe it in terms of a few parameters and possibly sample directly from it in R.
Does data transformed in a certain way from a normal distribution fit some other common distribution?
CC BY-SA 2.5
null
2010-10-19T06:14:17.770
2010-10-19T07:48:17.050
2010-10-19T07:17:41.907
449
null
[ "distributions", "data-transformation" ]
3740
2
null
3739
1
null
The trivial way is just to carry out this transformation and see what happens: ![alt text](https://i.stack.imgur.com/5vCml.png) This looks pretty normal; indeed, qqnorm confirms it: ![alt text](https://i.stack.imgur.com/sEQfX.png)
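The figures above could be reproduced with something like the following R sketch (my reconstruction, not the original poster's code):
```
set.seed(1)
x <- rnorm(10000, mean = 0, sd = 0.2)
y <- exp(x) / (exp(x) + 1)     # the transformation from the question (equivalently plogis(x))
hist(y, breaks = 50)           # roughly bell-shaped around 0.5 for sd = 0.2
qqnorm(y); qqline(y)           # close to a straight line, i.e. approximately normal
```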
null
CC BY-SA 2.5
null
2010-10-19T06:40:30.707
2010-10-19T06:40:30.707
null
null
null
null
3743
2
null
3739
3
null
You have: $X \sim N(\mu,\sigma^2)$ By definition: $Y = \frac{e^X}{e^X+1}$ Therefore, the cdf of $Y$ is: $P(Y \le y) = P(\frac{e^X}{e^X+1} \le y)$ Simplifying the RHS, we get: $P(Y \le y) = P(X \le -\log(\frac{1-y}{y}))$ Therefore, $P(Y \le y) = \Phi(-\log(\frac{1-y}{y}),\mu,\sigma^2)$ Differentiating the above wrt $y$, we get the pdf $f(y)$ as: $f(y) = (\frac{1}{y(1-y)}) \phi(-\log(\frac{1-y}{y}),\mu,\sigma^2)$ I do not think the above pdf has a standard name. onestop identifies the correct name for the pdf in his [answer](https://stats.stackexchange.com/questions/3739/does-data-transformed-in-a-certain-way-from-a-normal-distribution-fit-some-other/3745#3745): the [Logit-normal distribution](http://en.wikipedia.org/wiki/Logit-normal_distribution). Regarding how to sample from it in R, you can use [inverse-transform sampling](http://en.wikipedia.org/wiki/Inverse_transform_sampling). The idea is as follows: - Generate a uniform random variate: $U \sim U[0,1]$ - Set $U = \Phi(-\log(\frac{1-y}{y}),\mu,\sigma^2)$ and invert for $y$. However, inverse-transform sampling is not necessary here, as you could just sample values of $Y$ by sampling from $X$ first and then applying the logistic transformation (as given in your question).
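In R, the direct route mentioned at the end is a one-liner, since `plogis()` computes exactly $e^x/(e^x+1)$; a minimal sketch of both routes:
```
set.seed(1)
mu <- 0; sigma <- 0.2
y <- plogis(rnorm(1e5, mean = mu, sd = sigma))   # sample X, then apply the logistic transform
# inverse-transform version: push uniforms through the quantile function of X
u  <- runif(1e5)
y2 <- plogis(qnorm(u, mean = mu, sd = sigma))
```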
null
CC BY-SA 2.5
null
2010-10-19T06:50:40.420
2010-10-19T07:48:17.050
2017-04-13T12:44:51.217
-1
null
null
3744
2
null
3730
61
null
Don't forget Kendall's tau! Roger Newson has argued for the superiority of Kendall's τa over Spearman's correlation rS as a rank-based measure of correlation in a paper whose full text is now freely available online: Newson R. [Parameters behind "nonparametric" statistics: Kendall's tau,Somers' D and median differences](http://www.stata-journal.com/sjpdf.html?articlenum=st0007). Stata Journal 2002; 2(1):45-64. He references (on p47) Kendall & Gibbons (1990) as arguing that "...confidence intervals for Spearman’s rS are less reliable and less interpretable than confidence intervals for Kendall’s τ-parameters, but the sample Spearman’s rS is much more easily calculated without a computer" (which is no longer of much importance of course). [Kendall, M. G. and J. D. Gibbons. 1990. Rank Correlation Methods. 5th ed. London: Griffin.](https://www.worldcat.org/title/1151099968)
null
CC BY-SA 4.0
null
2010-10-19T07:07:16.707
2023-05-02T20:06:34.277
2023-05-02T20:06:34.277
344718
449
null
3745
2
null
3739
2
null
This is called a [logit-normal distribution](http://en.wikipedia.org/wiki/Logit-normal_distribution) (by analogy with the much more common log-normal distribution). Knowing that doesn't simplify sampling from it, however, or change the parameters used to describe it, which are still the mean and SD (or variance) of the parent normal distribution.
null
CC BY-SA 2.5
null
2010-10-19T07:14:21.320
2010-10-19T07:14:21.320
null
null
449
null
3746
1
null
null
11
3794
I need to write a program to find the average GPS point from a population of points. In practice the following happens: - Each month a person records a GPS point of the same static asset. - Because of the nature of GPS, these points differ slightly each month. - Sometimes the person makes a mistake and records the wrong asset at a completely different location. - Each GPS point has a certainty weight (HDOP) that indicates how accurate the current GPS data is. GPS points with better HDOP values are preferred over ones with worse values. How do I determine the following: - Deal with data that has 2 values vs. a single value like age. (Find the average age in a population of people) - Determine the outliers. In the example below these would be [-28.252, 25.018] and [-28.632, 25.219] - After excluding the outliers, find the average GPS point; in this example it might be [-28.389, 25.245]. - It would be a bonus if it can work with the "weight" provided by the HDOP value for each point. ![alt text](https://i.stack.imgur.com/ecYHJ.png)
Finding the average GPS point
CC BY-SA 2.5
null
2010-10-19T08:10:54.190
2010-10-19T20:11:33.707
2010-10-19T09:28:06.077
1352
1623
[ "outliers", "spatial" ]
3747
2
null
3746
3
null
Rob Hyndman recently posed a [question about detecting outliers in multivariate data](https://stats.stackexchange.com/questions/213/what-is-the-best-way-to-identify-outliers-in-multivariate-data). The answers may provide a couple of possible approaches (and otherwise, you may want to put the question of finding 2-d outliers in a separate question). And you can average your remaining GPS data component by component - add up all the first components and divide by the number of points; that will give you the first component of the average. Same with the second components. This averaging can be weighted by HDOP: sum up the products of each first component with its corresponding HDOP score, and divide the sum by the sum of the HDOP scores. Same with the second components. I'll take the liberty of removing the "normal-distribution" tag...
null
CC BY-SA 2.5
null
2010-10-19T08:30:31.317
2010-10-19T08:30:31.317
2017-04-13T12:44:32.747
-1
1352
null
3748
2
null
2214
3
null
The Lévy alpha-stable distribution has 4 parameters. Each of them has a quantile-based sample equivalent: - $\mu$, the location parameter, can be estimated by the median. This is a high-efficiency alternative (ARE$\approx 0.85$). - $\gamma$, the scale parameter, can be estimated by the median absolute deviation (or more efficiently yet by the Qn estimator (1), with ARE similar to that of the median). - $\beta$, the skew parameter, can be estimated by the $S_k$ estimator, with $S_k=(Q_x(\frac{3}{4})-2Q_x(\frac{1}{2})+Q_x(\frac{1}{4}))(Q_x(\frac{3}{4})-Q_x(\frac{1}{4}))^{-1}$ where $Q_x(\tau)$ is the $\tau$th quantile of $x$. - $\alpha$, the tail parameter, can be estimated by Moors's quantile-based kurtosis estimator (2). List of references: - P.J. Rousseeuw, C. Croux (1993) Alternatives to the Median Absolute Deviation, JASA, 88, 1273-1283. - J. J. A. Moors (1988) A Quantile Alternative for Kurtosis, Journal of the Royal Statistical Society, Series D (The Statistician), Vol. 37, No. 1, pp. 25-32.
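A rough R sketch of these quantile-based summaries (my own illustration; in particular the Moors octile formula is quoted from memory, so verify it against reference (2) before relying on it):
```
x <- rnorm(1000)                      # stand-in data; in practice, the heavy-tailed sample
q <- function(p) quantile(x, p, names = FALSE)
location <- median(x)                 # estimate of the location parameter
scale_   <- mad(x)                    # median absolute deviation (scaled for normal consistency)
skew_Sk  <- (q(3/4) - 2 * q(1/2) + q(1/4)) / (q(3/4) - q(1/4))
# Moors (1988) octile-based kurtosis -- formula from memory, check against the paper:
kurt_M   <- ((q(7/8) - q(5/8)) + (q(3/8) - q(1/8))) / (q(6/8) - q(2/8))
```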
null
CC BY-SA 2.5
null
2010-10-19T09:24:21.443
2010-10-19T09:24:21.443
null
null
603
null
3749
1
3981
null
17
4285
I am discovering the marvellous world of so-called "Hidden Markov Models", also called "regime switching models". I would like to fit an HMM in R to detect trends and turning points. I would like to build the model as generically as possible so that I can test it on many prices. Can anyone recommend a paper? I have seen (and read) (more than) a few, but I am looking for a simple model that is easy to implement. Also, what R packages are recommended? I can see there are a lot of them doing HMM. I have bought the book "Hidden Markov models for time series: an introduction using R"; let's see what's in it ;) Fred
Usage of HMMs in quantitative finance. Examples of HMMs that work to detect trends / turning points?
CC BY-SA 2.5
null
2010-10-19T10:04:10.973
2010-10-26T06:19:55.590
2010-10-26T02:19:11.633
183
1709
[ "r", "time-series", "finance", "hidden-markov-model" ]
3751
2
null
3734
32
null
The nicest animation I know: [http://www.ms.uky.edu/~mai/java/stat/GaltonMachine.html](http://www.ms.uky.edu/~mai/java/stat/GaltonMachine.html) [](https://i.stack.imgur.com/IH6cc.gif) The simplest words I have read: [http://elonen.iki.fi/articles/centrallimit/index.en.html](http://elonen.iki.fi/articles/centrallimit/index.en.html) > If you sum the results of these ten throws, what you get is likely to be closer to 30-40 than the maximum, 60 (all sixes) or on the other hand, the minumum, 10 (all ones). The reason for this is that you can get the middle values in many more different ways than the extremes. Example: when throwing two dice: 1+6 = 2+5 = 3+4 = 7, but only 1+1 = 2 and only 6+6 = 12. That is: even though you get any of the six numbers equally likely when throwing one die, the extremes are less probable than middle values in sums of several dice.
null
CC BY-SA 3.0
null
2010-10-19T10:27:08.170
2017-05-12T17:00:53.633
2017-05-12T17:00:53.633
90687
1219
null
3752
1
null
null
19
776
This question is prompted by [discussion elsewhere](https://stats.stackexchange.com/q/3556/159). Variable kernels are often used in local regression. For example, loess is widely used and works well as a regression smoother, and is based on a kernel of variable width that adapts to data sparsity. On the other hand, variable kernels are usually thought to lead to poor estimators in kernel density estimation (see [Terrell and Scott, 1992](http://www.jstor.org/pss/2242011)). Is there an intuitive reason why they would work well for regression but not for density estimation?
If variable kernel widths are often good for kernel regression, why are they generally not good for kernel density estimation?
CC BY-SA 2.5
null
2010-10-19T11:35:15.373
2019-04-17T02:20:42.810
2017-04-13T12:44:48.803
-1
159
[ "nonparametric", "smoothing", "kernel-smoothing", "loess" ]
3753
2
null
3746
8
null
One of the problems with multivariate data is deciding on, and then interpreting, a suitable metric for calculating distances, hence clever but somewhat hard-to-explain concepts such as Mahalanobis distance. But in this case surely the choice is obvious - [Euclidean distance](http://en.wikipedia.org/wiki/Euclidean_distance). I'd suggest a simple heuristic algorithm, something like: - Calculate the (unweighted) centroid of the data points, i.e. the (unweighted) means of the 2 coordinates - Calculate the Euclidean distance of all the readings from the centroid - Exclude any readings that are further than a certain distance (to be determined based on your experience and knowledge of the technology, or failing that a bit of trial-and-error cross-validation - 100m, 1km, 10km??) - Calculate the weighted average of both coords of the remaining points, weighting by the inverse of the HDOP score (or some monotonic function of it - I had a quick look at the wikipedia page linked in the question and think maybe you don't need such a function, but I'd need to study it further to be sure) There are clearly several ways to make this more sophisticated, such as down-weighting outliers or using [M-estimators](http://en.wikipedia.org/wiki/M-estimator) rather than simply excluding them, but I'm not sure whether such sophistication is really necessary here.
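A rough R sketch of this heuristic (my own illustration): the column names, the crude degrees-to-km conversion and the 1 km cut-off are assumptions to adapt to the real data.
```
# pts: data frame with columns lat, lon, hdop (names assumed for illustration)
average_gps <- function(pts, cutoff_km = 1) {
  ctr  <- c(mean(pts$lat), mean(pts$lon))                           # unweighted centroid
  d_km <- sqrt((pts$lat - ctr[1])^2 + (pts$lon - ctr[2])^2) * 111   # rough degrees -> km
  keep <- pts[d_km <= cutoff_km, ]                                  # drop outliers
  w <- 1 / keep$hdop                                                # lower HDOP = better = more weight
  c(lat = weighted.mean(keep$lat, w), lon = weighted.mean(keep$lon, w))
}
```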
null
CC BY-SA 2.5
null
2010-10-19T11:54:28.387
2010-10-19T11:54:28.387
null
null
449
null
3754
1
3756
null
6
11661
Trying to read a large dataset in R, the console displayed the following errors: ``` data<-read.csv("UserDailyStats.csv", sep=",", header=T, na.strings="-", stringsAsFactors=FALSE) > data = data[complete.cases(data),] > dataset<-data.frame(user_id=as.character(data[,1]),event_date= as.character(data[,2]),day_of_week=as.factor(data[,3]),distinct_events_a_count=as.numeric(as.character(data[,4])),total_events_a_count=as.numeric(as.character(data[,5])),events_a_duration=as.numeric(as.character(data[,6])),distinct_events_b_count=as.numeric(as.character(data[,7])),total_events_b=as.numeric(as.character(data[,8])),events_b_duration= as.numeric(as.character(data[,9]))) Error: cannot allocate vector of size 94.3 Mb In addition: Warning messages: 1: In data.frame(user_msisdn = as.character(data[, 1]), calls_date = as.character(data[, : NAs introduced by coercion 2: In data.frame(user_msisdn = as.character(data[, 1]), calls_date = as.character(data[, : NAs introduced by coercion 3: In class(value) <- "data.frame" : Reached total allocation of 3583Mb: see help(memory.size) 4: In class(value) <- "data.frame" : Reached total allocation of 3583Mb: see help(memory.size) ``` Does anyone know how to read large datasets? The size of UserDailyStats.csv is approximately 2GB.
How to read large dataset in R
CC BY-SA 2.5
null
2010-10-19T12:00:38.310
2013-01-09T11:25:55.163
null
null
315
[ "r", "large-data" ]
3755
2
null
3754
2
null
What platform are you running R on? How much physical and virtual memory does the machine have? Also, you might find the following relevant: [http://stat.ethz.ch/R-manual/R-devel/library/base/html/Memory-limits.html](http://stat.ethz.ch/R-manual/R-devel/library/base/html/Memory-limits.html)
null
CC BY-SA 2.5
null
2010-10-19T13:02:15.267
2010-10-19T13:02:15.267
null
null
439
null
3756
2
null
3754
9
null
Two basic things: - That complaint addresses all of the memory in the R session, not just the one object that you're loading. And unless you're using something like ff, everything in your session is in memory. - On Windows, you need to specify how much memory can be used by R. Have a look at help(memory.limit). Even though you're using 64-bit, it won't default to using all your available memory. Otherwise, you can consider using `bigmemory` to handle larger datasets if it's still a problem. Some relevant sources: - The High Performance Computing view on CRAN. - Dirk Eddelbuettel's tutorial on the subject. - Ryan Rosario's presentation on the subject. As a final suggestion, you can try calling `gc()` to free up memory before running your command, although in principle R will do this automatically as it needs to.
null
CC BY-SA 2.5
null
2010-10-19T13:09:51.147
2010-10-19T13:16:54.847
2010-10-19T13:16:54.847
5
5
null
3757
1
null
null
6
6988
In my data, the RT (gaze) of individuals (ID) is examined as a function of a visual condition, the factor Size (small, medium, large). Base model: ``` print(Base <- lmer(RT ~ Size + (1|ID), data=rt), cor=F) ``` Random effect: ``` print(NoCor <- lmer(RT ~ Size + (0+Size|ID) , data=rt)) print(WithCor <- lmer(RT ~ Size + (1+Size|ID), data=rt)) ``` Addition of ID slopes improves the Base model. My question is: how can a significant random effect (Size/ID) be interpreted when there is no relationship between the random and fixed effect, i.e., when the correlation between the random factor and the fixed factor does not improve the model [the anova(NoCor, WithCor) does not show a significant improvement]?
Random effect slopes in linear mixed models
CC BY-SA 2.5
null
2010-10-19T14:00:19.707
2010-10-20T21:08:01.323
2010-10-20T21:08:01.323
8
1626
[ "r", "mixed-model", "random-effects-model" ]
3758
1
null
null
6
1916
I'm a new R user and have just tried running a Friedman test on non-normal and heteroscedastic seagrass data. I am testing whether biomass is significantly different between sites across years. R (the `friedman` function from the `agricolae` package) returned a result like this: ``` Friedman's Test =============== Adjusted for ties Value: 0.01333333 Pvalue chisq : 0.9080726 F value : 0.01316678 Pvalue F: 0.9086942 0.9087002 Alpha : 0.05 t-Student : 1.990847 LSD : 17.34995 Means with the same letter are not significantly different. GroupTreatment and Sum of the ranks a 1 119 a 2 118 ``` I know this means no significant difference, given the chi-square p-value. But what does "Pvalue F" mean?
What is a meaning of "p-value F" from Friedman test?
CC BY-SA 2.5
null
2010-10-19T14:27:51.110
2014-08-18T07:32:11.940
2010-10-20T09:04:39.833
null
1627
[ "r", "nonparametric" ]
3759
1
3761
null
5
1636
I'm looking for an implementation of FNN (or better yet, a SOFNN as described by [Forecasting Time Series by SOFNN with Reinforcement Learning](https://www.semanticscholar.org/paper/Forecasting-Time-Series-by-SOFNN-with-Reinforcement-Kuremoto-Obayashi/8a5ce65e52077303b8dcbe39a3953219e910ca3f)). Any language, though preference is Java, C#, C++ in that order.
Looking for impl of a Fuzzy Neural Network (FNN or SOFNN)
CC BY-SA 4.0
null
2010-10-19T14:29:50.193
2022-11-21T01:51:40.377
2022-11-21T01:51:40.377
362671
1127
[ "machine-learning", "neural-networks" ]
3760
2
null
3101
4
null
Way back in 1965, Sir Austin Bradford Hill wrote [a great essay](http://www.edwardtufte.com/tufte/hill) about something very akin to the Pyramid of Evidence, where he discussed how the piling up of evidence can increase our confidence in hypotheses of causality in Medicine. Most of the factors he discusses can be applied to Economics and political sciences.
null
CC BY-SA 2.5
null
2010-10-19T14:34:44.807
2010-10-19T14:34:44.807
null
null
666
null
3761
2
null
3759
4
null
I'm not an expert on this subject, but I believe that FNN are sometimes referred to as a [neuro-fuzzy system](http://en.wikipedia.org/wiki/Neuro-fuzzy) (also [referenced here](http://www.scholarpedia.org/article/Fuzzy_neural_network)). There are several implementations that I can find for that subject, including in [C++](http://fuzzy.cs.uni-magdeburg.de/nefcon/) and [Java](http://fuzzy.cs.uni-magdeburg.de/nefclass/nefclass-j/), but I can't confirm whether they would be relevant to your problem.
null
CC BY-SA 2.5
null
2010-10-19T14:58:49.700
2010-10-19T14:58:49.700
null
null
5
null
3762
1
null
null
3
550
I have 6 sets of interval data, each of which lies between 0 and 1. Each set, calculated by a computer program, is related to the degree of similarity between some sounds (pairwise). What do you think is the best inter-rater reliability measure I can use to see how close the 6 judges are? If I want to explain the data in each set, it can be: 0.98, 0.01, 0.5, ... which shows 'sound1' and 'sound2' are very similar (0.98), 'sound1' and 'sound3' are very different (0.01) and so on. Thank you so much.
The best measure of reliability for interval data between 0 and 1
CC BY-SA 2.5
null
2010-10-19T15:39:13.843
2010-10-20T09:27:39.170
2010-10-19T17:11:20.393
930
1564
[ "psychometrics", "reliability", "agreement-statistics" ]
3764
2
null
3758
3
null
I generally use `friedman.test()`, which doesn't return any F statistic. If you consider that you have $b$ blocks, for which you assigned ranks to observations belonging to each of them, and that you sum these ranks for each of your $a$ groups (let us denote these sums $R_i$), then the Friedman statistic is defined as $$ F_r=\frac{12}{ba(a+1)}\sum_{i=1}^aR_i^2-3b(a+1) $$ and follows a $\chi^2(a-1)$, for $a$ and $b$ sufficiently large. Quoting Zar (Biostatistical Analysis, 4th ed., pp. 263-264), this approximation is conservative (hence, the test has low power) and we can use an F-test, with $$ F_{\text{obs}}=\frac{(b-1)F_r}{b(a-1)-F_r} $$ which is to be compared to an F distribution with $a-1$ and $(a-1)(b-1)$ degrees of freedom.
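As a small illustration (not part of the original answer), the conversion from the chi-square form to the F form can be done by hand in R; the values of a, b and Fr below are placeholders:

```
a  <- 3      # number of groups (treatments)
b  <- 10     # number of blocks
Fr <- 5.2    # Friedman statistic (chi-square approximation)

F_obs <- (b - 1) * Fr / (b * (a - 1) - Fr)
pf(F_obs, df1 = a - 1, df2 = (a - 1) * (b - 1), lower.tail = FALSE)  # "Pvalue F"
```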
null
CC BY-SA 2.5
null
2010-10-19T16:00:57.583
2010-10-19T16:07:40.880
2010-10-19T16:07:40.880
930
930
null
3765
2
null
3754
2
null
I totally agree with Dirk's answer. One suggestion: I have found it very useful to use programming languages such as AWK when working with large databases. That way I was able to filter the data I wanted to include in my analysis, reducing the final size of the dataset. Moreover, in your code you are keeping two copies of the same data (data and dataset). If you want to define your variables as factor, numeric, etc., you can use the colClasses option in the read.table (or read.csv) function.
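A sketch of the colClasses idea, assuming the column types implied by the data.frame() call in the question (adjust the list as needed):

```
# Declaring types up front avoids building throw-away character copies
# and lets read.csv skip its type-guessing pass.
col_types <- c("character", "character", "factor", rep("numeric", 6))
data <- read.csv("UserDailyStats.csv", header = TRUE, na.strings = "-",
                 colClasses = col_types)
```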
null
CC BY-SA 2.5
null
2010-10-19T16:00:58.520
2010-10-19T16:00:58.520
null
null
221
null
3766
2
null
3762
4
null
Referring to your comments to @Henrik, I'm inclined to think that you actually have continuous measurements on a set of objects (here, your similarity measure) for 6 raters. You can compute an [intraclass correlation](http://en.wikipedia.org/wiki/Intraclass_correlation) coefficient, as described here [Reliability in Elicitation Exercise](https://stats.stackexchange.com/questions/1015/reliability-in-elicitation-exercise/3667#3667). It will provide you with a measure of agreement (or concordance) between all 6 judges wrt. the assessments they made, or more precisely the part of variance that is explained by between-rater variance. There's a working R script in the appendix. Note that this assumes that your measures are treated as real-valued measurements (I refer to @onestop's comment), not really proportions of similarity or whatever between your paired sounds. I don't know of a specific version of the ICC for % or values bounded on an interval, only for binary or ranked data. Update: Following your comments about parameters of interest and the language issue: - There are many other online resources on the ICC; I think David Howell provides a gentle and well illustrated introduction to it. Its discussion generalizes to the k-sample (judges/raters) case without any difficulty, I think, or see this chapter from Sea and Fortna on Psychometric Methods. What you mainly have to think about is whether you want to consider your raters as a unique set of observers, not necessarily representative of all the raters that would have assessed your object of measurement (this is called a fixed effect), or as a random sample of raters sampled from a larger (hypothetical) population of potential raters: in the former case, this corresponds to a one-way anova or a consistency ICC; in the latter case we talk about an agreement ICC. - A colleague of mine successfully used Kevin Brownhill's script (from the Matlab Central file exchange). The ICC you are interested in is then cse=3 (if you consider that your raters are not representative of a more general population of raters).
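For example, a minimal sketch with the psych package, assuming the similarity scores are arranged with one row per sound pair and one column per rater (the numbers here are random placeholders):

```
library(psych)

# rows = sound pairs, columns = the 6 raters
ratings <- matrix(runif(10 * 6), ncol = 6)
ICC(ratings)   # reports the consistency and agreement versions of the ICC
```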
null
CC BY-SA 2.5
null
2010-10-19T16:14:39.170
2010-10-20T09:27:39.170
2017-04-13T12:44:33.310
-1
930
null
3767
2
null
3762
1
null
If you want to compare just two measures, simply take the [correlation coefficient (Pearson's r)](http://en.wikipedia.org/wiki/Pearson_product-moment_correlation_coefficient).
null
CC BY-SA 2.5
null
2010-10-19T16:33:04.320
2010-10-19T16:33:04.320
null
null
442
null
3768
2
null
3754
2
null
Since you're on 64-bit Windows, make sure that you have installed and are running the 64-bit version of R for Windows. Then, follow the instructions on Gary King's page: - How do I increase the memory for R?
null
CC BY-SA 2.5
null
2010-10-19T16:35:03.540
2010-10-19T16:35:03.540
null
null
251
null
3769
2
null
3758
4
null
It seems the output is from the `agricolae` package using the method `friedman`. The relevant lines for computing the two statistics in that function are: ``` T1.aj <- (m[2] - 1) * (t(s) %*% s - m[1] * C1)/(A1 - C1) T2.aj <- (m[1] - 1) * T1.aj/(m[1] * (m[2] - 1) - T1.aj) ``` Comparing this with the formula in chl's answer, you'll notice that `T2.aj` ("F value") corresponds to $F_{obs}$ and `T1.aj` ("Value") to $F_r$.
null
CC BY-SA 2.5
null
2010-10-19T16:43:16.423
2010-10-19T16:43:16.423
null
null
251
null
3770
2
null
3698
3
null
The author is using a single-proportion test. Try the following in R: ``` p <- 17/33; (p - 0.5)/sqrt(0.5*0.5/33) ``` where p = 17/33 is the observed sample proportion. See the wiki for the test statistic, where the above test is called the [One-proportion z-test](http://en.wikipedia.org/wiki/Statistical_hypothesis_testing#Common_test_statistics).
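Equivalently, base R has ready-made tests (a sketch; note that prop.test applies a continuity correction unless correct = FALSE, so its statistic will differ slightly from the hand-computed z):

```
prop.test(17, 33, p = 0.5, correct = FALSE)  # chi-square version; X-squared = z^2
binom.test(17, 33, p = 0.5)                  # exact binomial test
```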
null
CC BY-SA 2.5
null
2010-10-19T17:10:57.197
2010-10-19T17:10:57.197
null
null
null
null
3771
2
null
3727
4
null
Fitting a distribution to data using the first four moments is exactly what Karl Pearson devised the [Pearson family of continuous probability distributions](http://en.wikipedia.org/wiki/Pearson_distribution) for (maximum likelihood is much more popular these days of course). It should be straightforward to fit the relevant member of that family and then use the same type of continuity correction as you give above for the normal distribution. I assume you must have a truly enormous sample size? Otherwise sample estimates of skewness and especially kurtosis are often hopelessly imprecise, as well as being highly sensitive to outliers. In any case, I highly recommend you have a look at [L-moments](http://www.research.ibm.com/people/h/hosking/lmoments.html) as an alternative; they have several advantages over ordinary moments when fitting distributions to data.
null
CC BY-SA 2.5
null
2010-10-19T17:29:33.383
2010-10-19T17:29:33.383
null
null
449
null
3772
1
3775
null
9
4955
I am evaluating the effectiveness of 5 different methods to predict a particular binary outcome (call them 'Success' and 'Failure'). The data look like so: ``` Method Sample_Size Success Percent_Success 1 28 4 0.14 2 19 4 0.21 3 24 7 0.29 4 21 13 0.61 5 22 9 0.40 ``` I would like to perform a test among these 5 methods to assess the relative superiority of the methods. In other words, I want to order the methods in order of performance as method 1 > method 2 > ... method 5. To avoid the issue of multiple comparisons, I plan to do a permutation test along the following lines: Step 1: Pool all the data so that the overall sample size is 114 with overall 37 successes. Step 2: Randomly split the data into 5 groups with the corresponding sample sizes of 28, 19, 24, 21 and 22. Step 3: Increment a counter if the observed order of Percent_Success from step 2 is consistent with the ordering of my data. Step 4: Repeat steps 2 and 3 many times (say 10000). Desired p-value = Final Counter Value / 10000. Questions: - Is the above procedure ok? - Is there anything in R that would enable me to perform the above test? - Any suggestions for improvement or alternative methods would be helpful.
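For what it's worth, here is a minimal R sketch of the resampling loop described in steps 1–4 above (counts and group sizes taken from the table; see the answers for caveats about what this number actually measures):

```
set.seed(1)
sizes   <- c(28, 19, 24, 21, 22)
success <- c(4, 4, 7, 13, 9)
pool    <- rep(c(1, 0), c(sum(success), sum(sizes) - sum(success)))
grp     <- rep(seq_along(sizes), sizes)
obs_ord <- order(success / sizes)            # observed ordering of the methods

B <- 10000
same_order <- replicate(B, {
  p_hat <- tapply(sample(pool), grp, mean)   # step 2: random split, proportions
  identical(order(p_hat), obs_ord)           # step 3: same ordering as observed?
})
mean(same_order)                             # step 4: proportion of matches
```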
Can I use permutation tests to avoid the multiple comparison problem in the context of proportions?
CC BY-SA 2.5
null
2010-10-19T17:43:46.987
2010-11-17T23:55:00.300
2010-10-19T18:58:16.153
null
1558
[ "r", "multiple-comparisons", "permutation-test" ]
3773
2
null
3772
1
null
Your suggested Monte-Carlo permutation test procedure will produce a p-value for a test of the null hypothesis that the probability of success is the same for all methods. But there's little reason for doing a Monte Carlo permutation test here when the corresponding exact permutation test is perfectly feasible. That's Fisher's exact test (well, some people reserve that name for 2x2 tables, in which case it's a conditional exact test). I've just typed your data into Stata and -tabi ..., exact- gave p=.0067 (for comparison, Pearson's chi-squared test gives p=.0059). I'm sure there's an equivalent function in R, which the R gurus will soon add. If you really want to look at ranking you may be best off using a Bayesian approach, as it can give a simple interpretation as the probability that each method is truly the best, second best, third best, ... . That comes at the price of requiring you to put priors on your probabilities, of course. The maximum likelihood estimate of the ranks is simply the observed ordering, but it's difficult to quantify the uncertainty in the ranking in a frequentist framework in a way that can be easily interpreted, as far as I'm aware. I realise I haven't mentioned multiple comparisons, but I just don't see how that comes into this.
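In R the corresponding calls would be along these lines (a sketch; the 5×2 table is built from the success/failure counts in the question, and fisher.test may take a moment on a table this size):

```
tab <- cbind(success = c(4, 4, 7, 13, 9),
             failure = c(24, 15, 17, 8, 13))
fisher.test(tab)   # exact test on the 5 x 2 table
chisq.test(tab)    # Pearson chi-squared, for comparison
```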
null
CC BY-SA 2.5
null
2010-10-19T18:17:52.227
2010-10-19T18:17:52.227
null
null
449
null
3774
2
null
7
3
null
Adding a couple to the list: - Lots of in-depth financial data on publicly-traded companies, going back many decades: http://www.mergent.com/servius - Rich information on 16+ million businesses in the US: http://compass.webservius.com Both available via a REST API and have free trial plans.
null
CC BY-SA 2.5
null
2010-10-19T18:41:58.457
2010-10-19T18:41:58.457
null
null
1629
null
3775
2
null
3772
6
null
The proposed procedure does not answer your question. It only estimates the frequency, under the null hypothesis, with which your observed order would occur. But under that null, to a good approximation, all orders are equally likely, whence your calculation will produce a value close to 1/5! = about 0.83%. That tells us nothing. One more obvious observation: the order, based on your data, is 4 > 5 > 3 > 2 > 1. Your estimates of their relative superiorities are 0.61 - 0.40 = 21%, 0.40 - 0.21 = 11%, etc. Now, suppose your question concerns the extent to which any of the ${5 \choose 2} = 10$ differences in proportions could be due to chance under the null hypothesis of no difference. You can indeed evaluate these ten questions with a permutation test. However, in each iteration you need to track ten indicators of relative difference in proportion, not one global indicator of the total order. For your data, a simulation with 100,000 iterations gives the results \begin{array}{ccccc} & 5 & 4 & 3 & 2 \cr 1 & 0.02439 & 0.0003 & 0.13233 & 0.29961 \cr 2 & 0.09763 & 0.00374 & 0.29222 & \cr 3 & 0.20253 & 0.00884 & & \cr 4 & 0.08702 & & & \end{array} The differences in proportions between method 4 and methods 1, 2, and 3 are unlikely to be due to chance (with estimated probabilities 0.03%, 0.37%, 0.88%, respectively) but the other differences might be. There is some evidence (p = 2.44%) of a difference between methods 1 and 5. Thus it appears you can have confidence that the differences in proportions involved in the relationships 4 > 3, 4 > 2, and 4 > 1 are all positive, and most likely so is the difference in 5 > 1.
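One way to set up this kind of simulation in R (a sketch, not the code used for the table above): for each permutation, compute the five group proportions and check, for every pair of methods, whether the permuted difference is at least as large as the observed one.

```
set.seed(1)
sizes   <- c(28, 19, 24, 21, 22)
success <- c(4, 4, 7, 13, 9)
grp     <- rep(seq_along(sizes), sizes)
pool    <- rep(c(1, 0), c(sum(success), sum(sizes) - sum(success)))

pairs <- combn(5, 2)
obs   <- success / sizes
obs_d <- abs(obs[pairs[1, ]] - obs[pairs[2, ]])

B <- 10000   # increase for more precise estimates
exceed <- replicate(B, {
  p_hat <- tapply(sample(pool), grp, mean)
  abs(p_hat[pairs[1, ]] - p_hat[pairs[2, ]]) >= obs_d
})
data.frame(method_a = pairs[1, ], method_b = pairs[2, ], p = rowMeans(exceed))
```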
null
CC BY-SA 2.5
null
2010-10-19T18:58:56.603
2010-10-19T19:19:20.613
2010-10-19T19:19:20.613
919
919
null
3776
2
null
3746
2
null
Call the HDOP the independent variable. Use this for weighting later on. So you have sets of co-ordinates - call these (x1,y1); (x2,y2), etc. First ignore outliers. Calculate the weighted average of the x co-ordinates as [(x1*h1)+(x2*h2) +....+ (xn*hn)] / [sum(h1,h2,...,hn)] where h1, h2, ... are the HDOP values. Do the same for the y co-ordinates. This will give a fairly accurate average value for each co-ordinate. Dealing with outliers can be a bit tricky. How do you know if they are outliers or not? Strictly you need to determine a statistical fit to the observations and, within a confidence interval, determine whether they are genuine or not. Looking at the question, the Poisson distribution does come to mind. But this is probably a lot of work and I'm sure you don't want to go into this. Maybe use an approximation? Say you assume that the average co-ordinate value is a good mean to use. Then determine a value for the standard deviation. I think the standard deviation of the Poisson distribution is the square root of the mean. Then approximate using the normal distribution and a 95% confidence interval. Say if an observation is outside the interval (mean - 1.645*std dev ; mean + 1.645*std dev) then it is an outlier? Give this a go. Maybe go do a bit of reading on the Poisson distribution and incorporate the HDOP value into this?
null
CC BY-SA 2.5
null
2010-10-19T20:11:33.707
2010-10-19T20:11:33.707
null
null
null
null
3778
2
null
3614
9
null
Yet another way to quickly compute the probability distribution of a dice roll would be to use a specialized calculator designed just for that purpose. [Torben Mogensen](http://www.diku.dk/hjemmesider/ansatte/torbenm/), a CS professor at [DIKU](http://www.diku.dk/) has an excellent dice roller called [Troll](http://topps.diku.dk/torbenm/troll.msp). The Troll dice roller and probability calculator prints out the probability distribution (pmf, histogram, and optionally cdf or ccdf), mean, spread, and mean deviation for a variety of complicated dice roll mechanisms. Here are a few examples that show off Troll's dice roll language: Roll 3 6-sided dice and sum them: `sum 3d6`. Roll 4 6-sided dice, keep the highest 3 and sum them: `sum largest 3 4d6`. Roll an "exploding" 6-sided die (i.e., any time a "6" comes up, add 6 to your total and roll again): `sum (accumulate y:=d6 while y=6)`. Troll's [SML](http://en.wikipedia.org/wiki/Standard_ML) [source code](http://www.diku.dk/hjemmesider/ansatte/torbenm/Troll/Troll.zip) is available, if you want to see how it's implemented. Professor Mogensen also has a 29-page paper, "[Dice Rolling Mechanisms in RPGs](http://www.diku.dk/hjemmesider/ansatte/torbenm/Troll/RPGdice.pdf)," in which he discusses many of the dice rolling mechanisms implemented by Troll and some of the mathematics behind them. A similar piece of free, open-source software is [Dicelab](http://www.semistable.com/dicelab/), which works on both Linux and Windows.
null
CC BY-SA 2.5
null
2010-10-19T23:38:01.513
2010-10-19T23:38:01.513
null
null
null
null
3779
1
6135
null
27
4320
Suppose you had a bag with $n$ tiles, each with a letter on it. There are $n_A$ tiles with letter 'A', $n_B$ with 'B', and so on, and $n_*$ 'wildcard' tiles (we have $n = n_A + n_B + \ldots + n_Z + n_*$). Suppose you had a dictionary with a finite number of words. You pick $k$ tiles from the bag without replacement. How would you compute (or estimate) the probability that you can form zero words from the dictionary given the $k$ tiles selected? For those not familiar with Scrabble (TM), the wildcard character can be used to match any letter. Thus the word [BOOT] could be 'spelled' with the tiles 'B', '*', 'O', 'T'. To give some idea of the scale of the problem, $k$ is smallish, like 7, $n$ is around 100, and the dictionary contains about 100,000 words of size $k$ or smaller. edit: By 'form a word', I mean a word of length no greater than $k$. Thus, if the word [A] is in the dictionary, then by drawing even a single 'A' from the bag, one has 'formed a word'. The problem of wildcards is radically simplified if one can assume there are words of length 1 in the dictionary. For if there are, any draw of a wildcard automatically can match a length 1 word, and thus one can concentrate on the case where there are no wildcards. Thus the more slippery form of the problem has no 1-letter words in the dictionary. Also, I should explicitly state that the order in which the letters are drawn from the bag is immaterial. One does not have to draw the letters in the 'correct' order of the word.
Probability of not drawing a word from a bag of letters in Scrabble
CC BY-SA 2.5
null
2010-10-20T00:28:37.190
2011-01-11T08:38:13.353
2011-01-11T08:38:13.353
2116
795
[ "sampling", "games", "probability" ]
3780
2
null
3779
2
null
Monte Carlo Approach The quick and dirty approach is to do a monte carlo study. Draw $k$ tiles $m$ times and for each draw of $k$ tiles see if you can form a word. Denote the number of times you could form a word by $m_w$. The desired probability would be: $$1 - \frac{m_w}{m}$$ Direct Approach Let the number of words in the dictionary be given by $S$. Let $t_s$ be the number of ways in which we can form the $s^\mbox{th}$ word. Let the number of letters needed by the $s^\mbox{th}$ word be denoted by ${m_a, m_b, ..., m_z}$ (i.e., the $s^\mbox{th}$ word needs $m_a$ number of 'a' letters etc). Denote the number of ways in which we can draw $k$ tiles from the bag by $N$. $$N = \binom{n}{k}$$ and $$t_s = \binom{n_a}{m_a} \binom{n_b}{m_b} ... \binom{n_z}{m_z}$$ (Including the impact of wildcard tiles is a bit trickier. I will defer that issue for now.) Thus, the desired probability is: $$1 - \frac{\sum_s{t_s}}{N}$$
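A rough R sketch of the Monte Carlo version (the bag contents and the can_form_word() predicate are placeholders — the predicate would have to encode the dictionary lookup, e.g. via the tree structure described in another answer):

```
# bag: one entry per tile, e.g. c(rep("a", 9), rep("b", 2), ..., rep("*", 2))
# can_form_word(draw): hypothetical function returning TRUE if some dictionary
#   word (wildcards allowed) can be spelled from the drawn tiles
estimate_p_no_word <- function(bag, k, can_form_word, m = 10000) {
  misses <- replicate(m, !can_form_word(sample(bag, k)))
  mean(misses)   # estimated probability that no word can be formed
}
```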
null
CC BY-SA 2.5
null
2010-10-20T00:54:20.850
2010-10-20T16:26:06.390
2010-10-20T16:26:06.390
null
null
null
3781
1
null
null
4
172
I've got a dataset where someone counted birds in the breeding season over 10 years. For each year (x site), we want to see how reduced sampling might affect our ability to detect a trend. So to that end, I have simulated various datasets from the original where we cut down sampling to once every 45 days (and 60, 90, 120). I fit the same negative binomial model to the original and each simulated dataset (500 datasets for each of the different sampling intervals). So now I have 500 regressions (and coefficients) for each dataset x location. Is there some way to throw a confidence band around these? Is it something very trivial, like computing a CI around a mean?
Is there a way to compute confidence intervals for regression estimates of simulated data?
CC BY-SA 2.5
null
2010-10-20T01:04:56.197
2010-10-20T12:58:47.653
null
null
1451
[ "confidence-interval", "negative-binomial-distribution" ]
3782
1
null
null
7
1403
I designed a field experiment with 4 independent factors, but the data are non-normal and heteroscedastic. The Friedman test (agricolae package) in R only fits a randomized block design (RBD). Can anybody suggest how to analyze my data, please?
How to do factorial analysis for a non-normal and heteroscedastic data?
CC BY-SA 4.0
null
2010-10-20T01:11:04.410
2020-12-18T17:46:42.550
2020-12-18T17:46:03.170
11887
1627
[ "r", "nonparametric", "experiment-design", "heteroscedasticity" ]
3783
2
null
3782
2
null
Package [vegan](http://cc.oulu.fi/~jarioksa/softhelp/vegan.html) implements some permutation testing procedures using a distance based approach. For factor analysis, you should take a look at section 5 of the [documentation](http://cc.oulu.fi/~jarioksa/opetus/metodi/vegantutor.pdf). There's also more information in the paper: - On distance-based permutation tests for between-group comparisons (Reiss et al, 2010) You might also be interested in skimming this table: - Choosing the Correct Statistical Test
null
CC BY-SA 2.5
null
2010-10-20T01:52:30.530
2010-10-20T02:22:15.280
2010-10-20T02:22:15.280
251
251
null
3784
2
null
3757
7
null
First, you should compare models from lmer after fitting with ML (maximum likelihood) since the default is REML. So something like: ``` fit.nc <- update(NoCor, REML=FALSE) fit.wc <- update(WithCor, REML=FALSE) anova(fit.nc, fit.wc) ``` It would help to see the output of the random effects variation from your fits. For example, to answer: is there a strong correlation between the intercept and slope, and what are the variation sizes? If you find that the random-slope-only model (NoCor) provides the best fit, then this means that the Size variable has a different effect between groups (depending on the variation). But omitting the random intercept implies that the mean response at some zero level (the baseline for your factor Size) is the same across all groups. A random-slope-only model is not as common unless informed by theory -- usually we assume baseline variation between groups (random intercept) and then let effects (slopes) vary as well. If you don't think there's a good reason to accept the slope-only model, then you may want to keep the random intercept & slope model since it may conform better to theory.
null
CC BY-SA 2.5
null
2010-10-20T02:40:28.530
2010-10-20T02:40:28.530
null
null
251
null
3785
2
null
3757
6
null
In this case it would not be expected to find substantive differences between the 'NoCor' model and the 'WithCor' model that you have specified. This is because 'Size' is a factor, not a numeric covariate, and what changes between the two models is that instead of the random effects being referenced to the intercept (base level) the random effects are set as stand alone. You can see when you contrast them with ANOVA that there is no difference in AIC, BIC and logLik and no differences in the degrees of freedom (both models have the same number of parameters). I've found that it might be better to create dummy variables if you want to estimate a variance component for each level of a factor, without including correlations. Something like: ``` rt$Size1<- ifelse(rt$Size == "small", 1, 0) rt$Size2<- ifelse(rt$Size == "med", 1, 0) rt$Size3<- ifelse(rt$Size == "large", 1, 0) NoCor2 <- lmer(RT ~ Size + (0+Size1|ID) + (0+Size2|ID) + (0+Size3|ID), data=rt) ``` You might also want to try the slightly simpler model: ``` NoCorHom <- lmer(RT ~ Size + (1|ID) + (1|ID:Size), data=rt) ``` You can see from the model summary that this fits a single variance for the size factor, equivalent to assuming sphericity and homogeneity (just like a regular repeated measures ANOVA). If Size was numeric then you would be looking at something like the below to compare the correlation parameter: ``` NoCor3 <- lmer(RT ~ Size + (1|ID) + (0+Size|ID), data=rt) #vs Cor <- lmer(RT ~ Size + (1+Size|ID), data=rt) ```
null
CC BY-SA 2.5
null
2010-10-20T06:31:31.430
2010-10-20T06:31:31.430
null
null
966
null
3786
2
null
3782
1
null
The Skillings-Mack test is a general Friedman-type test that can be used in almost any block design with an arbitrary missing-data structure. It's part of the `asbio` package for R, and there's a user-written package `skilmack` for Stata. Skillings, J. H., and G. A. Mack. 1981. [On the use of a Friedman-type statistic in balanced and unbalanced block designs.](http://www.jstor.org/stable/1268034) Technometrics 23: 171-177. Aho, K. [asbio: A collection of statistical tools for biologists.](http://finzi.psych.upenn.edu/R/library/asbio/) Version 0.3-24. 2010-9-18. Comprehensive R Archive Network (CRAN) 2010-09-19. Chatfield, M. and Mander, A. [The Skillings–Mack test (Friedman test when there are missing data).](http://www.stata-journal.com/article.html?article=st0167) Stata Journal 9(2):299-305.
null
CC BY-SA 2.5
null
2010-10-20T07:22:44.740
2010-10-20T07:22:44.740
null
null
449
null
3787
1
3790
null
46
19969
For a unimodal distribution that is moderately skewed, we have the following empirical relationship between the mean, median and mode: $$ \text{(Mean - Mode)}\sim 3\,\text{(Mean - Median)} $$ How was this relationship derived? Did Karl Pearson plot thousands of these relationships before forming this conclusion, or is there a logical line of reasoning behind this relationship?
Empirical relationship between mean, median and mode
CC BY-SA 3.0
null
2010-10-20T08:22:36.250
2018-08-13T03:07:45.363
2017-09-18T11:19:07.617
60613
1636
[ "distributions", "mathematical-statistics", "descriptive-statistics", "history" ]
3788
1
null
null
6
10132
1- How can I check if a set of data can be assumed to be IID data? I'm not so familiar with statistics, but I guess I should look at the first lag of autocorrelation for independence. I have no idea about the identical-distribution condition! 2- It seems that I was not clear enough! I'm trying to detect outliers in a series of records (turbulent flow velocity in a river). I transform the data into wavelet space and then I shrink the wavelets over a certain threshold. Since the standard deviation is the worst option as a scale estimator, I looked for a new estimator. Rousseeuw and Croux developed new robust estimators for measuring dispersion in iid random variables, Sn and Qn. I don't know offhand if the high breakdown properties they enjoy carry over to the time-series case or not. From the answer given by kwak, I can infer that the wavelets do NOT satisfy the independence property, since after shrinkage the location of the non-zero elements indicates the spike location in the original time series. Am I right? (Shuffling the indices results in losing the location of the spikes.) If so, other scale estimators like the median absolute deviation (MAD) are not valid in the case of time series either, as we calculate a median. How about the identical-distribution assumption requirements? 3- OK, let me ask my question in a simple manner: I want to use the robust scale estimators Sn and Qn for shrinking a series of wavelets. The wavelets are obtained from decomposing observations of a turbulent flow field's velocity vectors collected at a 1 Hz sampling rate. If the data can be assumed to be iid, then e.g. Qn has a breakdown point of 50% and an efficiency of 82% (Gaussian distribution). My question is whether the high breakdown properties they enjoy carry over to the time-series case or not. Or how can I verify that the wavelets have iid characteristics?
How can the IID assumption be checked in a given dataset?
CC BY-SA 2.5
null
2010-10-20T08:24:46.747
2010-10-21T09:21:55.823
2010-10-21T09:21:55.823
1637
1637
[ "distributions", "time-series", "autocorrelation" ]
3789
2
null
3727
4
null
This is an interesting question, which doesn't really have a good solution. There are a few different ways of tackling this problem. - Assume an underlying distribution and match moments - as suggested in the answers by @ivant and @onestop. One downside is that the multivariate generalisation may be unclear. - Saddlepoint approximations. In this paper: Gillespie, C.S. and Renshaw, E. An improved saddlepoint approximation. Mathematical Biosciences, 2007. We look at recovering a pdf/pmf when given only the first few moments. We found that this approach works when the skewness isn't too large. - Laguerre expansions: Mustapha, H. and Dimitrakopoulosa, R. Generalized Laguerre expansions of multivariate probability densities with moments. Computers & Mathematics with Applications, 2010. The results in this paper seem more promising, but I haven't coded them up.
null
CC BY-SA 2.5
null
2010-10-20T08:32:13.313
2010-10-20T08:32:13.313
null
null
8
null
3790
2
null
3787
34
null
Denote $\mu$ the mean ($\neq$ average), $m$ the median, $\sigma$ the standard deviation and $M$ the mode. Finally, let $X$ be the sample, a realization of a continuous unimodal distribution $F$ for which the first two moments exist. It's well known that $$|\mu-m|\leq\sigma\label{d}\tag{1}$$ This is a frequent textbook exercise: \begin{eqnarray} |\mu-m| &=& |E(X-m)| \\ &\leq& E|X-m| \\ &\leq& E|X-\mu| \\ &=& E\sqrt{(X-\mu)^2} \\ &\leq& \sqrt{E(X-\mu)^2} \\ &=& \sigma \end{eqnarray} The first equality derives from the definition of the mean, the third comes about because the median is the unique minimiser (among all $c$'s) of $E|X-c|$ and the fourth from Jensen's inequality (i.e. the definition of a convex function). Actually, this inequality can be made tighter. In fact, for any $F$, satisfying the conditions above, it can be shown [3] that $$|m-\mu|\leq \sqrt{0.6}\sigma\label{f}\tag{2}$$ Even though it is in general not true ([Abadir, 2005](http://www.jstor.org/stable/3533476)) that any unimodal distribution must satisfy either one of $$M\leq m\leq\mu\textit{ or }M\geq m\geq \mu$$ it can still be shown that the inequality $$|\mu-M|\leq\sqrt{3}\sigma\label{e}\tag{3}$$ holds for any unimodal, square integrable distribution (regardless of skew). This is proven formally in [Johnson and Rogers (1951)](http://www.jstor.org/stable/pdfplus/2236630.pdf?acceptTC=true&jpdConfirm=true) though the proof depends on many auxiliary lemmas that are hard to fit here. Go see the original paper. --- A sufficient condition for a distribution $F$ to satisfy $\mu\leq m\leq M$ is given in [2]. If $F$: $$F(m−x)+F(m+x)\geq 1 \text{ for all }x\label{g}\tag{4}$$ then $\mu\leq m\leq M$. Furthermore, if $\mu\neq m$, then the inequality is strict. The Pearson Type I to XII distributions are one example of family of distributions satisfying $(4)$ [4] (for example, the Weibull is one common distribution for which $(4)$ does not hold, see [5]). Now assuming that $(4)$ holds strictly and w.l.o.g. that $\sigma=1$, we have that $$3(m-\mu)\in(0,3\sqrt{0.6}] \mbox{ and } M-\mu\in(m-\mu,\sqrt{3}]$$ and since the second of these two ranges is not empty, it's certainly possible to find distributions for which the assertion is true (e.g. when $0<m-\mu<\frac{\sqrt{3}}{3}<\sigma=1$) for some range of values of the distribution's parameters but it is not true for all distributions and not even for all distributions satisfying $(4)$. - [0]: The Moment Problem for Unimodal Distributions. N. L. Johnson and C. A. Rogers. The Annals of Mathematical Statistics, Vol. 22, No. 3 (Sep., 1951), pp. 433-439 - [1]: The Mean-Median-Mode Inequality: Counterexamples Karim M. Abadir Econometric Theory, Vol. 21, No. 2 (Apr., 2005), pp. 477-482 - [2]: W. R. van Zwet, Mean, median, mode II, Statist. Neerlandica, 33 (1979), pp. 1--5. - [3]: The Mean, Median, and Mode of Unimodal Distributions:A Characterization. S. Basu and A. DasGupta (1997). Theory Probab. Appl., 41(2), 210–223. - [4]: Some Remarks On The Mean, Median, Mode And Skewness. Michikazu Sato. Australian Journal of Statistics. Volume 39, Issue 2, pages 219–224, June 1997 - [5]: P. T. von Hippel (2005). Mean, Median, and Skew: Correcting a Textbook Rule. Journal of Statistics Education Volume 13, Number 2.
null
CC BY-SA 3.0
null
2010-10-20T08:34:16.963
2017-10-26T00:38:34.403
2017-10-26T00:38:34.403
603
603
null
3791
2
null
3782
0
null
As you say you "designed" an experiment, it would be better if you could give a description of your design and data set. Even if the data are heteroscedastic and non-normal, some variable transformations might help and you may be able to take advantage of the design. The t-test is fairly robust to the normality assumption.
null
CC BY-SA 4.0
null
2010-10-20T08:53:24.307
2020-12-18T17:46:42.550
2020-12-18T17:46:42.550
11887
1307
null
3792
2
null
3788
4
null
You don't frame the two problems the right way. Given a random dataset, ie a collection of observations $x_{ij}$ lying in general [position](http://en.wikipedia.org/wiki/General_position) you can always make the $n$ $x_{i}\in\mathbb{R}^p$ independent from one another by randomly shuffling the $n$ indexes. The real question is whether you will lose information doing this. In some context you will (times series, panel data, cluster analysis, functional analysis,...) in others you won't. That's for the first I in IID. The 'ID' is also defined with respect to what you mean by distribution. Any mixture of distribution is also a distribution. Most often, 'ID' is a portmanteau term for 'unimodal'.
null
CC BY-SA 2.5
null
2010-10-20T08:57:39.543
2010-10-20T11:01:37.220
2010-10-20T11:01:37.220
603
603
null
3793
1
3807
null
3
112
Is there a term that describes what I'm trying to do below? Also, how would you do this using something like JMP or Excel? (or do I need to code this in something like perl?) Given this sort of data: ``` ID| opened | closed | quantity -------------------------------------- 1 | 2010-01-01 | 2010-01-03 | 1 2 | 2010-01-02 | | 2 3 | 2010-01-02 | 2010-01-05 | 3 ``` I'd like to take this data and then graph it, with x being a timeline and y being the total quantity open: ``` on date | total quantity open --------------------------------- 2010-01-01 | 1 2010-01-02 | 6 2010-01-03 | 5 2010-01-04 | 5 2010-01-05 | 2 ```
Is there a term for generating time-line based data from individual points? Also how would I do this?
CC BY-SA 2.5
null
2010-10-20T09:55:52.417
2010-10-20T15:32:06.020
2010-10-20T15:32:06.020
919
1641
[ "terminology", "excel" ]
3794
2
null
3793
0
null
One simple algorithm which could be implemented easily would be: Step 1: Add variables to your data frame, one for each time you want to graph. In pseudocode: `if time_i >= opened AND (closed is blank OR time_i < closed) then quantity else 0` Step 2: Sum the rows for each generated time variable to get the quantity at each time. Step 3: It would then be straightforward to graph the resulting data. All these steps would be straightforward in Excel, R, or any number of other programs.
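A sketch of the same steps in R, using the three records from the question (the blank close date is treated as still open):

```
dat <- data.frame(
  opened   = as.Date(c("2010-01-01", "2010-01-02", "2010-01-02")),
  closed   = as.Date(c("2010-01-03", NA, "2010-01-05")),
  quantity = c(1, 2, 3)
)

days <- seq(min(dat$opened), as.Date("2010-01-05"), by = "day")

# Steps 1 & 2: for each day, sum the quantities of the records open on that day
open_qty <- sapply(seq_along(days), function(i) {
  d <- days[i]
  sum(dat$quantity[dat$opened <= d & (is.na(dat$closed) | dat$closed > d)])
})

# Step 3: graph
plot(days, open_qty, type = "s", xlab = "date", ylab = "total quantity open")
```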
null
CC BY-SA 2.5
null
2010-10-20T10:16:28.447
2010-10-20T10:16:28.447
null
null
183
null
3795
1
null
null
9
9335
I am looking for a software tool (preferably open source) to draw structural equation/mixture models efficiently and prettily. After looking into xfig and graphviz I now stick to the general vector graphics package [inkscape](http://inkscape.org) because it seems most flexible. I would like to poll the stat.stackexchange community: How do you draw your structural equation/mixture models? What software do you use?
How do you draw structural equation/MPLUS models?
CC BY-SA 2.5
null
2010-10-20T10:33:59.067
2017-03-20T21:11:25.560
2017-03-20T21:11:25.560
12359
767
[ "data-visualization", "modeling", "structural-equation-modeling", "software" ]
3796
2
null
3795
4
null
I use the [psych](http://cran.r-project.org/web/packages/psych/index.html) R package for CFA and John Fox's [sem](http://cran.r-project.org/web/packages/sem/index.html) package with simple SEM. Note that the graphical backend is graphviz. I don't remember if the [lavaan](http://lavaan.ugent.be/) package provides similar or better facilities. Otherwise, the [Mx software](http://www.vcu.edu/mx/) for genetic modeling features a graphical interface in its Windows flavour, and you can export the model with path coefficients.
null
CC BY-SA 2.5
null
2010-10-20T10:58:25.770
2010-10-20T11:13:51.817
2010-10-20T11:13:51.817
930
930
null
3798
2
null
570
13
null
I met Laura Trinchera who contributed a nice R package for PLS path modeling, [plspm](http://cran.r-project.org/web/packages/plspm/). It includes several graphical outputs for various kinds of 2- and k-block data structures. I just discovered the [plotSEMM](http://www.bethanykok.com/plotSEMM.html) R package. It's more related to your second point, though, and is restricted to graphing bivariate relationships. As for recent references on diagnostic plots for SEMs, here are a few papers that may be interesting (for the second one, I just browsed the abstract recently but cannot find an ungated version): - Sanchez BN, Houseman EA, and Ryan LM. Residual-Based Diagnostics for Structural Equation Models. Biometrics (2009) 65, 104–115 - Yuan KH and Hayashi K. Fitting data to model: Structural equation modeling diagnosis using two scatter plots, Psychological Methods (2010) - Porzio GC and Vitale MP. Discovering interaction in Structural Equation Models through a diagnostic plot. ISI 58th World Congress (2011).
null
CC BY-SA 3.0
null
2010-10-20T11:31:09.770
2012-01-22T19:36:01.517
2012-01-22T19:36:01.517
930
930
null
3799
1
3813
null
11
312
I've got the following problem: - We have a set of N people - We have a set of K images - Each person rates some number of images. A person might like or not like an image (these are the only two possibilities). - The problem is how to calculate the likelihood that some person likes a particular image. I'll give an example presenting my intuition. N = 4 K = 5 + means that the person likes the image - means that the person doesn't like the image 0 means that the person hasn't been asked about the image, and that value should be predicted ``` x 1 2 3 4 5 1 + - 0 0 + 2 + - + 0 + 3 - - + + 0 4 - 0 - - - ``` Person 1 will probably like image 3 because person 2 has similar preferences and person 2 likes image 3. Person 4 will probably not like image 2 because no one else likes it and, in addition, person 4 does not like most images. Is there any well-known method which can be used to calculate such a likelihood?
Probability that someone will like image
CC BY-SA 2.5
null
2010-10-20T11:59:04.173
2010-10-21T09:06:04.907
2010-10-21T09:06:04.907
183
1643
[ "missing-data", "rating" ]
3800
2
null
3799
6
null
This looks like a good problem for machine learning, so I'll concentrate on this group of methods. The first and most obvious idea is the kNN algorithm. There you first calculate the similarity among viewers and then predict the missing votes with the average vote on this picture cast by similar users. For details see [Wikipedia](http://en.wikipedia.org/wiki/KNN). Another idea is to grow an unsupervised random forest on this data (either way, with attributes on images or people, whichever is better) and impute the missing data based on the forest structure; the whole method is implemented and described in the R `randomForest` package; look for the `rfImpute` function. Finally, you can restructure the problem into a plain classification task, say make an object of each zero in the matrix and try to think of some reasonable descriptors (like average viewer vote, average image vote, vote of the most, second most, ... similar viewer, the same for images, and possibly some external data such as average hue of the image, age of the voter, etc.). Then try various classifiers on this data (SVM, RF, NB, ...). There are also some more complex possibilities; for an overview you can look at the Netflix Prize challenge (which was a similar problem) solutions.
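To make the first (kNN) idea concrete, here is a minimal base-R sketch on the toy matrix from the question (+1 = like, -1 = dislike, NA = unknown). The similarity used — the proportion of co-rated images on which two viewers agree — is just one simple choice among many:

```
votes <- rbind(c( 1, -1, NA, NA,  1),
               c( 1, -1,  1, NA,  1),
               c(-1, -1,  1,  1, NA),
               c(-1, NA, -1, -1, -1))

sim <- function(a, b) {                 # agreement on images rated by both
  both <- !is.na(a) & !is.na(b)
  if (!any(both)) return(NA)
  mean(a[both] == b[both])
}

predict_vote <- function(person, image, k = 2) {
  others <- setdiff(seq_len(nrow(votes)), person)
  s      <- sapply(others, function(o) sim(votes[person, ], votes[o, ]))
  rated  <- others[!is.na(votes[others, image]) & !is.na(s)]
  if (length(rated) == 0) return(NA)
  keep   <- rated[order(-s[match(rated, others)])][seq_len(min(k, length(rated)))]
  mean(votes[keep, image])              # > 0 suggests "like", < 0 "dislike"
}

predict_vote(1, 3)   # person 1, image 3 -> positive, i.e. probably "like"
predict_vote(4, 2)   # person 4, image 2 -> negative, i.e. probably "dislike"
```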
null
CC BY-SA 2.5
null
2010-10-20T12:29:20.677
2010-10-20T12:49:42.360
2010-10-20T12:49:42.360
null
null
null
3801
2
null
3781
3
null
Yes, you have repetitions of each scenario, so you can make a histogram of each coefficient's values. Now treat it as a real distribution and just find a band that encloses 99, 95 or whatever percent of its area -- this will be a nonparametric approximation of the CI. The simpler way is to assume normality and just compute the standard deviation over repetitions; this is useful when the number of repetitions is low, but will obviously give bad results when the over-repetition distribution is far from normal.
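In R both versions are essentially one-liners (a sketch; coefs stands for the 500 slope estimates from one sampling-interval × site combination):

```
coefs <- rnorm(500, mean = -0.03, sd = 0.01)   # placeholder for the 500 estimates

quantile(coefs, probs = c(0.025, 0.975))       # nonparametric 95% band
mean(coefs) + c(-1, 1) * 1.96 * sd(coefs)      # normal-approximation band
```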
null
CC BY-SA 2.5
null
2010-10-20T12:58:47.653
2010-10-20T12:58:47.653
null
null
null
null
3802
2
null
3795
11
null
I use [OpenMx](http://openmx.psyc.virginia.edu/) for SEM modeling where I simply use the [omxGraphViz](http://openmx.psyc.virginia.edu/docs/OpenMx/latest/_static/Rdoc/omxGraphviz.html) function to return a dotfile. I haven't found it too inflexible -- the default output looks pretty good and though I've rarely needed to modify the dotfile, it's not hard to do. Update By the way, Graphviz can output SVG files, which can be imported into Inkscape, giving you the best of both worlds. :)
null
CC BY-SA 2.5
null
2010-10-20T13:33:51.450
2010-10-20T14:55:41.667
2010-10-20T14:55:41.667
251
251
null
3803
2
null
3779
7
null
Srikant is right: a Monte Carlo study is the way to go. There are two reasons. First, the answer depends strongly on the structure of the dictionary. Two extremes are (1) the dictionary contains every possible single-letter word. In this case, the chance of not making a word in a draw of $1$ or more letters is zero. (2) The dictionary contains only words formed out of a single letter (e.g., "a", "aa", "aaa", etc.). The chance of not making a word in a draw of $k$ letters is easily determined and obviously is nonzero. Any definite closed-form answer would have to incorporate the entire dictionary structure and would be a truly awful and long formula. The second reason is that MC indeed is feasible: you just have to do it right. The preceding paragraph provides a clue: don't just generate words at random and look them up; instead, analyze the dictionary first and exploit its structure. One way represents the words in the dictionary as a tree. The tree is rooted at the empty symbol and branches on each letter all the way down; its leaves are (of course) the words themselves. However, we can also insert all nontrivial permutations of every word into the tree, too (up to $k!-1$ of them for each word). This can be done efficiently because one does not have to store all those permutations; only the edges in the tree need to be added. The leaves remain the same. In fact, this can be simplified further by insisting that the tree be followed in alphabetical order. In other words, to determine whether a multiset of $k$ characters is in the dictionary, first arrange the elements into sorted order, then look for this sorted "word" in a tree constructed from the sorted representatives of the words in the original dictionary. This will actually be smaller than the original tree because it merges all sets of words that are sort-equivalent, such as {stop, post, pots, opts, spot}. In fact, in an English dictionary this class of words would never be reached anyway because "so" would be found first. Let's see this in action. The sorted multiset is "opst"; the "o" would branch to all words containing only the letters {o, p, ..., z}, the "p" would branch to all words containing only {o, p, ..., z} and at most one "o", and finally the "s" would branch to the leaf "so"! (I have assumed that none of the plausible candidates "o", "op", "po", "ops", or "pos" are in the dictionary.) A modification is needed to handle wildcards: I'll let the programmer types among you think about that. It won't increase the dictionary size (it should decrease it, in fact); it will slightly slow down the tree traversal, but without changing it in any fundamental way. In any dictionary that contains a single-letter word, like English ("a", "i"), there is no complication: the presence of a wildcard means you can form a word! (This hints that the original question might not be as interesting as it sounds.) The upshot is that a single dictionary lookup requires (a) sorting a $k$-letter multiset and (b) traversing no more than $k$ edges of a tree. The running time is $O(k \log(k))$. If you cleverly generate random multisets in sorted order (I can think of several efficient ways to do this), the running time reduces to $O(k)$. Multiply this by the number of iterations to get the total running time. I bet you could conduct this study with a real Scrabble set and a million iterations in a matter of seconds.
null
CC BY-SA 2.5
null
2010-10-20T14:29:00.610
2010-10-20T14:29:00.610
null
null
919
null
3804
1
3832
null
4
1689
Suppose I'm modeling a set of processes using a beta-binomial prior. I can build parameterized beta-binomial models that average over large groups of the processes to give reasonable, although coarse, priors. $p_i \sim \beta B(n, \alpha_i, \beta_i)$ (roughly) I know how to update those priors using observed partial data via Bayes' rule. However, for a subset of the priors, I actually have a little more historical data that I'd like to incorporate into the prior, call it $h_j$, where $j \in h$ is a subset of the $i$s. So the result would be an updated distribution, call it $p'_i$. That additional data is a scalar. For example, if I've got a beta-binomial with $n=9$, $\alpha=2$ and $\beta=3$ (see the examples for the `dbetabin.ab` function in the VGAM R package), it has a mode of 3, but I might have additional prior information that suggests the mode should be closer to 6. I happen to know that this additional information is only modestly predictive ($r$ of .4, say). But it's still better than nothing, and for this particular process, it's known to be a better predictor than the expected value of my existing beta-binomial prior ($r$ of around .3). So, what I'm looking for, is a way to update the beta-binomial, using this scalar, so that the result is also a beta-binomial, which I can then update like any of my other process models as data comes in. (That is, I need a closed-form expression.) $(\alpha'_i, \beta'_i) = f(\alpha_i, \beta_i, h_i, \theta)$, where $\theta$ has something to do with the relative estimated predictiveness of the original beta-binomial and the scalar $h$. What's a reasonable approach here? Is there a way to adjust the $\alpha$ and $\beta$ parameters so that the central tendency is pulled an appropriate amount towards my modestly-predictive scalar? I'm happy to use cross-validation or something to identify a weighting parameter, if that's the right way to go about this.
Updating a beta-binomial
CC BY-SA 3.0
null
2010-10-20T14:43:05.913
2017-08-27T12:45:40.327
2017-08-27T12:45:40.327
11887
6
[ "bayesian", "prior", "beta-binomial-distribution" ]
3805
1
3806
null
5
681
I have a very simple model. This model uses data that are not given as continuous distributions, but are described by percentiles. What is the best way to sample these percentile bins, when the bins are of unequal size? So, for example, to select the body weight for a given individual, I pick a random number between 0-100, then match this value to the nearest percentile. I don't interpolate or extrapolate, I just match the value I draw to the nearest bin. (Extrapolating isn't a good idea given the data.) Let's say, for body weight, the percentiles I have are 25, 50 and 75. But this gives bin sizes of 37.5 (0-37.5), 25 (37.5-62.5), and 37.5 (62.5-100). So because of the unequal bin sizes, I'm going to be sampling both the 25% and 75% bins much more than I'll be sampling the median, 50%, bin. This is the opposite of what I'd like to happen. I could weight the bins, but that seems arbitrary. Or, instead of drawing my random number from a uniform distribution 0-100, I could draw it from a normal distribution centered at the median, but that also seems arbitrary. Or, alternatively, I'd love to be convinced that I don't actually have a problem here. Any ideas on how I could better set this up? Thanks!
Sampling with unequal bins?
CC BY-SA 2.5
null
2010-10-20T14:47:38.207
2010-10-20T15:06:32.823
2010-10-20T15:06:32.823
919
1645
[ "distributions", "modeling", "sampling", "discrete-data", "quantiles" ]
3806
2
null
3805
2
null
In the spirit of simplicity, while aiming to attain some realism (which is not possible without both interpolation and extrapolation), consider fitting a distribution to the percentiles and sampling from it. For body weights we can expect a power between 1/2 and 1/3 to be normally distributed. By trial and error you can find a power that symmetrizes the percentiles (specifically, look for a $p$ for which $q_{50}^p - q_{25}^p$ is approximately equal to $q_{75}^p - q_{50}^p$). The transformed percentiles easily determine a unique Normal distribution: $q_{50}^p$ is its mean and the scaled IQR $(q_{75}^p - q_{25}^p) / 1.349$ is its standard deviation. To obtain a random body weight, draw from this normal and invert the transformation (that is, apply the $1/p$ power). This example is offered as an illustration only, not as a general recipe. For other attributes (like age or, in another context, recurrence times of major floods) it's wise to let empirical knowledge and theory suggest an appropriate distribution to fit to the percentiles: a Normal distribution is not always appropriate, even after transforming the percentiles for symmetry. It is also wise in many situations to allow for outliers. E.g., you could "contaminate" your normal distribution by occasionally drawing from a distribution with a higher mean and SD, simulating the occasional grossly obese person.
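A sketch of that recipe in R; the three percentiles are made-up body weights, and the search for the symmetrizing power simply minimises the asymmetry criterion described above (with values like these, the normal draws stay well above zero, so the back-transform is safe):

```
q <- c(60, 76, 94)   # placeholder 25th, 50th, 75th percentiles (kg)

# Find the power p that makes the transformed percentiles roughly symmetric
asym <- function(p) abs((q[2]^p - q[1]^p) - (q[3]^p - q[2]^p))
p <- optimize(asym, interval = c(0.1, 1))$minimum

mu    <- q[2]^p                       # mean of the transformed distribution
sigma <- (q[3]^p - q[1]^p) / 1.349    # scaled IQR as the standard deviation

w <- rnorm(10000, mu, sigma)^(1/p)    # simulated body weights (back-transformed)
```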
null
CC BY-SA 2.5
null
2010-10-20T15:05:41.183
2010-10-20T15:05:41.183
null
null
919
null
3807
2
null
3793
2
null
Use SUMIF() to compute the total open to date and the total closed to date. The difference at any time is the total currently open. Let's suppose the data you show are in the range A1:D4 in Excel. Reserve four columns for your output: the two shown plus two for intermediate calculations. Let's suppose they are columns E:H. The formulas are: Column E has the dates in ascending order exactly as shown in your output. Column F is computed by propagating this formula from F2 down as far as needed: `=SUMIF(B$2:B$4,"<=" & $E2,$D$2:$D$4)` (Extend the row index "4" as far down as needed to cover your data.) Column G is computed by propagating the formula from F2 over to G2 and then down. For example, the formula in G3 will be `=SUMIF(C$2:C$4,"<=" & $E3,$D$2:$D$4)` Column H is the difference of columns G and F: it contains the results you need. My spreadsheet looks like this: ``` ID Opened Closed Quantity Date Open Closed Net 1 1/1/2010 1/3/2010 1 1/1/2010 1 0 1 2 1/2/2010 2 1/2/2010 6 0 6 3 1/2/2010 1/5/2010 3 1/3/2010 6 1 5 1/4/2010 6 1 5 1/5/2010 6 4 2 ``` The translation to R is straightforward for those who prefer that environment.
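For the record, one possible R translation mirroring the SUMIF logic (cumulative quantity opened to date minus cumulative quantity closed to date), using the same three records with the blank close date treated as not yet closed:

```
dat <- data.frame(opened   = as.Date(c("2010-01-01", "2010-01-02", "2010-01-02")),
                  closed   = as.Date(c("2010-01-03", NA, "2010-01-05")),
                  quantity = c(1, 2, 3))
dates <- seq(as.Date("2010-01-01"), as.Date("2010-01-05"), by = "day")

opened_todate <- sapply(seq_along(dates), function(i)
  sum(dat$quantity[dat$opened <= dates[i]]))
closed_todate <- sapply(seq_along(dates), function(i)
  sum(dat$quantity[!is.na(dat$closed) & dat$closed <= dates[i]]))

data.frame(Date = dates, Open = opened_todate, Closed = closed_todate,
           Net = opened_todate - closed_todate)
```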
null
CC BY-SA 2.5
null
2010-10-20T15:30:33.467
2010-10-20T15:30:33.467
null
null
919
null
3808
2
null
3752
2
null
There seem to be two different questions here, which I'll try to split: 1) how is KS, kernel smoothing, different from KDE, kernel density estimation? Well, say I have an estimator / smoother / interpolator ``` est( xi, fi -> gridj, estj ) ``` and also happen to know the "real" densityf() at the xi. Then running `est( x, densityf )` must give an estimate of densityf(): a KDE. It may well be that KSs and KDEs are evaluated differently — different smoothness criteria, different norms — but I don't see a fundamental difference. What am I missing? 2) How does dimension affect estimation or smoothing, intuitively? Here's a toy example, just to help intuition. Consider a box of N=10000 points in a uniform grid, and a window, a line or square or cube, of W=64 points within it: ``` 1d 2d 3d 4d --------------------------------------------------------------- data 10000 100x100 22x22x22 10x10x10x10 side 10000 100 22 10 window 64 8x8 4x4x4 2.8^4 side ratio .64 % 8 % 19 % 28 % dist to win 5000 47 13 7 ``` Here "side ratio" is window side / box side, and "dist to win" is a rough estimate of the mean distance of a random point in the box to a randomly-placed window. Does this make any sense at all? (A picture or applet would really help: anyone?) The idea is that a fixed-size window within a fixed-size box has very different nearness to the rest of the box, in 1d 2d 3d 4d. This is for a uniform grid; maybe the strong dependence on dimension carries over to other distributions, maybe not. Anyway, it looks like a strong general effect, an aspect of the curse of dimensionality.
null
CC BY-SA 2.5
null
2010-10-20T15:51:18.260
2010-10-20T15:51:18.260
null
null
557
null
3809
2
null
3804
1
null
Assume that prior2 is a beta random variable and set $\alpha$ and $\beta$ as needed subject to the constraint that $\frac{\alpha-1}{\alpha + \beta -2} = 6$. In response to your comment: - Getting to prior2: Fix either $\alpha$ or $\beta$ at the same value as prior1 and tweak the other to match the desired mode. If the above does not work then you can use whatever constraints you want to impose (e.g., same variance) and use some sort of routine (e.g., optimization) to get to your desired mode (e.g., minimize $\left|\frac{\alpha-1}{\alpha + \beta -2} - 6\right|$ subject to constraints) or simply play around till your prior2 parameters are consistent with your constraints. - Accommodating the fact that you do not fully believe in prior2: A principled way to approach the issue of 20% trust in prior2 is to assume mixture priors. Thus, your prior is: $f(\alpha_1,\beta_1|-) 0.8 + f(\alpha_2,\beta_2|-) 0.2$. You could multiply your likelihood with the above mixture priors to get a beta-binomial model.
null
CC BY-SA 2.5
null
2010-10-20T16:09:43.653
2010-10-20T18:39:15.817
2010-10-20T18:39:15.817
null
null
null
3810
1
null
null
4
3374
I have data from a survey comprised of several measures that used different Likert-type scaling (4-, 5-, and 6-point scales). I would like to run a principal components analysis using the data from these measures. It seems to me that I need to transform this data in some way so that the power of all items is equivalent prior to analysis. However, I am uncertain how to proceed.
Data transformation for Principal Components Analysis from different Likert scales
CC BY-SA 2.5
null
2010-10-20T16:38:35.493
2020-11-15T08:13:18.257
2020-11-15T08:13:18.257
930
1647
[ "pca", "data-transformation", "likert", "scales", "psychometrics" ]
3812
2
null
3810
9
null
As suggested by @whuber, you can "abstract" the scale effect by working with a standardized version of your data. If you're willing to accept that an interval scale is the support of each of your items (i.e. the distance between every two response categories would have the same meaning for every respondent), then linear correlations are fine. But you can also compute [polychoric correlation](http://en.wikipedia.org/wiki/Polychoric_correlation) to better account for the discretization of a latent variable (see the R package [polycor](http://cran.r-project.org/web/packages/polycor/index.html)). Of note, it's a much more computer-intensive job, but it works quite well in R. Another possibility is to combine optimal scaling with your PCA, as implemented in the [homals](http://cran.r-project.org/web/packages/homals/index.html) package. The idea is to find a suitable non-linear transformation of each scale, and this is very nicely described by Jan de Leeuw in the accompanying vignette or the JSS article, [Gifi Methods for Optimal Scaling in R: The Package homals](http://www.jstatsoft.org/v31/i04/paper). There are several examples included. For a more thorough understanding of this approach with any factorial method, see the work of [Yoshio Takane](http://takane.brinkster.net/Yoshio/) in the 80s. Similar points were raised by @Jeromy and @mbq on related questions, [Does it ever make sense to treat categorical data as continuous?](https://stats.stackexchange.com/questions/539/does-it-ever-make-sense-to-treat-categorical-data-as-continuous), [How can I use optimal scaling to scale an ordinal categorical variable?](https://stats.stackexchange.com/questions/548/how-can-i-use-optimal-scaling-to-scale-an-ordinal-categorical-variable)
null
CC BY-SA 2.5
null
2010-10-20T17:20:23.490
2010-10-20T18:18:35.790
2017-04-13T12:44:33.310
-1
930
null
3813
2
null
3799
6
null
I believe this is a standard problem of [Collaborative Filtering](http://en.wikipedia.org/wiki/Collaborative_filtering). A google search gives thousands of results.
null
CC BY-SA 2.5
null
2010-10-20T19:01:44.367
2010-10-20T19:01:44.367
null
null
795
null
3814
1
null
null
105
7058
I recently asked a question regarding general principles around [reviewing statistics in papers](https://stats.stackexchange.com/questions/3460/reviewing-statistics-in-papers). What I would now like to ask, is what particularly irritates you when reviewing a paper, i.e. what's the best way to really annoy a statistical referee! One example per answer, please.
How to annoy a statistical referee?
CC BY-SA 3.0
null
2010-10-20T19:09:31.910
2019-06-25T12:56:38.810
2017-04-13T12:44:45.783
-1
8
[ "references", "referee" ]