Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
5357 | 2 | null | 5340 | 11 | null | I mistrust all but the lowest-level functions in Excel, and for good reason: many procedures that go beyond simple arithmetic operations have flaws or errors and most of them are poorly documented. This includes all the probability distribution functions.
Numerical flaws are inevitable due to limitations in floating point accuracy. For example, no matter what platform you use, if it uses double precision then don't try to compute the tail probability of a standard normal distribution for z = 50: the value equals $10^{-545}$, which underflows. However, using NORMSDIST you'll get increasingly bad values once |z| exceeds 3 or so, and once |z| exceeds 8 Excel gives up and just returns zero. Here is a tabulation of some of the errors (compared to Mathematica's answers):
$$\eqalign{
Z &\quad 10^6\text{(Excel - True)/True} \cr
-3 &\quad 50 \cr
-4 &\quad 468 \cr
-5 &\quad 1580\cr
-6 &\quad 3582\cr
-7 &\quad 6462\cr
-8 &\quad 70789\cr
-9 &\quad -1000000
}$$
Therefore, if you must use Excel's statistical functions, severely limit the ones you do use; learn their flaws and foibles; work around those problems; and use the ones you are familiar with as building blocks for everything else. In this spirit, I recommend computing lognormal probabilities in terms of NORMSDIST: just apply it to the log of the argument. Specifically, in place of LOGNORMDIST(z, mu, sigma) use NORMSDIST((LN(z) - mu)/sigma). Once you have verified that these expressions do return the same values, you have also established exactly what LOGNORMDIST does: there's little possibility of confusion. In particular, you can see that mu is the mean of the logarithms, not the geometric mean, and that sigma is the SD of the logs, not the geometric SD.
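The same identity is easy to check outside Excel; here is a minimal R sketch (my own illustration, with `plnorm`/`pnorm` playing the roles of LOGNORMDIST/NORMSDIST and purely illustrative values of z, mu, and sigma):
```
z <- 3.2; mu <- 1; sigma <- 0.5
plnorm(z, meanlog = mu, sdlog = sigma)    # lognormal CDF, the analogue of LOGNORMDIST
pnorm((log(z) - mu) / sigma)              # same value via the standard normal CDF
```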
| null | CC BY-SA 2.5 | null | 2010-12-10T17:11:32.343 | 2010-12-10T17:11:32.343 | null | null | 919 | null |
5358 | 2 | null | 5347 | 26 | null | If it often resembles a [Poisson](http://en.wikipedia.org/wiki/Poisson_distribution), have you tried approximating it by a Poisson with parameter $\lambda = \sum p_i$ ?
EDIT: I've found a theoretical result to justify this, as well as a name for the distribution of $Y$: it's called the [Poisson binomial distribution](http://en.wikipedia.org/wiki/Poisson_binomial_distribution). [Le Cam's inequality](http://en.wikipedia.org/wiki/Le_Cam%27s_theorem) tells you how closely its distribution is approximated by the distribution of a Poisson with parameter $\lambda = \sum p_i$. It tells you the quality of this approx is governed by the sum of the squares of the $p_i$s, to paraphrase [Steele (1994)](http://www.jstor.org/stable/2325124). So if all your $p_i$s are reasonably small, as it now appears they are, it should be a pretty good approximation.
EDIT 2: How small is 'reasonably small'? Well, that depends how good you need the approximation to be! The [Wikipedia article on Le Cam's theorem](http://en.wikipedia.org/wiki/Le_Cam%27s_theorem) gives the precise form of the result I referred to above: the sum of the absolute differences between the [probability mass function](http://en.wikipedia.org/wiki/Probability_mass_function) (pmf) of $Y$ and the pmf of the above Poisson distribution is no more than twice the sum of the squares of the $p_i$s. Another result from [Le Cam (1960)](http://www.projecteuclid.org/euclid.pjm/1103038058) may be easier to use: this sum is also no more than 18 times the largest $p_i$. There are quite a few more such results... see [Serfling (1978)](http://www.jstor.org/stable/2030354) for one review.
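To make this concrete, here is a small R sketch (not part of the original answer) that simulates a Poisson binomial variable for an illustrative set of small $p_i$s, compares its empirical pmf with the Poisson($\sum p_i$) pmf, and evaluates the $2\sum p_i^2$ bound:
```
set.seed(1)
p <- runif(1000, 0, 0.02)                    # illustrative small success probabilities
lambda <- sum(p)
Y <- replicate(1e4, sum(rbinom(length(p), size = 1, prob = p)))
k <- 0:max(Y)
emp <- tabulate(Y + 1, nbins = max(Y) + 1) / length(Y)   # empirical pmf of Y
sum(abs(emp - dpois(k, lambda)))             # observed discrepancy on the simulated range
2 * sum(p^2)                                 # Le Cam's bound on the total pmf discrepancy
```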
| null | CC BY-SA 2.5 | null | 2010-12-10T18:23:10.457 | 2010-12-12T17:20:27.390 | 2010-12-12T17:20:27.390 | 449 | 449 | null |
5359 | 1 | 5365 | null | 5 | 1537 | >
Diagonal elements of Psi (...)
represent independent noise variances
for each of the variables
C.M. Bishop, Pattern Recognition and Machine Learning
...but I'm not clear on what the Psi signifies.
I've applied FA to a series of images, received the Psi matrix, and am now struggling to interpret the results.
| What does the Psi term in factor analysis signify? | CC BY-SA 2.5 | null | 2010-12-10T18:26:17.983 | 2010-12-11T12:52:08.337 | 2010-12-11T12:52:08.337 | null | 1371 | [
"factor-analysis"
]
|
5360 | 1 | 5361 | null | 13 | 5126 | I am fitting a stepwise logistic regression on a set of data in SPSS. In the procedure, I am fitting my model to a random subset that is approx. 60% of the total sample, which is about 330 cases.
What I find interesting is that every time I re-sample my data, I am getting different variables popping in and out in the final model. A few predictors are always present in the final model, but others pop in and out depending on the sample.
My question is this. What is the best way to handle this? I was hoping to see the convergence of predictor variables, but that isn't the case. Some models make much more intuitive sense from an operational view (and would be easier to explain to the decision makers), and others fit the data slightly better.
In short, since variables are shuffling around, how would you recommend dealing with my situation?
Many thanks in advance.
| Stepwise logistic regression and sampling | CC BY-SA 2.5 | null | 2010-12-10T18:42:31.760 | 2015-12-02T14:51:50.233 | 2010-12-11T12:56:21.487 | null | 569 | [
"logistic",
"spss",
"stepwise-regression"
]
|
5361 | 2 | null | 5360 | 16 | null | If you're going to use a stepwise procedure, don't resample. Create one random subsample once and for all. Perform your analysis on it. Validate the results against the held-out data. It's likely most of the "significant" variables will turn out not to be significant.
(Edit 12/2015: You can indeed go beyond such a simple approach by resampling, repeating the stepwise procedure, and re-validating: this will lead you into a form of cross-validation. But in such a case more sophisticated methods of variable selection, such as ridge regression, the Lasso, and the Elastic Net are likely preferable to stepwise regression.)
Focus on the variables that make sense, not those that fit the data a little better. If you have more than a handful of variables for 330 records, you're at great risk of overfitting in the first place. Consider using fairly severe entering and leaving criteria for the stepwise regression. Base it on AIC or $C_p$ instead of thresholds for $F$ tests or $t$ tests.
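For concreteness, here is a minimal sketch (not part of the original answer) of the single-split, AIC-based stepwise, hold-out validation workflow in R; the data frame `dat` and its 0/1 outcome `y` are placeholders for your own data:
```
library(MASS)                                   # for stepAIC()
set.seed(1)
idx   <- sample(nrow(dat), size = round(0.6 * nrow(dat)))
train <- dat[idx, ]
test  <- dat[-idx, ]
full  <- glm(y ~ ., data = train, family = binomial)
sel   <- stepAIC(full, direction = "both", trace = FALSE)   # AIC-based entering/leaving criteria
pred  <- predict(sel, newdata = test, type = "response")
mean((pred > 0.5) == test$y)                    # crude hold-out accuracy check
```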
(I presume you have already carried out the analysis and exploration to identify appropriate re-expressions of the independent variables, that you have identified likely interactions, and that you have established that there really is an approximately linear relationship between the logit of the dependent variable and the regressors. If not, do this essential preliminary work and only then return to the stepwise regression.)
Be cautious about following generic advice like I just gave, by the way :-). Your approach should depend on the purpose of the analysis (prediction? extrapolation? scientific understanding? decision making?) as well as the nature of the data, the number of variables, etc.
| null | CC BY-SA 3.0 | null | 2010-12-10T19:05:02.207 | 2015-12-02T14:51:50.233 | 2015-12-02T14:51:50.233 | 919 | 919 | null |
5362 | 2 | null | 5278 | 14 | null | Thank you for your great answer so far. While they where very enlightening, I felt that using them for the course I am currently teaching (well, TA'ing) will be too much for my students. (I help teach the course BioStatistics for students from advanced degrees in medicine sciences)
Therefore, I ended up creating two images (Both are simulation based) which I think are useful example for explaining ANOVA.
I would be happy to read comments or suggestions for improving them.
The first image shows a simulation of 30 data points, separated to 3 plots (showing how the MST=Var is separated to the data that creates MSB and MSW:
- The left plot shows a scatter plot of the data per group.
- The middle one shows how the data we are going to use for MSB looks like.
- The right image shows how the data we are going to use for MSW looks like.

The second image shows 4 plots, each one for a different combination of variance and expectation for the groups, where
- The first row of plots is for low variance, while the second row is for high(er) variance.
- The first column of plots is for equal expectations between the groups, while the second column shows groups with (very) different expectations.

| null | CC BY-SA 2.5 | null | 2010-12-10T19:15:07.527 | 2010-12-10T20:26:42.770 | 2010-12-10T20:26:42.770 | 253 | 253 | null |
5363 | 1 | null | null | 24 | 8461 | I got a slight confusion on the [backpropagation](http://en.wikipedia.org/wiki/Backpropagation) algorithm used in [multilayer perceptron](http://en.wikipedia.org/wiki/Multilayer_perceptron) (MLP).
The error is adjusted by the cost function. In backpropagation, we are trying to adjust the weights of the hidden layers. The output error I can understand, that is, `e = d - y` [without the subscripts].
The questions are:
- How does one get the error of hidden layer? How does one calculate it?
- If I backpropagate it, should I use it as the cost function of an adaptive filter, or should I use it, in a pointer (C/C++) programming sense, to update the weights?
| Backpropagation algorithm and error in hidden layer | CC BY-SA 4.0 | null | 2010-12-10T19:21:44.130 | 2021-07-26T16:04:02.020 | 2021-05-05T17:56:24.207 | 155836 | 2329 | [
"machine-learning",
"neural-networks",
"backpropagation"
]
|
5364 | 1 | 5385 | null | 10 | 4734 | The underlying model of [PLS](http://en.wikipedia.org/wiki/Partial_least_squares_regression) is that a given $n \times m$ matrix $X$ and $n$ vector $y$ are related by
$$X = T P' + E,$$
$$y = T q' + f,$$
where $T$ is a latent $n \times k$ matrix, and $E, f$ are noise terms (assuming $X, y$ are centered).
PLS produces estimates of $T, P, q$, and a 'shortcut' vector of regression coefficients, $\hat{\beta}$ such that $y \sim X \hat{\beta}$. I would like to find the distribution of $\hat{\beta}$ under some simplifying assumptions, which should probably include the following:
- The model is correct, i.e. $X = T P' + E,y = T q' + f$ for unknown $T, P, q$;
- The number of latent factors, $k$, is known, and used in the PLS algorithm;
- The actual error terms are i.i.d. zero-mean normal with known variances;
This question is somewhat underdefined because there are scores of variants of 'the' PLS algorithm, but I would accept results for any of them. I would also accept guidance on how to estimate the distribution of $\hat{\beta}$ via e.g. a bootstrap, but perhaps that is a separate question.
| How to compute the confidence intervals on regression coefficients in PLS? | CC BY-SA 2.5 | null | 2010-12-10T20:19:19.707 | 2020-11-08T19:14:42.627 | null | null | 795 | [
"regression",
"confidence-interval",
"latent-variable",
"partial-least-squares"
]
|
5365 | 2 | null | 5359 | 6 | null | Following Bishop's notation, the FA model is written as (Eq. 12.64, p. 584):
$$
p(x|z)=\mathcal{N}(x|\mathbf{W}z+\boldsymbol{\mu},\boldsymbol{\psi})
$$
where $\boldsymbol{\psi}$ is a $D\times D$ diagonal matrix of so-called variable uniquenesses, that is the variance not accounted for by the latent factors, whereas $\mathbf{W}$ reflects factor loadings $\lambda_i$, that is the correlation of variable $i$ with factors represented in $z$ (more exactly, the square of $\lambda_i$ is the variance explained by the latent factor).
If you're not familiar with the FA literature, I would suggest a "lighter" approach, e.g. William Revelle has good tutorials on his website [personality-project.org](http://www.personality-project.org); especially, I would suggest Chapter 6 of his forthcoming book on psychometric methods, entitled [Constructs, Components, and Factor models](http://personality-project.org/r/book/Chapter6.pdf). You will quickly understand the relations between PCA and FA. Specifically, with PCA we are constructing linear combinations of observed variables (this yields a composite variable), whereas in FA we are expressing each variable as a weighted combination of hypothesized latent factors (where the weights are called loadings) plus an error term (the $\boldsymbol{\psi}$ in the above formula). In sum, the FA model incorporates a model for noise--this is what is expressed in Equation 12.65; but see [What are the differences between Factor Analysis and Principal Component Analysis](https://stats.stackexchange.com/questions/1576/what-are-the-differences-between-factor-analysis-and-principal-component-analysis) for additional discussion.
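If it helps to see the $\boldsymbol{\psi}$ term numerically, here is a small R sketch (my own illustration, not from Bishop) that simulates data from a two-factor model and reads off the estimated uniquenesses from `factanal()`:
```
set.seed(1)
n <- 500
L <- matrix(runif(12, 0.4, 0.8), 6, 2)                  # true loadings (6 variables, 2 factors)
Z <- matrix(rnorm(n * 2), n, 2)                         # latent factor scores
X <- Z %*% t(L) + matrix(rnorm(n * 6, sd = 0.5), n, 6)  # add independent noise per variable
fa <- factanal(X, factors = 2)
fa$uniquenesses    # estimates of the diagonal of psi (on the standardized scale)
```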
| null | CC BY-SA 2.5 | null | 2010-12-10T20:34:45.023 | 2010-12-10T20:42:01.910 | 2017-04-13T12:44:20.943 | -1 | 930 | null |
5366 | 1 | null | null | 23 | 18356 | I need to cluster units into $k$ clusters to minimize within-group sum of squares (WSS), but I need to ensure that the clusters each contain at least $m$ units. Any idea if any of R's clustering functions allow for clustering into $k$ clusters subject to a minimum cluster size constraint? kmeans() does not seem to offer a size constraint option.
| Clustering (k-means, or otherwise) with a minimum cluster size constraint | CC BY-SA 2.5 | null | 2010-12-10T20:53:42.207 | 2019-12-09T16:16:06.647 | null | null | 96 | [
"r",
"clustering"
]
|
5367 | 2 | null | 5366 | 4 | null | I think it would just be a matter of running the k means as part of an if loop with a test for cluster sizes, I.e. Count n in cluster k - also remember that k means will give different results for each run on the same data so you should probably be running it as part of a loop anyway to extract the "best" result
| null | CC BY-SA 2.5 | null | 2010-12-10T21:31:02.173 | 2010-12-10T21:31:02.173 | null | null | null | null |
5368 | 1 | null | null | 2 | 451 | I think that after apply PCA the covariance between components is reduced since the aim of PCA is to maximize variance. Covariance is needed by Factor analysis and thus it does not make sense to operate on the data in PCA space.
Does this make any sense? Does someone have any other explanation?
| Why would it not make sense to apply factor analysis after PCA? | CC BY-SA 2.5 | null | 2010-12-11T03:34:15.997 | 2010-12-13T04:35:59.257 | 2010-12-11T12:50:40.273 | null | null | [
"pca",
"factor-analysis"
]
|
5369 | 2 | null | 5368 | 4 | null | PCA actually ends up with orthogonal variables, therefore the covariance between the components should be 0.
It does not make sense to do factor analysis which selects the factors from a dataset after a procedure that makes factors from the dataset.
| null | CC BY-SA 2.5 | null | 2010-12-11T06:27:19.523 | 2010-12-11T06:27:19.523 | null | null | 1808 | null |
5370 | 2 | null | 5366 | 6 | null | Use EM Clustering
In EM clustering, the algorithm iteratively refines an initial cluster model to fit the data and determines the probability that a data point exists in a cluster. The algorithm ends the process when the probabilistic model fits the data. The function used to determine the fit is the log-likelihood of the data given the model.
If empty clusters are generated during the process, or if the membership of one or more of the clusters falls below a given threshold, the clusters with low populations are reseeded at new points and the EM algorithm is rerun.
| null | CC BY-SA 2.5 | null | 2010-12-11T06:33:24.350 | 2010-12-11T06:33:24.350 | null | null | 1808 | null |
5371 | 2 | null | 5366 | 2 | null | How large is your data set? Maybe you could try to run a hierarchical clustering and then decide which clusters retain based on your dendrogram.
If your data set is huge, you could also combine both clustering methods: an initial non-hierarchical clustering and then a hierarchical clustering using the groups from the non-hierarchical analysis. You can find an example of this approach in [Martínez-Pastor et al (2005)](http://www.ncbi.nlm.nih.gov/pubmed/15385419)
| null | CC BY-SA 2.5 | null | 2010-12-11T08:04:01.883 | 2010-12-11T15:52:15.773 | 2010-12-11T15:52:15.773 | 221 | 221 | null |
5372 | 1 | 5383 | null | 3 | 463 | I need to detect Out Of Shelf situations in reatil stores. What I had in mind is to assume that purchase frequency of some article (SKU) has certain ditribution (I tried with Poisson) and then calculate probability (according to that distribution) of that specific article was not bought since last purchase, and if that probability is below some treshold (let say 1%) then it is a possible OOS situation.
My question is:
What distributions are commonly used in such problems, and is there any way that one can test several distributions and then pick the best one? The thing is, this should be done automatically due to the large number of SKU/store combinations to deal with.
| What distribution should be used for the detection of out of shelf situations in a retail stores? | CC BY-SA 2.5 | null | 2010-12-11T11:31:49.120 | 2011-10-16T11:19:48.770 | 2010-12-11T18:57:33.810 | null | 2341 | [
"distributions",
"probability",
"poisson-distribution"
]
|
5373 | 2 | null | 5344 | 23 | null | Others have summarized the differences very well. My impression is that `lme4` is more suited for clustered data sets especially when you need to use crossed random effects. For repeated measures designs (including many longitudinal designs) however, `nlme` is the tool since only `nlme` supports specifying a correlation structure for the residuals. You do it using the `correlations` or `cor` argument with a `corStruct` object. It also allows you to model heteroscedasticity using a `varFunc` object.
| null | CC BY-SA 3.0 | null | 2010-12-11T11:45:13.167 | 2016-12-08T15:40:34.870 | 2016-12-08T15:40:34.870 | 60613 | 2020 | null |
5374 | 2 | null | 5333 | 7 | null | You're misinterpreting these results, which is easy to do as with mixed models there's more than one type of 'fitted value' and the documentation of `lmer` isn't as clear as it might be. Try using `fixed.effects()` in place of `fitted()` and you should get correlations which makes more intuitive sense if you're interested in the contribution of the fixed effects.
The `fitted()` function of `lmer` is documented as giving the 'conditional means'. I had to check the Theory.pdf vignette to work out that these include the predictions of the modelled random effects. Your modelled random effect variances are, overall, smaller in the model including the fixed effect. But smaller random effects mean less shrinkage, i.e. the predicted random effect is closer to the observed residual. When calculating the correlation, it seems that in your case this smaller shrinkage just overcomes the improvement from the fixed effect.
The interpretation of $R^2$ as 'proportion of variance explained' gets more complex with mixed models, as it depends whether you think of random effects as 'explaining' variance. Probably not, in most cases.
| null | CC BY-SA 2.5 | null | 2010-12-11T11:52:26.187 | 2010-12-11T11:52:26.187 | null | null | 449 | null |
5375 | 2 | null | 5278 | 25 | null | How about something like this?

Following Crawley (2005). Statistics. An introduction using R: Wiley.
| null | CC BY-SA 2.5 | null | 2010-12-11T13:19:29.733 | 2010-12-11T13:19:29.733 | null | null | 1050 | null |
5376 | 2 | null | 5278 | 2 | null | It seems the ship has already sailed in terms of an answer, but I think that if this is an introductory course that most of the displays offered here are going to be too difficult to grasp for introductory students... or at the very least too difficult to grasp without an introductory display which provides a very simplified explanation of partitioning variance. Show them how SST total increases with the number of subjects. Then after showing it inflate for several subjects (maybe adding one in each group several times), explain that SST = SSB + SSW (though I prefer to call it SSE from the outset because it avoids confusion when you go to the within subjects test IMO). Then show them a visual representation of the variance partitioning, e.g. a big square color coded such that you can see how SST is made of SSB and SSW. Then, graphs similar to Tals or EDi may become useful, but I agree with EDi that the scale should be SS rather than MS for pedagogical purposes when first explaining things.
| null | CC BY-SA 2.5 | null | 2010-12-11T14:43:17.020 | 2010-12-12T04:24:44.690 | 2010-12-12T04:24:44.690 | 196 | 196 | null |
5377 | 2 | null | 5347 | 4 | null | Well, based on your description and the discussion in the comments it is clear that $Y$ has mean $\sum_i p_i$ and variance $\sum_i p_{i}(1-p_{i})$. The shape of $Y$'s distribution will ultimately depend on the behavior of $p_i$. For suitably "nice" $p_i$ (in the sense that not too many of them are really close to zero), the distribution of $Y$ will be approximately normal (centered right at $\sum p_i$). But as $\sum_i p_i$ starts heading toward zero the distribution will be shifted to the left and when it crowds up against the $y$-axis it will start looking a lot less normal and a lot more Poisson, as @whuber and @onestop have mentioned.
From your comment "the distribution looks Poisson" I suspect that this latter case is what's happening, but can't really be sure without some sort of visual display or summary statistics about the $p$'s. Note however, as @whuber did, that with sufficiently pathological behavior of the $p$'s you can have all sorts of spooky things happen, like limits that are mixture distributions. I doubt that is the case here, but again, it really depends on what your $p$'s are doing.
As to the original question of "how to efficiently model", I was going to suggest a hierarchical model for you but it isn't really appropriate if the $p$'s are fixed constants. In short, take a look at a histogram of the $p$'s and make a first guess based on what you see. I would recommend the answer by @mpiktas (and by extension @csgillespie) if your $p$'s aren't too crowded to the left, and I would recommend the answer by @onestop if they are crowded left-ly.
By the way, here is the R code I used while playing around with this problem: the code isn't really appropriate if your $p$'s are too small, but it should be easy to plug in different models for $p$ (including spooky-crazy ones) to see what happens to the ultimate distribution of $Y$.
```
set.seed(1)
M <- 5000        # number of simulated realizations of Y
N <- 15000       # number of Bernoulli variables summed in each Y
p <- rbeta(N, shape1 = 1, shape2 = 10)   # one fixed set of p's
Y <- replicate(M, sum(rbinom(N, size = 1, prob = p)))   # M draws of Y = sum of Bernoulli(p_i)
```
Now take a look at the results.
```
hist(Y)
mean(Y)
sum(p)
var(Y)
sum(p*(1 - p))
```
Have fun; I sure did.
| null | CC BY-SA 2.5 | null | 2010-12-11T17:05:23.770 | 2010-12-11T17:05:23.770 | null | null | null | null |
5378 | 2 | null | 5278 | 12 | null | Since we gather certain types of nice graphs in this post, here is another one that I recently found and may help you understand how ANOVA works and how the F statistic is generated. The graphic was created using the [granova](http://rgm2.lab.nig.ac.jp/RGM2/func.php?rd_id=granova:granova.1w) package in R.

| null | CC BY-SA 2.5 | null | 2010-12-11T17:33:56.597 | 2010-12-11T17:33:56.597 | null | null | 339 | null |
5379 | 2 | null | 5360 | 10 | null | In general, there are two problems of feature selection:
- minimal-optimal, where you seek the smallest set of variables that gives you the smallest error
- all-relevant, where you seek all the variables relevant to the problem
The convergence of predictor selection is in the domain of the all-relevant problem, which is hell hard and thus requires much more powerful tools than logistic regression, heavy computation, and very careful treatment.
But it seems you are doing the first problem, so you shouldn't worry about this. I can generally second whuber's answer, but I disagree with the claim that you should drop resampling -- here it won't be a method to stabilize feature selection, but it will nevertheless serve as a simulation for estimating the performance of the coupled feature selection + training, and so will give you insight into the confidence of your accuracy.
| null | CC BY-SA 2.5 | null | 2010-12-11T18:04:50.730 | 2010-12-11T23:40:23.283 | 2010-12-11T23:40:23.283 | null | null | null |
5380 | 2 | null | 5360 | 12 | null | An important question is "why do why do you want a model with as few variables a possible?". If you want to have as few variables as possible to minimize the cost of data collection for the operational use of your model, then the answers given by whuber and mbq are an excellent start.
If predictive performance is what is really important, then you are probably better off not doing any feature selection at all and use regularized logistic regression instead (c.f. ridge regression). In fact if predictive performance was what was of primary importance, I would use bagged regularized logistic regression as a sort of "belt-and-braces" strategy for avoiding over-fitting a small dataset. Millar in his [book](http://rads.stackoverflow.com/amzn/click/1584881712) on subset selection in regression gives pretty much that advice in the appendix, and I have found it to be excellent advice for problems with lots of features and not very many observations.
If understanding the data is important, then there is no need for the model used to understand the data to be the same one used to make predictions. In that case, I would resample the data many times and look at the patterns of selected variables across samples to find which variables are informative (as mbq suggests, if feature selection is unstable, a single sample won't give the full picture), but I would still use the bagged regularized logistic regression model ensemble for predictions.
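A minimal sketch of the ridge-penalized (regularized) logistic regression mentioned above, using the `glmnet` package (one common implementation, not necessarily the setup Millar describes; `x` is a numeric predictor matrix and `y` a binary outcome, both placeholders):
```
library(glmnet)
cvfit <- cv.glmnet(x, y, family = "binomial", alpha = 0)   # alpha = 0 gives the ridge penalty
coef(cvfit, s = "lambda.min")                              # shrunken coefficients
predict(cvfit, newx = x, s = "lambda.min", type = "response")
```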
| null | CC BY-SA 2.5 | null | 2010-12-11T18:34:21.803 | 2010-12-11T18:34:21.803 | null | null | 887 | null |
5381 | 2 | null | 5360 | 6 | null | You might glance at the paper [Stability Selection](http://www.stats.ox.ac.uk/~meinshau/stability.pdf) by Meinshausen and Buhlmann in J R Statist. Soc B (2010) 72 Part 4, and the discussion after it. They consider what happens when you repeatedly divide your set of data points at random into two halves and look for features in each half. By assuming that what you see in a one half is independent of what you see in the matching other half you can prove bounds on the expected number of falsely selected variables.
| null | CC BY-SA 2.5 | null | 2010-12-11T19:10:31.897 | 2010-12-12T11:40:31.173 | 2010-12-12T11:40:31.173 | 930 | 1789 | null |
5382 | 1 | 5644 | null | 2 | 2325 | I'm a new guy here. Hopefully, I'm asking this question to right forum.
## Problem:
We have data of a group of people (P1, P2, P3). They rank their expertise (1-10, where higher number is better) in a list of components (G1, G2, G3).
```
P1 P2 P3
--------------
G1 | 8 4 7
G2 | 7 3 7
G3 | 9 6 5
```
Also, we have some data regarding work done by each person in each component. Example:
For P1,
```
W WD
----------
G1 | 0 0
G2 | 2 0
G3 | 8 2
```
where `W` is total work allotted to user P1, and `WD` is actual work done. W >= WD >= 0.
We have similar data for P2, and P3 users.
Point to note: The user might have some level of expertise regardless of work done in a component. Example: P1 has ranked himself 8/10 even though he has not been given any task in G1 component (W = 0). P1 also has ranked himself 7/10 for G2 even though he has not finished any task in that component (WD = 0).
Now, we want to calculate effective rating of all users relative to the group of users, not self-ranking, considering their work data and self-ranking.
Can anyone suggest some mechanism to achieve this?
Thanks much in advance!
| Normalizing rating in a group of people [finding effectiveness] | CC BY-SA 2.5 | null | 2010-12-11T19:54:14.823 | 2010-12-20T07:25:21.537 | 2010-12-11T20:26:26.090 | 2344 | 2344 | [
"normalization",
"rating"
]
|
5383 | 2 | null | 5372 | 4 | null | Poisson makes a certain amount of sense if "purchase frequency" is e.g. number of items bought per day. Negative binomial would also make sense (probably more sensible, as it will allow for more variation). If you have a huge amount of data, then you could just use the empirical distribution (i.e. across days, what is the empirical frequency of <= this number being bought?)
A couple of things that make sense to consider in this context (don't know how much effort you are going to be putting into this, nor what your statistical background is): (1) it would make sense to fit a mixed model to allow for variation among stores, SKUs, days, etc.; (2) you might want to use more than just zeros -- i.e. if there were really only 2 items there to begin with, then there will only be 2 bought, and you could potentially detect that that outcome was highly unlikely for that particular SKU/store/day combination ... and hence detect the OOS issue before the next day (when there were zero and hence zero bought).
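As a toy illustration of the thresholding idea from the question combined with a count model (the names `daily_sales` and `sold_today` are placeholders, and whether the Poisson is adequate is exactly the question at hand):
```
lambda_hat <- mean(daily_sales)              # crude per-day sales rate for this SKU/store
p_low <- ppois(sold_today, lambda_hat)       # P(selling this few or fewer under the model)
flag_oos <- p_low < 0.01                     # flag a possible out-of-shelf situation
```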
| null | CC BY-SA 2.5 | null | 2010-12-11T20:40:51.063 | 2010-12-11T20:40:51.063 | null | null | 2126 | null |
5384 | 2 | null | 5114 | 2 | null | Since the log-normal distribution has two parameters, you can't satisfactorily fit it to three constraints that don't naturally fit it. With extreme quantiles of 2.5 and 7.5, the mode is ~4, and there is not much you can do about it. Since the scale of the errors for `a` and `b` is much smaller than for `c`, one of them will be pretty much ignored during the optimization.
For a better fit, you can either pick a three-parameter distribution, for example the generalized gamma distribution (implemented in the `VGAM` package), or add a shift parameter to the lognormal (or gamma, ...) distribution.
As a last note, since the distribution you are looking for is clearly not symmetric, the average of the two given observations is not the right value for the mode. I would maximize the sum of the densities at 3.0 and 3.6 while maintaining the extreme quantiles at 2.5 and 7.5 - this is possible if you have three parameters.
| null | CC BY-SA 2.5 | null | 2010-12-11T20:49:36.103 | 2010-12-11T20:49:36.103 | null | null | 279 | null |
5385 | 2 | null | 5364 | 9 | null | Do you know this article: [PLS-regression: a basic tool of chemometrics](https://www.sciencedirect.com/science/article/abs/pii/S0169743901001551) ([PDF](http://libpls.net/publication/PLS_basic_2001.pdf))? Deriving SE and CI for the PLS parameters is described in §3.11.
I generally rely on the bootstrap for computing CIs, as suggested in e.g., Abdi, H. [Partial least squares regression and projection on latent structure regression (PLS Regression)](http://www.utdallas.edu/%7Eherve/abdi-wireCS-PLS2010.pdf). See also the [plspm](https://github.com/gastonstat/plspm) package, and its accompanying textbook: [PLS Path Modeling with R](https://www.gastonsanchez.com/PLS_Path_Modeling_with_R.pdf).
I seem to remember there are theoretical solutions discussed in Tenenhaus M. (1998) La régression PLS: Théorie et pratique (Technip), but I cannot check for now as I don't have the book. For now, there are some useful R packages, like [plsRglm](http://cran.r-project.org/web/packages/plsRglm/index.html).
P.S. I just discovered Nicole Krämer's work, in reference to the [plsdof](http://cran.r-project.org/web/packages/plsdof/index.html) R package.
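For the bootstrap route, a rough R sketch with the `pls` package (my own outline, with `X`, `y`, and the number of components `k` assumed given) is:
```
library(pls)
B <- 1000
boot_beta <- replicate(B, {
  idx <- sample(nrow(X), replace = TRUE)                  # resample rows with replacement
  fit <- plsr(y[idx] ~ X[idx, ], ncomp = k)
  drop(coef(fit, ncomp = k))                              # PLS regression coefficients for this resample
})
apply(boot_beta, 1, quantile, probs = c(0.025, 0.975))    # percentile CIs per coefficient
```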
| null | CC BY-SA 4.0 | null | 2010-12-11T21:07:43.147 | 2020-11-08T19:14:42.627 | 2020-11-08T19:14:42.627 | 930 | 930 | null |
5387 | 1 | 5398 | null | 19 | 12398 | In a logit model, is there a smarter way to determine the effect of an independent ordinal variable than to use dummy variables for each level?
| Logit with ordinal independent variables | CC BY-SA 2.5 | null | 2010-12-11T22:50:26.880 | 2010-12-12T11:26:47.150 | 2010-12-11T23:15:04.053 | 82 | 82 | [
"logistic",
"logit",
"ordinal-data"
]
|
5388 | 2 | null | 5387 | 6 | null | it's perfectly fine to use a categorical predictor in a logit (or OLS) regression model if the levels are ordinal. But if you have a reason to treat each level as discrete (or if in fact your categorical variable is nominal rather than ordinal), then, as alternative to dummy coding, you can also use orthogonal contrast coding. For very complete & accessible discussion, see Judd, C.M., McClelland, G.H. & Ryan, C.S. Data analysis : a model comparison approach, Edn. 2nd. (Routledge/Taylor and Francis, New York, NY; 2008), or just google "contrast coding"
| null | CC BY-SA 2.5 | null | 2010-12-11T23:29:31.963 | 2010-12-11T23:29:31.963 | null | null | 11954 | null |
5389 | 2 | null | 5347 | 9 | null | @onestop provides good references. The Wikipedia article on the Poisson binomial distribution gives a recursive formula for computing the exact probability distribution; it requires $O(n^2)$ effort. Unfortunately, it's an alternating sum, so it will be numerically unstable: it's hopeless to do this computation with floating point arithmetic. Fortunately, when the $p_i$ are small, you only need to compute a small number of probabilities, so the effort is really proportional to $O(n \log(\sum_i{p_i}))$. The precision necessary to carry out the calculation with rational arithmetic (i.e., exactly, so that the numerical instability is not a problem) grows slowly enough that the overall timing may still be approximately $O(n^2)$. That's feasible.
As a test, I created an array of probabilities $p_i = 1/(i+1)$ for various values of $n$ up to $n = 2^{16}$, which is the size of this problem. For small values of $n$ (up to $n = 2^{12}$) the timing for the exact calculation of probabilities was in seconds and scaled quadratically, so I ventured a calculation for $n = 2^{16}$ out to three SDs above the mean (probabilities for 0, 1, ..., 22 successes). It took 80 minutes (with Mathematica 8), in line with the predicted time. (The resulting probabilities are fractions whose numerators and denominators have about 75,000 digits apiece!) This shows the calculation can be done.
An alternative is to run a long simulation (a million trials ought to do). It only has to be done once, because the $p_i$ don't change.
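A different (and numerically stable, since it involves no cancellation) way to get the exact distribution in floating point is direct iterated convolution, also $O(n^2)$; here is a small R sketch of that idea, distinct from the alternating-sum recursion discussed above:
```
pb_pmf <- function(p) {
  f <- 1                                        # pmf of a sum of zero Bernoullis
  for (pi in p) {
    f <- c(f * (1 - pi), 0) + c(0, f * pi)      # convolve current pmf with Bernoulli(pi)
  }
  f                                             # f[k + 1] = P(Y = k)
}
p <- c(0.1, 0.2, 0.3)                           # tiny example
sum(pb_pmf(p) * (0:length(p)))                  # mean equals sum(p) = 0.6, as a sanity check
```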
| null | CC BY-SA 2.5 | null | 2010-12-11T23:50:58.420 | 2010-12-11T23:50:58.420 | null | null | 919 | null |
5390 | 2 | null | 1870 | 1 | null | In youtube there are a lot of videos related with SPSS, I usually learn a gui with them. I recommend you record videos to teach how to apply methods and encourage them to use the "paste" button to learn a little of syntax
| null | CC BY-SA 2.5 | null | 2010-12-12T01:27:00.017 | 2010-12-12T01:39:25.333 | 2010-12-12T01:39:25.333 | null | null | null |
5391 | 2 | null | 1870 | 1 | null | Your method will certainly work for most introductory techniques you'd need to do. I am quite familiar with R, but I was required to learn Minitab to teach to non-statistics people. In one afternoon, I had a basic enough understanding of Minitab to explain how to use it in the limited framework of an introductory stats class. I figure SPSS would behave similarly.
I would suggest reviewing the course syllabus and trying to complete all techniques outlined in the syllabus. A large majority of your students' questions will pertain to doing tasks the teacher has asked them to do, and a smaller percentage of questions will be related to doing tasks in their other statistics endeavors.
You will be able to answer a large majority of student questions by knowing how to use SPSS to solve a class-related problem, so just playing around within SPSS will benefit you.
| null | CC BY-SA 2.5 | null | 2010-12-12T02:13:30.870 | 2010-12-12T02:13:30.870 | null | null | 1118 | null |
5392 | 1 | 5407 | null | 14 | 15148 | Let:
Standard deviation of random variable $A =\sigma_{1}=5$
Standard deviation of random variable $B=\sigma_{2}=4$
Then the variance of $w_{1}A+w_{2}B$ is:
$Var(w_{1}A+w_{2}B)= w_{1}^{2}\sigma_{1}^{2}+w_{2}^{2}\sigma_{2}^{2} +2w_{1}w_{2}p_{1,2}\sigma_{1}\sigma_{2}$
Where:
$p_{1,2}$ is the correlation between the two random variables.
$w_{1}$ is the weight of random variable A
$w_{2}$ is the weight of random variable B
$w_{1}+w_{2}=1$
The figure below plots the variance of $w_{1}A+w_{2}B$ as the weight of A changes from 0 to 1, for correlations of -1 (yellow), 0 (blue) and 1 (red).

How did the formula result in a straight line (red) when the correlation is 1? As far as I can tell, when $p_{1,2}=1$, the formula simplifies to:
$Var(w_{1}A+w_{2}B)= w_{1}^{2}\sigma_{1}^{2}+w_{2}^{2}\sigma_{2}^{2} +2w_{1}w_{2}\sigma_{1}\sigma_{2}$
How can I express that in the form of $y=mx+c$?
Thank you.
| Variance of two weighted random variables | CC BY-SA 2.5 | null | 2010-12-12T03:21:44.353 | 2010-12-12T21:40:58.743 | 2010-12-12T10:40:35.683 | 1636 | 1636 | [
"random-variable"
]
|
5393 | 2 | null | 5347 | 3 | null | I think other answers are great, but I didn't see any Bayesian ways of estimating your probability. The answer doesn't have an explicit form, but the probability can be simulated using R.
Here is the attempt:
$$ X_i | p_i \sim Ber(p_i)$$
$$ p_i \sim Beta(\alpha, \beta) $$
Using [wikipedia](http://en.wikipedia.org/wiki/Beta_distribution) we can get estimates of $\hat{\alpha}$ and $\hat{\beta}$ (see parameter estimation section).
Now you can generate draws: at the $i^{th}$ step, generate $p_i$ from $Beta(\hat{\alpha},\hat{\beta})$ and then generate $X_i$ from $Ber(p_i)$. After you have done this $N$ times you can get $Y = \sum X_i$. This is a single cycle for generating $Y$; do this a large number $M$ of times, and the histogram of the $M$ $Y$s will be an estimate of the density of $Y$.
$$Prob[Y \leq y] = \frac {\#Y \leq y} {M}$$
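A compact R sketch of this simulation (with `alpha.hat`, `beta.hat` coming from the method-of-moments formulas and `y0` a threshold of interest, all placeholders):
```
M <- 10000; N <- 1000
Y <- replicate(M, {
  p <- rbeta(N, alpha.hat, beta.hat)     # draw the p_i
  sum(rbinom(N, size = 1, prob = p))     # draw the X_i and sum them
})
mean(Y <= y0)                            # Monte Carlo estimate of Prob[Y <= y0]
hist(Y)                                  # estimate of the density of Y
```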
This analysis is valid only when $p_i$ are not fixed. This is not the case here. But I will leave it here, in case someone has a similar question.
| null | CC BY-SA 2.5 | null | 2010-12-12T03:38:17.960 | 2010-12-13T00:00:30.870 | 2010-12-13T00:00:30.870 | 1307 | 1307 | null |
5395 | 2 | null | 5382 | 3 | null | It sounds like you have multiple sources of information about measurements of interest.
The aim is to combine the multiple sources so as to maximise measurement accuracy.
In general you could adopt a formula that weights the different sources of information based on the confidence you ascribe to the source.
It also seems that the confidence in the source might vary.
E.g., the more work data you have on someone, perhaps the more weight this information would be given for that individual.
You may also have evidence on the accuracy of an individual's self ratings.
Thus, a simple strategy would be to develop a weighted composite of the available information, where the size of each weight is relative to the informativeness of its source.
This question on [Bayesian rating systems](https://stats.stackexchange.com/questions/1848/bayesian-rating-system-with-multiple-categories-for-each-rating) might also be relevant.
| null | CC BY-SA 2.5 | null | 2010-12-12T08:57:36.583 | 2010-12-12T08:57:36.583 | 2017-04-13T12:44:33.310 | -1 | 183 | null |
5396 | 1 | null | null | 1 | 1749 |
- Does anyone know how to create a nice-looking summary table in JMP for several variables? I use Tables>Summary and the table I get has one row that includes, for example, mean(var1) mean(var2) mean(var3) median(var1) median(var2) median(var3) and so on. I would like each variable to have a row of its own, with columns representing the statistics.
- Does anyone know how to create a frequency table when creating a histogram in JMP? I get a histogram by using Analyze>Distributions but I don't see any option of getting the frequency table.
Thank you,
Naama Wolf
| Summary and frequency tables in JMP | CC BY-SA 2.5 | null | 2010-12-12T10:18:17.687 | 2011-04-20T23:57:53.813 | 2010-12-12T13:02:56.367 | null | null | [
"data-visualization",
"jmp"
]
|
5397 | 2 | null | 5363 | 6 | null | I haven't dealt with Neural Networks for some years now, but I think you will find everything you need here:
[Neural Networks - A Systematic Introduction, Chapter 7: The backpropagation algorithm](http://page.mi.fu-berlin.de/rojas/neural/chapter/K7.pdf)
I apologize for not writing the direct answer here, but since I have to look up the details to remember (like you) and given that the answer without some backup may be even useless, I hope this is ok. However, if any questions remain, drop a comment and I'll see what I can do.
| null | CC BY-SA 2.5 | null | 2010-12-12T11:13:06.010 | 2010-12-12T11:13:06.010 | null | null | 264 | null |
5398 | 2 | null | 5387 | 17 | null | To add to @dmk38's response, "any set of scores gives a valid test, provided they are constructed without consulting the results of the experiment. If the set of scores is poor, in that it badly distorts a numerical scale that really does underlie the ordered classification, the test will not be sensitive. The scores should therefore embody the best insight available about the way in which the classification was constructed and used." (Cochran, 1954, cited by Agresti, 2002, pp. 88-89). In other words, treating an ordered factor as a numerically scored variable is merely a modelling issue. Provided it makes sense, this will only impact the way you interpret the result, and there is no definitive rule of thumb on how to choose the best representation for an ordinal variable.
Consider the following example on maternal Alcohol consumption and presence or absence of congenital malformation (Agresti, [Categorical Data Analysis](http://www.stat.ufl.edu/~aa/cda/cda.html), Table 3.7 p.89):
```
0 <1 1-2 3-5 6+
Absent 17066 14464 788 126 37
Present 48 38 5 1 1
```
In this particular case, we can model the outcome using logistic regression or simple association table. Let's do it in R:
```
tab3.7 <- matrix(c(17066,48,14464,38,788,5,126,1,37,1), nr=2,
dimnames=list(c("Absent","Present"),
c("0","<1","1-2","3-5","6+")))
library(vcd)
assocstats(tab3.7)
```
The usual $\chi^2$ (12.08, p=0.016751) and LR (6.20, p=0.184562) statistics (each with 4 df) do not account for the ordered levels of Alcohol consumption.
Treating both variables as ordinal with equally spaced scores (this has no impact for binary variables, like malformation, and we choose the baseline as 0=absent), we could test for a linear-by-linear association. Let's first construct an exploded version of this contingency table:
```
library(reshape)
tab3.7.df <- untable(data.frame(malform=gl(2,1,10,labels=0:1),
alcohol=gl(5,2,10,labels=colnames(tab3.7))),
c(tab3.7))
# xtabs(~malform+alcohol, tab3.7.df) # check
```
Then we can test for a linear association using
```
library(coin)
#lbl_test(as.table(tab3.7))
lbl_test(malform ~ alcohol, data=tab3.7.df)
```
which yields $\chi^2(1)=1.83$ with $p=0.1764$. Note that this statistic is simply the correlation between the two series of scores (that Agresti called $M^2=(n-1)r^2$), which is readily computed as
```
cor(sapply(tab3.7.df, as.numeric))[1,2]^2*(32574-1)
```
As can be seen, there is not much evidence of a clear association between the two variables. As done by Agresti, if we choose to recode Alcohol levels as {0,0.5,1.5,4,7}, that is, using mid-range values for a hypothesized continuous scale with the last score being somewhat arbitrary, then we would conclude that there is a larger effect of maternal Alcohol consumption on the development of congenital malformation:
```
lbl_test(malform ~ alcohol, data=tab3.7.df,
scores=list(alcohol=c(0,0.5,1.5,4,7)))
```
yields a test statistic of 6.57 with an associated p-value of 0.01037.
There are alternative coding schemes, including midranks (in which case, we fall back to Spearman $\rho$ instead of Pearson $r$), that are discussed by Agresti, but I hope you catch the general idea here: It is best to select scores that actually reflect a reasonable measure of the distance between adjacent categories of your ordinal variable, and equal spacing is often a good compromise (in the absence of theoretical justification).
Using the GLM approach, we would proceed as follows. But first check how Alcohol is encoded in R:
```
class(tab3.7.df$alcohol)
```
It is a simple unordered factor (`"factor"`), hence a nominal predictor. Now, here are three models where we consider Alcohol as a nominal, ordinal, or continuous predictor.
```
summary(mod1 <- glm(malform ~ alcohol, data=tab3.7.df,
family=binomial))
summary(mod2 <- glm(malform ~ ordered(alcohol), data=tab3.7.df,
family=binomial))
summary(mod3 <- glm(malform ~ as.numeric(alcohol), data=tab3.7.df,
family=binomial))
```
The last case implicitly assumes an equal-interval scale, and $\hat\beta$ is interpreted as @dmk38 did: it reflects the effect of a one-unit increase in Alcohol on the outcome through the logit link; that is, the odds of observing a malformation (compared to no malformation) are multiplied by $\exp(\hat\beta)=\exp(0.228)=1.256$ for each one-unit increase. The Wald test is not significant at the usual 5% level. In this case, the design matrix only includes 2 columns: the first is a constant column of 1's for the intercept, the second is the numerical value (1 to 5) for the predictor, as in a simple linear regression. In sum, this model tests for a linear effect of Alcohol on the outcome (on the logit scale).
However, in the two other cases (`mod1` and `mod2`), we get different output because the design matrix used to model the predictor differs, as can be checked by using:
```
model.matrix(mod1)
model.matrix(mod2)
```
We can see that the associated design matrix for `mod1` includes dummy variables for the $k-1$ levels of Alcohol (0 is always the baseline) after the intercept term in the first column, whereas in the case of `mod2` we have four columns of contrast-coded effects (after the column of 1's for the intercept). The coefficient for the category "3-5" is estimated at 1.03736 under `mod1`, but 0.01633 under `mod2`. Note that AIC and other likelihood-based measures remain, however, identical between these two models.
You can try assigning new scores to Alcohol and see how it will impact the predicted probability of a malformation.
| null | CC BY-SA 2.5 | null | 2010-12-12T11:21:34.503 | 2010-12-12T11:26:47.150 | 2010-12-12T11:26:47.150 | 930 | 930 | null |
5399 | 1 | null | null | 11 | 16840 | Say we have two Gaussian random vectors $p(x_1) = N(0,\Sigma_1), p(x_2) = N(0,\Sigma_2)$, is there a well known result for the expectation of their product $E[x_1x_2^T]$ without assuming independence?
| Expectation of product of Gaussian random variables | CC BY-SA 2.5 | null | 2010-12-12T12:11:28.810 | 2010-12-21T05:30:02.967 | 2010-12-12T15:04:13.580 | null | null | [
"random-variable",
"normal-distribution",
"expected-value"
]
|
5400 | 2 | null | 5321 | 4 | null | (This was supposed to be a comment, but it got too long)
I'd say
```
L = data'*data;
[Ev, Vals] = eig(L);
[Ev,Vals] = sort(Ev,Vals);
```
is pretty dodgy; it's a common notion in numerical linear algebra that forming matrices like $\mathbf A^T\mathbf A$ worsens the so-called "conditioning" of your problem, in that merely forming the matrix can cause the loss of significant figures, and any further operations with it, like trying to obtain the required eigenpairs, will result in a complete mess.
One possible way to show how "squaring" ruins things is to consider the so-called Lauchli matrix, returned as an $(n+1)\times n$ matrix by the MATLAB function `gallery('lauchli',n)`. If you attempt to square it and give the result to an eigenroutine like `eig()`... disaster! And that's just for a small matrix; the larger your matrix, the more likely its conditioning will be rather bad.
The best way of doing things, then, is to use the singular value decomposition. I discussed SVD and the Lauchli example in [this m.SE answer](https://math.stackexchange.com/questions/3869/3871#3871), but for your purposes, here's what you might try replacing that snippet I quoted with (following shabbychef's answer):
```
[U, Vals, Ev] = svd(data, 'econ');   % thin SVD: data = U*Vals*Ev'
Vals = diag(Vals);                   % extract the singular values
Vals = Vals.^2;                      % eigenvalues of data'*data are the squared singular values
[Vals, sidx] = sort(Vals);           % ascending order, to mimic eig()
Ev = Ev(:,sidx);                     % reorder the eigenvectors accordingly
```
On the other hand, `svd()` returns the singular values in decreasing order, so the sorting might no longer be required...
| null | CC BY-SA 2.5 | null | 2010-12-12T12:52:20.237 | 2010-12-12T12:52:20.237 | 2017-04-13T12:19:38.800 | -1 | 830 | null |
5401 | 2 | null | 5382 | 1 | null | I'm going to give a stab at an answer, if I misunderstand your question, please comment here or revise your question and I'll do my best to revise my answer.
If I understand you correctly, you are trying to find a user's 'true level of expertise'. An elusive quantity, to be sure. Given what you have written, I think you expect that true expertise has some relationship with the actual amount of work done.
There are some problems meeting the assumptions of the technique I am going to suggest first, but I hope the violations will be tolerable. A first step could be setting up a regression equation to predict work completed given work assigned for each component (G1, G2, G3, etc.). Residuals from this regression equation are deviations from the normal amount of productivity in that component. Deviations from normal productivity (if you assume those who are more expert get a greater proportion of their work done) may be viewed as a measure of 'true expertise'. It also puts each person's productivity in comparison with others' (a stated goal). Of course, I strongly suggest you view a scatter-plot of these data (and compare model fits), as I suspect that the relationship will not be linear and that you'll have to fit a quadratic.
I'm not sure what you want to do with the self-ratings. Of course, you should eye-ball the self ratings, if someone is giving themselves all high scores, that participant may need to be dropped. Another thing you might consider doing prior to analysis is subtracting each subject's mean rating of expertise from each of their ratings of expertise. Though in this approach you'll lose information about people who are overall more skilled than others. To validate that the self-ratings are meaningful you want to correlate them with work completed. They should be somewhat positive given your assumption that those who complete more work on a given component tend to be (but are not necessarily) more expert. If the self-ratings are shown to be a somewhat valid measure, then you can move on to correlating the self ratings with the above mentioned residual score. Again, the correlation ought to be somewhat positive.
Edit 1: You say you want to calculate the "effective rating" of all users relative to the group of users, not self-ranking, considering their work data and self-ranking. I'm not sure how you can both consider self-ranking and not consider it at the same time. If you mean that you want to create a measure of skill based on something other than self-rankings, but then combine it with the self-ranking data, you can simply sum the Z-score of some measure of skill (e.g. work done or the method I describe above) and the Z-score of self ratings.
| null | CC BY-SA 2.5 | null | 2010-12-12T13:51:48.310 | 2010-12-12T14:00:57.243 | 2010-12-12T14:00:57.243 | 196 | 196 | null |
5402 | 1 | 5410 | null | 5 | 579 | I have heard that many problems in Mathematical Statistics can be stated and solved in terms of Symplectic Geometry. Unfortunately this was a pretty vague statement and I am interested in something more concrete. Also there seem to be some books written form this perspective, but I couldn't find any.
| References for use of symplectic geometry in statistics? | CC BY-SA 2.5 | null | 2010-12-12T14:10:55.290 | 2020-08-12T11:56:11.283 | 2019-04-29T07:40:24.520 | 11887 | 2354 | [
"mathematical-statistics",
"references",
"information-geometry"
]
|
5403 | 2 | null | 5392 | 11 | null | It isn't linear. The formula says it isn't linear. Trust your mathematical instinct!
It only appears linear in the graph because of the scale, with $\sigma_{1}=5$ and $\sigma_{2}=4$. Try it yourself: calculate the slopes at a few places and you will see that they differ. You can exaggerate the difference by picking $\sigma_{1}=37$, say.
Here is some R code:
```
a <- 5; b <- 4; p <- 1
f <- function(w) w^2*a^2 + (1-w)^2*b^2 + 2*w*(1-w)*p*a*b
curve(f, from = 0, to = 1)
```
If you would like to check some slopes:
```
(f(0.5) - f(0.4)) / 0.1
(f(0.8) - f(0.7)) / 0.1
```
| null | CC BY-SA 2.5 | null | 2010-12-12T15:24:15.363 | 2010-12-12T15:29:50.743 | 2010-12-12T15:29:50.743 | null | null | null |
5404 | 2 | null | 5343 | 2 | null | I can tell you what I would do as a machine learner ;):
- Creating two models $M_1,M_2$ for x using $y_1,y_2$ respectively
- A prediction for x is calculated as the average of the predictions of $M_1$ and $M_2$
A "model" can be either a gaussian distribution (as described, if you know that x a) has a gaussian distribution and b) know that the function x to y is nearly the identity) or anything else, e.g. a simple linear regression model (if you are not sure whether there are additional factors).
BUT: If x indeed has a bimodal distribution formed by the two Gaussians y, then the suggested approach would not make any sense. In this case I'd try a more generic approach, i.e. EM, to see whether one or two Gaussians are more appropriate.
It is quite hard to give a general answer to this question, since it is not exactly clear what determines whether y1 or y2 can be observed. It could be that for both y, x is missing completely at random (in this case y1 and y2 are just different random samples), but on the other hand it could be that for a certain fraction of x only y1 can be observed, but not y2, and vice versa for another fraction of x.
| null | CC BY-SA 2.5 | null | 2010-12-12T15:52:47.647 | 2010-12-12T19:04:16.523 | 2010-12-12T19:04:16.523 | 264 | 264 | null |
5405 | 2 | null | 5327 | 5 | null | To echo Aniko's comment: The primary assumption is the existence of truncation. This is not the same assumption as the two other possibilities that your post suggests to me: boundedness and sample selection.
If you have a fundamentally bounded dependent variable rather than a truncated one you might want to move to a generalized linear model framework with one of the (less often chosen) distributions for Y e.g. log-normal, gamma, exponential, etc. which respect that lower bound.
Alternatively you might then ask yourself whether you think that the process that generates the zero observations in your model is the same as the one that generates the strictly positive values - prices in your application, I think. If this is not the case then something from the class of sample selection models, (e.g. Heckman models) might be appropriate. In that case you'd be in the situation of specifying one model of being willing to pay any price at all, and another model of what price your subjects would pay if they wanted to pay something.
In short, you probably want to review the difference between assuming truncated, censored, bounded, and sample selected dependent variables. Which one you want will come from the details of your application. Once that first most important assumption is made you can more easily determine whether you like the specific assumptions of any model in your chosen class. Some of the sample selection models have assumptions that are rather difficult to check...
| null | CC BY-SA 2.5 | null | 2010-12-12T16:10:31.023 | 2010-12-12T16:10:31.023 | null | null | 1739 | null |
5407 | 2 | null | 5392 | 14 | null | Using $w_1 + w_2 = 1$, compute
$$\eqalign{
\text{Var}(w_1 A + w_2 B) &= \left( w_1 \sigma_1 + w_2 \sigma_2 \right)^2 \cr
&= \left( w_1(\sigma_1 - \sigma_2) + \sigma_2 \right)^2 \text{.}
} $$
This shows that when $\sigma_1 \ne \sigma_2$, the graph of the variance versus $w_1$ (shown sideways in the illustration) is a parabola centered at $\sigma_2 / (\sigma_2 - \sigma_1)$. No portion of any parabola is linear. With $\sigma_1 = 5$ and $\sigma_2 = 4$, the center is at $-4$: way below the graph at the scale in which it is drawn. Thus, you are looking at a small piece of a parabola, which will appear linear.
When $\sigma_1 = \sigma_2$, the variance is a linear function of $w_1$. In this case the plot would be a perfectly vertical line segment.
BTW, you knew this answer already, without calculation, because basic principles imply the plot of variance cannot be a line unless it is vertical. After all, there is no mathematical or statistical prohibition to restrict $w_1$ to lie between $0$ and $1$: any value of $w_1$ determines a new random variable (a linear combination of the random variables A and B) and therefore must have a non-negative value for its variance. Therefore all these curves (even when extended to the full vertical range of $w_1$ ) must lie to the right of the vertical axis. That precludes all lines except vertical ones.
Plot of the variance for $\rho = 1 - 2^{-k}, k = -1, 0, 1, \ldots, 10$:

| null | CC BY-SA 2.5 | null | 2010-12-12T17:13:53.450 | 2010-12-12T21:40:58.743 | 2010-12-12T21:40:58.743 | 919 | 919 | null |
5408 | 2 | null | 4360 | 11 | null | A subjectivist Bayesian argument is practically the only way (from a statistical standpoint) you could go about understanding your intuition, which is--properly speaking--the subject of a psychological investigation, not a statistical one. However, it is patently unfair--and therefore invalid--to use a Bayesian approach to argue that an investigator faked the data. The logic of this is perfectly circular: it comes down to saying "based on my prior beliefs about the outcome, I find your result incredible, and therefore you must have cheated." Such an illogical self-serving argument obviously wouldn't stand up in a courtroom or in a peer review process.
Instead, we could take a tip from [Ronald Fisher's critique of Mendel's experiments](http://www.cs.brown.edu/courses/csci1950-l/presentations/REVISED--AFCB%202007%20Course%20Fisher%20and%20Mendel.ppt) and conduct a formal hypothesis test. Of course it's invalid to test a post hoc hypothesis based on the outcome. But experiments have to be replicated to be believed: that's a tenet of the scientific method. So, having seen one result we think might have been faked, we can formulate an appropriate hypothesis to test future (or additional) results. In this case the critical region would comprise a set of results extremely close to the expectation. For instance, a test at the $\alpha$ = 5% level would view any result between 9,996 and 10,004 as suspect, because (a) this collection is close to our hypothesized "faked" results and (b) under the null hypothesis of no faking (innocent until proven guilty in court!), a result in this range has only a 5% (actually 5.07426%) chance of occurring. Furthermore, we can put this seemingly ad hoc approach in a chi-square context (a la Fisher) simply by squaring the deviation between the observed proportion and the expected proportion, then invoking the [Neyman-Pearson lemma](http://en.wikipedia.org/wiki/Neyman%E2%80%93Pearson_lemma) in a one-tailed test at the low tail and applying the [Normal approximation to the Binomial distribution](http://en.wikipedia.org/wiki/Binomial_distribution#Normal_approximation).
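As a quick check of the 5% figure (my own sketch; it assumes the underlying experiment consists of 20,000 fair Bernoulli trials with expectation 10,000):
```
n <- 20000
p <- 0.5
## Probability under H0 (no faking) of landing suspiciously close to the
## expectation, i.e. between 9,996 and 10,004 inclusive
pbinom(10004, n, p) - pbinom(9995, n, p)   # approximately 0.0507
```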
Although such a test cannot prove fakery, it can be applied to future reports from that experimenter to assess the credibility of their claims, without making untoward and unsupportable assumptions based on your intuition alone. This is much more fair and rigorous than invoking a Bayesian argument to implicate someone who might be perfectly innocent and just happened to be so unlucky that they got a beautiful experimental result!
| null | CC BY-SA 2.5 | null | 2010-12-12T17:50:29.487 | 2010-12-12T17:50:29.487 | null | null | 919 | null |
5409 | 2 | null | 5402 | 6 | null | A direct connection would be unexpected: the two fields appear to have little in common. For example, [a modern introduction to symplectic geometry published by the American Mathematical Society](http://loveguests.com/books/science/mathematics-statistics/3103-an-introduction-to-symplectic-geometry.html) appears to make no mention of mathematical statistics at all.
At best it seems any connection would come through mathematical physics. A symplectic geometry on phase space naturally arises in the [Hamiltonian formulation of classical mechanics](http://en.wikipedia.org/wiki/Hamiltonian_mechanics#Geometry_of_Hamiltonian_systems) and that in turn can be used to explore global properties of physical systems. The study of periodic and near-periodic orbits becomes somewhat statistical (e.g., [ergodic theorems](http://en.wikipedia.org/wiki/Ergodic_theory)). When applied to a system with many degrees of freedom it would conceivably relate some aspects of symplectic geometry to thermodynamics, which is inherently a statistical theory.
| null | CC BY-SA 2.5 | null | 2010-12-12T18:01:19.650 | 2010-12-12T18:01:19.650 | null | null | 919 | null |
5410 | 2 | null | 5402 | 5 | null | I know nothing whatsoever about symplectic geometry, but a bit of googling brought up a [1997 article in the Journal of Statistical Planning & Inference by Barndorff-Nielsen & Jupp](http://dx.doi.org/10.1016/S0378-3758%2897%2900006-2), which contains this quote:
>
Some other links between statistics and symplectic geometry have been discussed
by Friedrich and Nakamura. Friedrich (1991) established some connections between
expected (Fisher) information and symplectic structures. However, his approach and
results are quite different from those considered here. Nakamura (1993, 1994) has
shown that certain parametric statistical models in which the parameter space M is an
even-dimensional vector space (and so has the symplectic structure of the cotangent
space of a vector space) give rise to completely integrable Hamiltonian systems on M.
The cited refs are:
- Friedrich, T., 1991. Die Fisher-Information und symplektische Strukturen. Math. Nachr. 153: 273-296.
- Nakamura, Y., 1993. Completely integrable gradient systems on the manifolds of Gaussian and multinomial
distributions. Japan. J. Ind. Appl. Math. 10: 179-189.
- Nakamura, Y., 1994. Gradient systems associated with probability distributions. Japan. J. Ind. Appl. Math. 11: 21-30.
The article's Introduction says B-N & others have used differential geometry as an approach to statistical asymptotics. Symplectic geometry is a branch of differential geometry ([according to Wikipedia](http://en.wikipedia.org/wiki/Symplectic_geometry)). A [Google Books search](http://www.google.co.uk/search?tbs=bks%3A1&q=%22differential+geometry%22+statistics+OR+statistical+-mechanics+-thermodynamics) finds several books about the application of differential geometry to statistics and related fields such as econometrics.
| null | CC BY-SA 2.5 | null | 2010-12-12T18:33:17.970 | 2010-12-12T18:33:17.970 | null | null | 449 | null |
5411 | 2 | null | 5114 | 5 | null | If, given an answer to my comment above, you wish to bound the range of the distribution, why not simply fit a Beta distribution after rescaling to the unit interval? In other words, if you know that the parameter of interest should fall between $[2 , 8]$, then why not define $Y = \frac{X - 5}{6} + \frac{1}{2} = \frac{X - 2}{6}$, where I've first centered the interval on zero, divided by the width so that Y will have a range of 1, and then added $\frac{1}{2}$ back so that the range of Y is $[0,1]$. (You can think of it either way: directly from $[2,8] \rightarrow [0,1]$ or from $[2,8] \rightarrow [-\frac{1}{2},\frac{1}{2}] \rightarrow [0,1]$, but I thought the latter might be easier at first.)
Then, with two data points, you could fit a beta posterior with a uniform beta prior?
| null | CC BY-SA 2.5 | null | 2010-12-12T18:38:47.193 | 2010-12-12T18:44:14.977 | 2010-12-12T18:44:14.977 | 1499 | 1499 | null |
5412 | 2 | null | 5304 | 35 | null | Following up: profile confidence intervals are more reliable (choosing the appropriate cutoff for the likelihood does involve an asymptotic (large sample) assumption, but this is a much weaker assumption than the quadratic-likelihood-surface assumption underlying the Wald confidence intervals). As far as I know, there is no argument for the Wald statistics over the profile confidence intervals except that the Wald statistics are much quicker to compute and may be "good enough" in many circumstances (but sometimes way off: look up the Hauck-Donner effect).
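A small illustration in R (my own, with simulated data): `confint()` profiles the likelihood, while `confint.default()` returns the Wald intervals.
```
set.seed(1)
x <- rnorm(100)
y <- rbinom(100, 1, plogis(0.5 + 1.2 * x))
fit <- glm(y ~ x, family = binomial)

confint(fit)          # profile likelihood confidence intervals
confint.default(fit)  # Wald confidence intervals
```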
| null | CC BY-SA 2.5 | null | 2010-12-12T19:47:32.683 | 2010-12-12T19:47:32.683 | null | null | 2126 | null |
5414 | 1 | 5417 | null | 19 | 1808 | What are some of the main pitfalls of using linear mixed-effects models? What are the most important things to test/watch out for in assessing the appropriateness of your model? When comparing models of the same dataset, what are the most important things to look for?
| Pitfalls of linear mixed models | CC BY-SA 2.5 | null | 2010-12-12T21:53:35.743 | 2010-12-13T08:21:53.467 | null | null | 52 | [
"mixed-model",
"model-comparison"
]
|
5415 | 2 | null | 5399 | 9 | null | Yes, there is a well-known result. Based on your edit, we can focus first on individual entries of the array $E[x_1 x_2^T]$. Such an entry is the product of two variables of zero mean and finite variances, say $\sigma_1^2$ and $\sigma_2^2$. The [Cauchy-Schwarz Inequality](http://en.wikipedia.org/wiki/Cauchy%E2%80%93Schwarz_inequality#Probability_theory) implies the absolute value of the expectation of the product cannot exceed $|\sigma_1 \sigma_2|$. In fact, every value in the interval $[-|\sigma_1 \sigma_2|, |\sigma_1 \sigma_2|]$ is possible because it arises for some binormal distribution. Therefore, the $i,j$ entry of $E[x_1 x_2^T]$ must be less than or equal to $\sqrt{\Sigma_{1_{i,i}} \Sigma_{2_{j,j}}}$ in absolute value.
If we now assume all variables are normal and that $(x_1; x_2)$ is multinormal, there will be further restrictions because the covariance matrix of $(x_1; x_2)$ must be positive semidefinite. Rather than belabor the point, I will illustrate. Suppose $x_1$ has two components $x$ and $y$ and that $x_2$ has one component $z$. Let $x$ and $y$ have unit variance and correlation $\rho$ (thus specifying $\Sigma_1$) and suppose $z$ has unit variance ($\Sigma_2$). Let the expectation of $x z$ be $\alpha$ and that of $y z$ be $\beta$. We have established that $|\alpha| \le 1$ and $|\beta| \le 1$. However, not all combinations are possible: at a minimum, the determinant of the covariance matrix of $(x_1; x_2)$ cannot be negative. This imposes the non-trivial condition
$$1-\alpha ^2-\beta ^2+2 \alpha \beta \rho -\rho ^2 \ge 0.$$
For any $-1 \lt \rho \lt 1$ this is an ellipse (along with its interior) inscribed within the $\alpha, \beta$ square $[-1, 1] \times [-1, 1]$.
To obtain further restrictions, additional assumptions about the variables are necessary.
Plot of the permissible region $(\rho, \alpha, \beta)$

| null | CC BY-SA 2.5 | null | 2010-12-12T22:41:14.540 | 2010-12-12T22:41:14.540 | null | null | 919 | null |
5417 | 2 | null | 5414 | 17 | null | This is a good question.
Here are some common pitfalls:
- Using standard likelihood theory, we may derive a test to compare two nested
hypotheses, $H_0$ and $H_1$, by computing the likelihood ratio test statistic. The null distribution of this test statistic is approximately chi-squared with degrees of freedom equal to the difference in the dimensions of the two parameter spaces. Unfortunately, this test is only approximate and requires several assumptions. One crucial assumption is that the parameters under the null are not on the boundary of the parameter space. Since we are often interested in testing hypotheses about the random effects that take the form:
$$H_0: \sigma^2=0$$
This is a real concern. The way to get around this problem is using REML. But still, the p-values will tend to be larger than they should be. This means that if you observe a significant effect using the χ2 approximation, you can be fairly confident that it is actually significant. Small, but not significant, p-values might spur one to use more accurate, but time-consuming, bootstrap methods.
- Comparing fixed effects: If you plan to use the likelihood ratio test to compare two nested models that differ only in their fixed effects, you cannot use the REML estimation method. The reason is that REML estimates the random effects by considering linear combinations of the data that remove the fixed effects. If these fixed effects are changed, the likelihoods of the two models will not be directly comparable (a short sketch of refitting with ML appears after this list).
- P-values: The p-values generated by the likelihood ratio test for fixed effects are approximate and unfortunately tend to be too small, thereby sometimes overstating the importance of some effects. We may use nonparametric bootstrap methods to find more accurate p-values for the likelihood ratio test.
- There are other concerns about p-values for the fixed effects test which are highlighted by Dr. Doug Bates [here].
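As a minimal sketch of point 2 above (hypothetical data and model, assuming the lme4 package): refit both models with maximum likelihood before comparing their fixed effects.
```
library(lme4)

m0 <- lmer(y ~ x1 + (1 | group), data = d, REML = FALSE)
m1 <- lmer(y ~ x1 + x2 + (1 | group), data = d, REML = FALSE)

anova(m0, m1)  # likelihood ratio test for the fixed effect of x2
```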
I am sure other members of the forum will have better answers.
Source: Extending Linear Models with R -- Dr. Julian Faraway.
| null | CC BY-SA 2.5 | null | 2010-12-13T00:15:05.537 | 2010-12-13T00:15:05.537 | null | null | 1307 | null |
5418 | 1 | null | null | 48 | 2384 | I'm planning to start writing R packages.
I thought it would be good to study the source code of existing packages
in order to learn the conventions of package construction.
My criteria for good packages to study:
- Simple statistical/technical ideas: The point is to learn about the mechanics of package construction. Understanding the package should not require detailed highly domain specific knowledge about the actual topic of the package.
- Simple and conventional coding style: I'm looking for something a bit more than Hello World but not a whole lot more. Idiosyncratic tricks and hacks would be distracting when first learning R packages.
- Good coding style: The code is well written. It reveals both an understanding of good coding, in general, and an awareness of the conventions of coding in R.
Questions:
- Which packages would be good to study?
- Why would the suggested package source code be good to study
relative either to the criteria mentioned above or any other criteria that might be relevant?
Update (13/12/2010)
Following Dirk's comments I wanted to make it clear that no doubt many packages would be good to study first. I also agree that packages will provide models for different things (e.g., vignettes, S3 classes, S4 classes, unit testing, Roxygen, etc.).
Nonetheless, it would be interesting to read concrete suggestions about good packages to start with, and the reasons why they would make good starting points.
I've also updated the question above to refer to "packages" rather than "package".
| First R packages source code to study in preparation for writing own package | CC BY-SA 2.5 | null | 2010-12-13T00:56:07.200 | 2010-12-20T19:51:03.293 | 2010-12-13T03:21:06.557 | 183 | 183 | [
"r"
]
|
5419 | 2 | null | 5418 | 1 | null | I would recommend Hadley's reshape package. You can find the source at [https://github.com/hadley/reshape](https://github.com/hadley/reshape)
| null | CC BY-SA 2.5 | null | 2010-12-13T01:42:30.347 | 2010-12-13T01:42:30.347 | null | null | 277 | null |
5420 | 2 | null | 5414 | 6 | null | Modeling the variance structure is arguably the most powerful and important single feature of mixed models. This extends beyond variance structure to include correlation among observations. Care must be taken to build an appropriate covariance structure; otherwise tests of hypotheses, confidence intervals, and estimates of treatment means may not be valid. Often one needs knowledge of the experiment to specify the correct random effects.
SAS for Mixed Models is my go-to resource, even if I want to do the analysis in R.
| null | CC BY-SA 2.5 | null | 2010-12-13T02:13:33.037 | 2010-12-13T02:13:33.037 | null | null | 2310 | null |
5421 | 2 | null | 5418 | 7 | null | Why not take an empirically-driven random sampling approach? Just pick a few and see which work for you.
Kidding aside, just look at a few packages you yourself use and are familiar with. Downloading them is easy, or if you prefer you can also view them via a web interface at R-Forge, RForge, or Github.
You will most likely end up with different packages for different ideas. Some may help you with the way they integrate, say, a vignette. Some may help with compiled code. Or unit tests. Or Roxygen. There are about 2600 of them, so why obsess over a single best?
| null | CC BY-SA 2.5 | null | 2010-12-13T02:47:06.790 | 2010-12-13T02:47:06.790 | null | null | 334 | null |
5422 | 2 | null | 3511 | 4 | null | You may want to check the paper "Efficient L1-Regularized Logistic Regression", which describes an IRLS-based algorithm for the LASSO.
Regarding the implementation, this link may be useful:
(http://ai.stanford.edu/~silee/softwares/irlslars.htm).
| null | CC BY-SA 2.5 | null | 2010-12-13T03:03:02.913 | 2010-12-13T03:03:02.913 | null | null | null | null |
5423 | 2 | null | 5368 | 0 | null | Factor Analysis differs from PCA by an extra "rotation" step. The purpose is to make the resulting factors more interpretable. check out my blog post for more details and a good resource: [http://blog.bzst.com/2009/03/principal-components-analysis-vs-factor.html](http://blog.bzst.com/2009/03/principal-components-analysis-vs-factor.html)
| null | CC BY-SA 2.5 | null | 2010-12-13T04:35:59.257 | 2010-12-13T04:35:59.257 | null | null | 1945 | null |
5425 | 2 | null | 1995 | 9 | null | Here's another perspective on using multilevel vs. regression models: in an interesting paper, Afshartous and de Leeuw show that if the purpose of the modeling is prediction (that is, predicting new observations), the choice of model is different from when the goal is inference (where you try to match the model with the data structure). The paper that I am referring to is
Afshartous, D., de Leeuw, J. (2005). Prediction in multilevel models. J. Educat. Behav. Statist. 30(2):109–139.
I just found another related paper by these authors here: [http://moya.bus.miami.edu/~dafshartous/Afshartous_CIS.pdf](http://moya.bus.miami.edu/~dafshartous/Afshartous_CIS.pdf)
| null | CC BY-SA 2.5 | null | 2010-12-13T05:08:35.643 | 2010-12-13T05:08:35.643 | null | null | 1945 | null |
5426 | 2 | null | 1995 | 4 | null | I learned from Snijders and Bosker, Multilevel Analysis: An introduction to basic and advanced multilevel modeling. It is very well pitched at the beginner, I think; it must be, because I am a thicko where these things are concerned and it made sense to me.
I second the Gelman and Hill as well, a truly brilliant book.
| null | CC BY-SA 2.5 | null | 2010-12-13T07:47:49.127 | 2010-12-13T07:47:49.127 | null | null | 199 | null |
5427 | 2 | null | 5414 | 9 | null | The common pitfall I see is ignoring the variance of the random effects. If it is large compared to the residual variance or the variance of the dependent variable, the fit usually looks nice, but only because the random effects account for all the variance. But since the graph of actual vs. predicted values looks nice, you are inclined to think that your model is good.
Everything falls apart when such a model is used to predict new data. Usually you can then use only the fixed effects, and the fit can be very poor.
| null | CC BY-SA 2.5 | null | 2010-12-13T08:21:53.467 | 2010-12-13T08:21:53.467 | null | null | 2116 | null |
5428 | 2 | null | 5351 | 15 | null | Suppose we have a regression
\begin{align*}
y=X\beta+u
\end{align*}
Then OLS estimate $\hat{\beta}$ is
\begin{align*}
\widehat{\beta}-\beta=(X'X)^{-1}X'u
\end{align*}
and assuming that $\hat{\beta}$ is an unbiased estimate we have
\begin{align*}
Var(\widehat{\beta})=E\left[(X'X)^{-1}X'uu'X(X'X)^{-1}\right]
\end{align*}
The usual OLS assumptions are that $E(u|X)=0$ and $E(uu'|X)=\sigma^2I_n$ which gives us
\begin{align*}
Var(\widehat{\beta})=\sigma^2E(X'X)^{-1}
\end{align*}
This covariance matrix is usually reported in statistical packages.
If the $u_i$ are heteroscedastic and/or autocorrelated, then $E(uu'|X)\neq\sigma^2I_n$ and the usual output gives misleading results. To get correct results, HAC standard errors are calculated. All the methods for HAC errors calculate
\begin{align*}
diag(E(X'X)^{-1}X'uu'X(X'X)^{-1}).
\end{align*}
They differ in their assumptions about what $E(uu'|X)$ looks like.
So it is natural that the function `NeweyWest` requires a linear model. The Newey-West method calculates the correct standard errors of the linear model estimator. So your solution is perfectly correct if you assume that your stock returns follow the model
\begin{align}
r_t=\mu+u_t
\end{align}
and you want to estimate $Var(\mu)$ guarding against irregularities in $u_t$.
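As a minimal sketch of this approach (assuming a numeric return series `r` and the sandwich and lmtest packages):
```
library(sandwich)
library(lmtest)

fit <- lm(r ~ 1)                        # the model r_t = mu + u_t
coeftest(fit, vcov. = NeweyWest(fit))   # Newey-West (HAC) standard error of mu
```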
If on the other hand you want to estimate "correct" $Var(r_t)$ (whatever that means), you should check out volatility models, such as GARCH and its variants. They assume that
\begin{align*}
r_t=\sigma_t\varepsilon_t
\end{align*}
where the $\varepsilon_t$ are iid standard normal. The goal is then to correctly estimate $\sigma_t$. Then the conditional variance of $r_t$ is $\sigma_t^2$, and you have a "correct" estimate of your variance, guarding against the usual idiosyncrasies of stock returns such as volatility clustering, skewness, etc.
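One possible sketch of this route (assuming the fGarch package and the same return series `r`; other packages such as rugarch or tseries would work as well):
```
library(fGarch)

g <- garchFit(~ garch(1, 1), data = r, trace = FALSE)  # fit a GARCH(1,1)
summary(g)
sigma_t <- volatility(g)  # fitted conditional standard deviations sigma_t
```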
| null | CC BY-SA 3.0 | null | 2010-12-13T09:08:56.433 | 2014-01-02T14:13:55.960 | 2014-01-02T14:13:55.960 | 2116 | 2116 | null |
5429 | 2 | null | 3849 | 3 | null | As suggested by [whuber](https://stats.stackexchange.com/users/919/whuber) & [Srikant Vadali](https://stats.stackexchange.com/users/28/srikant-vadali) I posted [this question](http://www.stata.com/statalist/archive/2010-12/msg00443.html) on Statalist.
[Maarten L. Buis](http://www.maartenbuis.nl/) [pointed](http://www.stata.com/statalist/archive/2010-12/msg00451.html) that estout itself can be [used](http://repec.org/bocode/e/estout/advanced.html#advanced404) for processing PCA (as well as factor analysis) results. Thanks Maarten.
| null | CC BY-SA 2.5 | null | 2010-12-13T12:09:19.943 | 2010-12-13T12:09:19.943 | 2017-04-13T12:44:24.677 | -1 | 22 | null |
5430 | 1 | null | null | 4 | 277 | This image shows a histogram (200 bins) of accumulated distances from a radar distance meter (very noisy).
The peak around 7 meters is an object. I thought this looked kind of like a normal distribution, at least if you ignore values <4m (which is reasonable for this application).
I have also achieved a reasonable fit with a [log-normal distribution](http://en.wikipedia.org/wiki/Log-normal_distribution).
What I am trying to do is to filter out true distances based on the probability distribution.

EDIT: log-normal distribution:

Note: this question can also be found at [math.stackexchange.com](https://math.stackexchange.com/questions/14162/is-this-a-probability-distribution)
| How would you filter this? | CC BY-SA 2.5 | null | 2010-12-13T12:43:59.630 | 2011-07-17T06:19:51.090 | 2017-04-13T12:19:38.853 | -1 | 1411 | [
"distributions",
"signal-processing"
]
|
5431 | 1 | null | null | 4 | 1818 | Should small sample sizes of 5 and less be included in reports?
This is an area of medical training that is currently reported only to the course-level trainers and is not included in the statistics reported to upper management. All of the data is mined from course-level surveys.
| Statistical significance with sample size of 5 or less | CC BY-SA 2.5 | null | 2010-12-13T15:32:39.450 | 2010-12-17T07:43:57.623 | 2010-12-17T07:43:57.623 | 223 | 2362 | [
"hypothesis-testing",
"statistical-significance"
]
|
5432 | 2 | null | 5431 | 5 | null | If you are doing things properly, the worst thing that can happen is that you fail to reject $H_0$ (which does not mean $H_0$ is true, only that you do not have enough evidence to reject it).
Anyway, there can be something significant with 5 observations:
If you need to test, for example, whether your random variable has a negative mean under the assumption of variance one, then observing 5 values above 10 is highly significant ....
Just be careful not to use a test that relies on something like asymptotic normality or other assumptions that cannot be satisfied or checked with only five observations.
| null | CC BY-SA 2.5 | null | 2010-12-13T15:48:08.347 | 2010-12-13T15:48:08.347 | null | null | 223 | null |
5433 | 2 | null | 5418 | 15 | null | I would suggest looking at the [zoo](http://cran.r-project.org/web/packages/zoo/index.html) package for the following reasons:
- It has several well-written vignettes;
- It uses a namespace using useDynLib, import, export, and S3method;
- It has several unit tests using RUnit;
- It provides good examples of how to create/document S3 methods;
- It has some calls to C code via the .Call interface;
- It contains a (plotting) demo;
- It aims to be consistent with the core R installation (e.g. functions behave similarly, it doesn't mask / override base functions, etc.)
It doesn't use roxygen, which is very handy, but 7 out of 8 ain't bad. ;-)
To respond to your criteria:
- The concept is simple: zoo is a matrix-like class ordered by something. No domain-specific knowledge necessary.
- zoo does seem to have a few idiosyncratic coding conventions, but nothing over-the-top that prevents understanding the code.
- zoo aims to be as consistent with R as possible.
| null | CC BY-SA 2.5 | null | 2010-12-13T16:50:50.383 | 2010-12-20T19:51:03.293 | 2010-12-20T19:51:03.293 | 1657 | 1657 | null |
5434 | 1 | 5437 | null | 6 | 9726 | I am doing multiple regression with some data (5 predictors, 1 response). Since the response is discrete and non-negative, I thought I would try Poisson regression. However, the data are significantly overdispersed (variance > mean), so I am now trying negative binomial regression.
I was able to fit the model with this code.
```
library(MASS)
model.nb <- glm.nb(Response ~ Pred1 + Pred2 + Pred3 + Pred4 + Pred5 - 1, data=d)
```
Now I would like to see if I can get a better fit by including interactions between the predictors. However, when I try to do so, I get the following error.
```
> model.nb.intr <- glm.nb(Response ~ Pred1 * Pred2 * Pred3 * Pred4 * Pred5 - 1, data=d)
Error: no valid set of coefficients has been found: please supply starting values
```
Any ideas what may be causing this?
| No valid coefficients for NegBin regression | CC BY-SA 2.5 | null | 2010-12-13T17:06:52.920 | 2010-12-14T22:23:25.407 | 2010-12-13T17:36:44.883 | 1390 | 1973 | [
"r",
"regression",
"negative-binomial-distribution"
]
|
5435 | 2 | null | 5431 | 2 | null | It depends what you're testing.
If you are testing for variance and your $H_0$ is that $V = 1$, but your values are $0, +1000, -1000, +2000, -2000$, then it's clear that you should reject $H_0$. If you are testing the hypothesis that the mean is not 2, then you do not have enough data to reject it.
For most things, though, 5 is too little.
(I didn't run through the math above; it's just to build intuition. I'd generally recommend a Bayes ratio as the best quantitative test.)
| null | CC BY-SA 2.5 | null | 2010-12-13T17:17:46.440 | 2010-12-13T17:17:46.440 | null | null | 2067 | null |
5436 | 2 | null | 5434 | 9 | null | Your model is too complex for the computer to work out some reasonable starting values that do not lead to infinite deviance when doing the `glm.fit` iterations.
Have you got enough data to estimate all these interactions? Do you think it is plausible for all predictors to interact with each other? If not, think about which predictors might interact and include only those terms.
The error asks you to supply some starting values for it to work from. For this, you need to supply a vector of parameter values as argument `start`; from `?glm`:
```
start: starting values for the parameters in the linear predictor.
```
You need to supply 31 model parameters (I hope you have many 1000s of data points?) to `start`, in this order:
```
> colnames(model.matrix(Y ~ Pred1*Pred2*Pred3*Pred4*Pred5 -1, data = DF))
[1] "Pred1" "Pred2"
[3] "Pred3" "Pred4"
[5] "Pred5" "Pred1:Pred2"
[7] "Pred1:Pred3" "Pred2:Pred3"
[9] "Pred1:Pred4" "Pred2:Pred4"
[11] "Pred3:Pred4" "Pred1:Pred5"
[13] "Pred2:Pred5" "Pred3:Pred5"
[15] "Pred4:Pred5" "Pred1:Pred2:Pred3"
[17] "Pred1:Pred2:Pred4" "Pred1:Pred3:Pred4"
[19] "Pred2:Pred3:Pred4" "Pred1:Pred2:Pred5"
[21] "Pred1:Pred3:Pred5" "Pred2:Pred3:Pred5"
[23] "Pred1:Pred4:Pred5" "Pred2:Pred4:Pred5"
[25] "Pred3:Pred4:Pred5" "Pred1:Pred2:Pred3:Pred4"
[27] "Pred1:Pred2:Pred3:Pred5" "Pred1:Pred2:Pred4:Pred5"
[29] "Pred1:Pred3:Pred4:Pred5" "Pred2:Pred3:Pred4:Pred5"
[31] "Pred1:Pred2:Pred3:Pred4:Pred5"
```
I would use the coefficients from the first model to fill in the first 5 starting values; what you do about the others is up to you. You could try starting them all off at 1 and see if that gets the model to fit (see the sketch below).
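For instance (a sketch only; the value 1 for the 26 interaction terms is just a guess and may still fail to converge):
```
start_vals <- c(coef(model.nb), rep(1, 26))  # 5 main effects + 26 interactions

model.nb.intr <- glm.nb(Response ~ Pred1 * Pred2 * Pred3 * Pred4 * Pred5 - 1,
                        data = d, start = start_vals)
```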
You might also benefit from code in the [pscl package](http://cran.r-project.org/web/packages/pscl/index.html) which can fit hurdle and zero-inflated models to count data.
| null | CC BY-SA 2.5 | null | 2010-12-13T17:33:49.410 | 2010-12-13T17:46:32.053 | 2010-12-13T17:46:32.053 | 1390 | 1390 | null |
5437 | 2 | null | 5434 | 10 | null | Before jumping to a model that includes all interactions, you can try adding only the 2-way interactions:
```
model.nb.intr <- glm.nb(Response ~ (Pred1 + Pred2 + Pred3 + Pred4 + Pred5)^2 - 1, data=d)
```
| null | CC BY-SA 2.5 | null | 2010-12-13T17:57:44.083 | 2010-12-13T19:32:30.540 | 2010-12-13T19:32:30.540 | 930 | 279 | null |
5438 | 2 | null | 5115 | 7 | null | [David Donoho](http://stat.stanford.edu/~donoho/) development of multiscale ideas in statistics, and a lot of theoretically justified while practically very efficient ideas in very high dimensional statistics, CHA: computational harmonic analysis,...
| null | CC BY-SA 2.5 | null | 2010-12-13T18:20:38.607 | 2010-12-15T17:36:43.840 | 2010-12-15T17:36:43.840 | 919 | 223 | null |
5439 | 2 | null | 5431 | 2 | null | Five is pretty small, but not so small that it doesn't contain some information. If you get 5 course evaluations that agree an instructor is ineffective, that's pretty strong evidence. That said, the only time the evidence will be conclusive is when it is rather extreme, e.g. all zeroes or all tens. Still, it seems foolish to toss it out.
You may also have a later opportunity to pool evidence, say if the same instructor or course is offered multiple times. Then those fives start adding up.
| null | CC BY-SA 2.5 | null | 2010-12-13T18:33:03.063 | 2010-12-13T18:33:03.063 | null | null | 5792 | null |
5440 | 2 | null | 5115 | 6 | null | [Emanuel Parzen](http://en.wikipedia.org/wiki/Emanuel_Parzen) for kernel density estimation and reproducing kernel Hilbert space theory for stochastic processes.
| null | CC BY-SA 3.0 | null | 2010-12-13T18:39:58.043 | 2013-10-17T21:59:51.590 | 2013-10-17T21:59:51.590 | 8960 | 223 | null |
5441 | 2 | null | 5115 | 7 | null | [Lucien Le Cam](http://en.wikipedia.org/wiki/Lucien_Le_Cam) for his contribution to mathematical statistics. (maybe Local asymptotic normality and contiguity made him famous)
| null | CC BY-SA 2.5 | null | 2010-12-13T18:42:29.023 | 2010-12-13T18:42:29.023 | null | null | 223 | null |
5442 | 2 | null | 1995 | 6 | null | Here's an example where a multilevel model might be "essential." Suppose you want to rate the "quality" of the education provided by a set of schools using students' test scores. One way to define school quality is in terms of average test performance after taking student characteristics into account. You could conceptualize this as,
$$y_{is} = \alpha_s + X_{is}'\beta_s + \epsilon_{is},$$
where $y_{is}$ is the continuous test score for student $i$ in school $s$, $X_{is}$ are student attributes centered at school means, $\beta_s$ is a school-specific coefficient on these attributes, $\alpha_s$ is a "school effect" that measures school quality, and $\epsilon_{is}$ are student-level idiosyncrasies in test-taking performance. Interest here focuses on estimating the $\alpha_s$'s, which measure the "added value" that the school provides to students once their attributes are accounted for. You want to take student attributes into account, because you don't want to punish a good school that has to deal with students with certain disadvantages, therefore depressing average test scores despite the high "added value" that the school provides to its students.
With the model in hand, the issue becomes one of estimation. If you have lots of schools and lots of data for each school, the nice properties of OLS (see Angrist and Pischke, Mostly Harmless..., for a current review) suggest that you would want to use that, with suitable adjustments to standard errors to account for dependencies, and using dummy variables and interactions to get at school level effects and school specific intercepts. OLS may be inefficient, but it's so transparent that it might be easier to convince skeptical audiences if you use that. But if your data are sparse in certain ways---particularly if you have few observations for some schools---you may want to impose more "structure" on the problem. You may want to "borrow strength" from the larger-sample schools to improve the noisy estimates that you would get in the small-sample schools if the estimation were done with no structure. Then, you might turn to a random effects model estimated via FGLS, or maybe an approximation to direct likelihood given a certain parametric model, or even Bayes on a parametric model.
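As a hedged illustration (my own sketch, assuming a data frame `dat` with columns `score`, `x`, and `school` and the lme4 package), the random-effects version of this model could be fit as:
```
library(lme4)

m <- lmer(score ~ x + (1 + x | school), data = dat)  # school-specific intercepts and slopes
coef(m)$school   # per-school intercept and slope estimates (the alpha_s, beta_s part)
```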
In this example, the use of a multilevel model (however we decide to fit it, ultimately) is motivated by the direct interest in the school-level intercepts. Of course, in other situations, these group level parameters may be nothing more than nuisance. Whether or not you need to adjust for them (and, therefore, still work with some kind of multilevel model) depends on whether certain conditional exogeneity assumptions hold. On that, I would recommend consulting the econometric literature on panel data methods; most insights from there carry over to general grouped data contexts.
| null | CC BY-SA 2.5 | null | 2010-12-13T19:08:53.720 | 2010-12-13T19:08:53.720 | null | null | 96 | null |
5443 | 1 | 5547 | null | 5 | 2672 | When building a CART model (specifically a classification tree) using rpart (in R), it is sometimes obvious that there are variables (X's) that are meaningful for predicting some of the values of the outcome variable (y), while other predictors are relevant only for other values of y.
How can one estimate which explanatory variable is "used" for which of the predicted values of the outcome variable?
Here is example code in which x2 is the only important variable for predicting "b" (one of the y outcomes). There is no predictive variable for "c", and x1 is a predictor for "a", assuming that x2 permits it.
How can this situation be extracted from the fitted model?
```
library(rpart)  # provides rpart() and prune()

N <- 200
set.seed(5123)
x1 <- runif(N)
x2 <- runif(N)
x3 <- runif(N)
y <- sample(letters[1:3], N, replace = TRUE)
y[x1 < .5] <- "a"
y[x2 < .1] <- "b"
fit <- rpart(y ~ x1 + x2)
fit2 <- prune(fit, cp = 0.07)
plot(fit2)
text(fit2, use.n = TRUE)
```
Thanks.
| Estimate the "meaningful" predictors for a value in a CART model (rpart) | CC BY-SA 2.5 | null | 2010-12-13T19:24:57.693 | 2016-12-14T12:53:00.710 | null | null | 253 | [
"r",
"cart",
"rpart"
]
|
5444 | 1 | 5445 | null | 4 | 939 | Motivation
I have a prior on a random variable $X\sim \beta^\prime(\alpha,\beta)$ that I would like to use in JAGS. JAGS does not support the $\beta^\prime$ distribution, but it does support the $F$ distributions, and the F distribution is related to the $\beta^\prime(\alpha,\beta)$ like this:
$$\text{if } X\sim \beta^\prime(\alpha,\beta) \text{ then } X\frac{\alpha}{\beta}\sim F(2\alpha, 2\beta)$$
Question
Is there a transformation $$c,d = f(\alpha, \beta)$$ such that:
$$\text{if } X\sim \beta^\prime(\alpha,\beta)\text{ then } X\sim F(c, d)?$$
What is an appropriate analytical approach or solution to this problem?
Current Approach
My solution is to use simulation. Although sufficient for my application, a formal solution would be more satisfying.
```
library(MASS)  # provides fitdistr()

set.seed(0)
alpha <- 2
beta <- 4
Y <- rf(100000, 2 * alpha, 2 * beta) * (beta / alpha)
parms <- signif(fitdistr(Y, 'f', start = list(df1 = 1, df2 = 2))$estimate, 2)
```
Update: whuber's answer states that there is no general transformation.
| How do I transform the parameters of the F distribution? | CC BY-SA 2.5 | 0 | 2010-12-13T19:41:36.827 | 2010-12-13T23:31:16.857 | 2010-12-13T23:31:16.857 | 1381 | 1381 | [
"data-transformation",
"random-variable"
]
|
5445 | 2 | null | 5444 | 6 | null | I don't quite get it: why can't you just generate an $F_{2\alpha, 2\beta}$ variate and rescale it by $\beta/\alpha$? I also don't see the connection between the code in your current approach and your question, because $Y$ appears to be an $F_{2\alpha, 2\beta}$ variate rescaled by $\alpha/\beta$ rather than $\beta/\alpha$.
Nevertheless, taking the question at face value, if there are parameters $c$ and $d$ corresponding to $\alpha$ and $\beta$, then
- The mean of $X$ must be $\beta/\alpha$ times the mean of $F_{2\alpha, 2\beta}$ and it is also equal to the mean of $F_{c,d}$.
- The mode of $X$ must be $\beta/\alpha$ times the mode of $F_{2\alpha, 2\beta}$ and it is also the mode of $F_{c,d}$.
Using standard formulas and solving gives a unique solution
$$\eqalign{
c = & \frac{2 \left(\alpha^2+\alpha^2 \beta \right)}{\alpha -\alpha \beta + 2 \alpha^2 \beta + 2 \beta ^2-2 \alpha \beta^2} \cr
d = & \frac{2 \beta^2}{\alpha -\alpha \beta +\beta^2}.
}$$
We can also equate variances:
- The variance of $X$ must equal $(\beta/\alpha)^2$ times the variance of $F_{2\alpha, 2\beta}$ and it must equal the variance of $F_{c,d}$.
Except possibly for special values of $\alpha$ and $\beta$ (most notably, $\alpha = \beta$), this requirement conflicts with the preceding solution. Therefore there does not exist any such transformation in general.
| null | CC BY-SA 2.5 | null | 2010-12-13T20:30:30.397 | 2010-12-13T20:30:30.397 | null | null | 919 | null |
5446 | 2 | null | 5115 | 12 | null | [George Dantzig](http://en.wikipedia.org/wiki/George_Dantzig) for the Simplex Method, and for being the student who mistook two open statistics problems that Neyman had written on the board for homework problems, and in his "ignorance" solving them. I'd vote for him just for the story.
| null | CC BY-SA 2.5 | null | 2010-12-13T22:10:48.643 | 2010-12-13T22:10:48.643 | null | null | 1764 | null |
5447 | 2 | null | 5115 | 22 | null | [Blaise Pascal and Pierre de Fermat](http://en.wikipedia.org/wiki/Problem_of_points) for creating the theory of probability and inventing the idea of expected value (1654) in order to solve a problem grounded in statistical observations (from gambling).
| null | CC BY-SA 2.5 | null | 2010-12-13T23:21:55.837 | 2010-12-13T23:21:55.837 | null | null | 919 | null |
5448 | 1 | null | null | 3 | 1256 | I have conducted a confirmatory factor analysis (CFA) to test the fit of a model with 5 factors and 5 items per factor. I used the modification indices to alter the model until I obtained statistics indicating an acceptable fit of the model to my data.
As others have done in my area I then conducted a principal components analysis (PCA) with a varimax rotation to check that items loaded on the factors identified by the CFA; however, not all of them did.
- Is it unusual for the results of a PCA to be inconsistent with a confirmatory factor analysis?
- Could this be because I have a small sample size ($n=96$)?
- As the CFA model has 'acceptable fit', should I simply work with the scale structures from those results and not do the PCA?
| Why would the results of PCA differ from a confirmatory factor analysis? | CC BY-SA 3.0 | null | 2010-12-13T23:25:11.007 | 2012-05-15T05:17:25.970 | 2012-05-15T05:17:25.970 | 183 | null | [
"pca",
"factor-analysis"
]
|
5450 | 1 | 5455 | null | 28 | 30144 | In a regression, the interaction term wipes out both related direct effects. Do I drop the interaction or report the outcome? The interaction was not part of the original hypothesis.
| What if interaction wipes out my direct effects in regression? | CC BY-SA 2.5 | null | 2010-12-13T23:43:17.117 | 2013-10-10T09:11:52.483 | 2010-12-14T08:35:11.153 | 930 | 2367 | [
"regression",
"interaction"
]
|
5451 | 2 | null | 4551 | 9 | null | Requesting, and perhaps obtaining The Flow Chart: That graphical thing where you say what the level of your variables are and what sort of relationship you're looking for, and you follow the arrows down to get a Brand Name Test or a Brand Name Statistic. Sometimes offered with mysterious 'parametric' and 'non-parametric' paths.
| null | CC BY-SA 2.5 | null | 2010-12-13T23:50:20.977 | 2010-12-13T23:50:20.977 | null | null | 1739 | null |
5452 | 1 | 5471 | null | 7 | 4469 | All,
I am trying to study the GBM package in R.
I. I wanted to try and figure out where the deviance, initial value, gradient and terminal node estimates came from. Please see this snippet:

To start out, I was not sure where the Bernoulli deviance came from. I expected that the deviance would be -2 multiplied by the log-likelihood

, ignoring the weight variable w. Can anyone suggest what I am doing wrong (or missing) in trying to derive what is shown in GBM?
II. I was also confused about where the terminal node estimates come from.
Any help is very appreciated!
Brian
| R Package GBM - Bernoulli Deviance | CC BY-SA 2.5 | null | 2010-12-14T00:09:21.473 | 2016-04-25T10:02:21.510 | null | null | 2040 | [
"machine-learning"
]
|
5453 | 1 | 5458 | null | 4 | 424 | Let's say you have a process that generates data according to r = sin(t) + epsilon, where epsilon ~ N(0,V) is Gaussian noise. The unconditional variance of r is 0.5 + V.
Let's say we're forecasting r with a model m, and that our forecast is "perfect" in that m = sin(t). Construct v = r - m, which is the forecast error, and will be ~ N(0,V).
According to Bayes, we then have p(r|v) ~ p(v|r)*p(r), which is the product of two Gaussian PDFs, one with variance V, the other (0.5+V). This product will itself be a Gaussian with total variance T = 1/(1/V + 1/(0.5+V)).
The funny thing is that T < V guaranteed, in fact T/V = (V+0.5)/(2V+0.5)! In other words, according to Bayes, the variance of your forecast error is less than the inherent noise in the data generation process itself!?! Isn't that impossible? Can anyone help me sort through this?
Thank you in advance,
-Jesse
---
whuber,
a derivation of the product of two Gaussian pdfs is from [https://people.ok.ubc.ca/jbobowsk/phys327/Gaussian%20Convolution.pdf](https://people.ok.ubc.ca/jbobowsk/phys327/Gaussian%20Convolution.pdf). The formula for the variance I gave, 1 / (1 / (V+0.5) + 1/V) seems to be accurate. Another way of writing this same expression is V*(V+0.5)/(V + (V + 0.5)) from which my equation for T/V follows directly - showing that T < V.
Perhaps you were citing the variance for the convolution of two Gaussian pdfs? The sum of the variances is correct for the convolution. I believe Bayes specifies the product, not the convolution, however, or do I have that wrong?
| Why does continuous Bayesian analysis seem to give this contradictory result? | CC BY-SA 3.0 | null | 2010-12-14T01:45:45.857 | 2014-12-01T12:09:50.643 | 2014-12-01T12:09:50.643 | 930 | 2379 | [
"bayesian",
"inference"
]
|
5454 | 2 | null | 5418 | 9 | null | I do not consider myself an established R package developer but have recently undergone the process of writing and maintaining a package for my work environment.
I had previously been writing / maintaining / updating a set of scripts that I would pass from project to project via the `source()` function. The end result of this was that I'd end up with mostly redundant scripts hanging out in various places on our network drives. It was never clear where the most up-to-date set of scripts were located. I have since migrated to writing / maintaining a package utilizing roxygen. It has drastically simplified my life and made it easier to share my work with colleagues.
Based on your criteria above, I second the recommendation of reviewing the packages that Hadley has written. In particular, I think reading through the [devtools wiki](https://github.com/hadley/devtools/wiki) would be very helpful. Hadley's code is well documented and several of his packages utilize roxygen. I think writing and maintaining one document for both R functions and R documentation is much easier than having them split out in two locations (.R and .RD files).
Hadley's packages also serve some fairly basic concepts and are relatively easy to deparse (imho) if you are looking for pointers on the technical aspect ideas. I find myself digging through the [plyr](https://github.com/hadley/plyr) source code when I'm looking for a pointer on roxygen documentation or other fundamental tasks.
| null | CC BY-SA 2.5 | null | 2010-12-14T02:02:46.990 | 2010-12-14T02:02:46.990 | null | null | 696 | null |
5455 | 2 | null | 5450 | 27 | null | I think this one is tricky; as you hint, there's 'moral hazard' here: if you hadn't looked at the interaction at all, you'd be free and clear, but now that you have there is a suspicion of data-dredging if you drop it.
The key is probably a change in the meaning of your effects when you go from the main-effects-only to the interaction model. What you get for the 'main effects' depends very much on how your treatments and contrasts are coded. In R, the default is treatment contrasts with the first factor levels (the ones with the first names in alphabetical order unless you have gone out of your way to code them differently) as the baseline levels.
Say (for simplicity) that you have two levels, 'control' and 'trt', for each factor. Without the interaction, the meaning of the 'v1.trt' parameter (assuming treatment contrasts as is the default in R) is "average difference between 'v1.control' and 'v1.trt' group"; the meaning of the 'v2.trt' parameter is "average difference between 'v2.control' and 'v2.trt'".
With the interaction, 'v1.trt' is the average difference between 'v1.control' and 'v1.trt' in the 'v2.control' group, and similarly 'v2.trt' is the average difference between v2 groups in the 'v1.control' group. Thus, if you have fairly small treatment effects in each of the control groups, but a large effect in the treatment groups, you could easily see what you're seeing.
The only way I can see this happening without a significant interaction term, however, is if all the effects are fairly weak (so that what you really mean by "the effect disappeared" is that you went from p=0.06 to p=0.04, across the magic significance line).
Another possibility is that you are 'using up too many degrees of freedom' -- that is, the parameter estimates don't actually change that much, but the residual error term is sufficiently inflated by having to estimate another 4 [ = (2-1)*(5-1)] parameters that your significant terms become non-significant. Again, I would only expect this with a small data set/relatively weak effects.
One possible solution is to move to sum contrasts, although this is also delicate -- you have to be convinced that 'average effect' is meaningful in your case. The very best thing is to plot your data and to look at the coefficients and understand what's happening in terms of the estimated parameters.
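A small sketch of the sum-contrasts option mentioned above (hypothetical data with factors v1 and v2; the lower-order terms are then interpreted as 'average' effects across the levels of the other factor):
```
fit <- lm(y ~ v1 * v2, data = d,
          contrasts = list(v1 = "contr.sum", v2 = "contr.sum"))
summary(fit)

## or set sum contrasts globally:
## options(contrasts = c("contr.sum", "contr.poly"))
```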
Hope that helps.
| null | CC BY-SA 2.5 | null | 2010-12-14T02:25:20.237 | 2010-12-14T02:25:20.237 | null | null | 2126 | null |
5456 | 2 | null | 5115 | 29 | null | [Jerzy Neyman](http://en.wikipedia.org/wiki/Jerzy_Neyman) and [Egon Pearson](http://en.wikipedia.org/wiki/E.S._Pearson) for work on experimental design, hypothesis testing, confidence intervals, and the Neyman-Pearson lemma.
| null | CC BY-SA 2.5 | null | 2010-12-14T03:02:22.327 | 2010-12-14T03:02:22.327 | null | null | 2077 | null |
5457 | 1 | 5459 | null | 12 | 4404 | When I tell my non-statistical friends I am a grad student pursuing a PhD in statistics, they naturally say "oh, so you want to be a professor?". I tell them no, I actually plan to work in industry. Then they respond with, "and do what?". I have not found a good response to this question. I want to give them a range of interesting problems that statisticians work on, but my response is usually muddled or too technical. I think most people picture us working in Excel and calculating means and standard deviations.
What is a good response to this question that is brief and interesting?
Thanks! (not sure what a good existing tag is for this question)
| What is it that a statistician does? | CC BY-SA 2.5 | null | 2010-12-14T03:09:29.683 | 2014-06-04T15:11:30.807 | 2010-12-14T04:19:24.930 | null | 2310 | [
"careers"
]
|
5458 | 2 | null | 5453 | 3 | null | Actually, the variances are zero: $V(v|r) = V(r|v) = 0$. $p(v|r)$ is a Dirac delta function with its peak at the right spot (where $v = r - m$).
If you know $v$ or $r$, the other one is a deterministic function of the one you know.
| null | CC BY-SA 2.5 | null | 2010-12-14T03:28:01.720 | 2010-12-14T16:28:03.290 | 2010-12-14T16:28:03.290 | 2067 | 2067 | null |
5459 | 2 | null | 5457 | 21 | null | The area I am most interested in is the realm of biostatistics. Statistics can be used in this area for anything from summarizing the results of a drug trial and determining whether Prozac really is more effective than a placebo sugar pill, to detecting tumors in cancer patients. Please check out this presentation I found:
[What is Biostatistics?](http://www.stat.ncsu.edu/working_groups/biostat/biomed.pdf)
Remember, a statistician is a function that maps a set of data to a set of decisions.
| null | CC BY-SA 3.0 | null | 2010-12-14T03:54:53.847 | 2011-10-17T15:45:36.370 | 2011-10-17T15:45:36.370 | 1118 | 1118 | null |
5460 | 2 | null | 5457 | 4 | null | A statistician performs the following generic steps:
- Has a goal: test a theory, generate a prediction, etc.
- Obtains data (via surveys, experiments, third parties, etc.) that is consistent with the above goal.
- Develops a statistical model that says roughly: variable of interest = function of covariates + random error.
- Estimates the impact of covariates on the variable of interest using various tools from math/stats.
- Uses the resulting estimates to evaluate the theory/generate predictions.
You can contextualize the above generic steps to various settings. The context can be changed depending on the audience. For example:
- With doctors: discuss how you would assess the effectiveness of drugs
- With a more general audience discuss how you would assess the winner in an upcoming election
- With business professionals discuss how you would assess the relative impact of advertising designs etc.
| null | CC BY-SA 2.5 | null | 2010-12-14T04:18:06.930 | 2010-12-14T04:18:06.930 | null | null | null | null |
5461 | 1 | null | null | 1 | 201 | The title might be a bit misleading. Unfortunately statistics is not my area of forte, so gentle guidance along the right path is much appreciated. That said, here's my problem:
I'm working on analyzing precise time measurements from an experimental process. I have 16 measurement groups. Of these, one should (on average) differ slightly in magnitude from the others. The remaining 15 should have similar or identical properties. I'm trying to determine which of these measurement groups is the odd one out. My first pass at the problem involved a naive data collection routine rotating between sample groups until enough samples from each group were collected.
My data is clearly not normal. Furthermore, sample values seem to fluctuate as a function of time (non-scientific visual inspection shows that larger values and smaller values tend to cluster across sample groups). The variable under consideration should not be fluctuating like this, so I believe this effect is attributable to some other element in an admittedly noisy environment.
My first pass at analysis was to run a KS test comparing each sample group to the set consisting of all other collected data. This worked as expected: the sample which was known to vary showed a much lower p-value than the other candidates.
The KS test isn't ideal here. Despite a theoretical belief that the data is continuous, my measurement resolution limits dictate that there must be a number of duplicate data points (3 effective places of resolution and n between 10k and 1m per group).
Unfortunately, this sample set was taken from a somewhat idealized experimental condition. I need to improve my analysis to include some larger elements of noise. I have done the experiment in my real-world condition, and have been having trouble finding reliable ways to analyze my data.
The problem seems to be related to the fact that while my effect remains constant over much larger (and thus longer-running) sampling regimes, the macro-level time changes seem to be interfering. As I move to an order of magnitude more samples in my real-world condition, rather than converging on one single value as different, the KS test starts to show that ALL the values differ (and the ordering is inconsistent).
I've re-written my data collection to log both the timestamp for each sample, and several timing values taken over the course of my experimental procedure (originally I was just logging a single aggregate timing result by calculating a difference using these values). With this expanded data in hand, I've been looking for appropriate analysis techniques.
Here's where I'm stumped. I'm not sure what I should do with this timestamp data. I've turned a multiple sample comparison problem into a time sequence. If I can use the time data to even out my values, I'm still (at least potentially) left with a complicated multi-variable analysis. PCA assumes normality so that's out. ICA isn't appropriate because I know that most of my measured variables are not, in fact, independent of each other. Once I've sorted that out, I still need to compare to the overall group and figure out which one is different. Because each sample-per-group has a discrete timestamp, I don't have a simple way to compare across my sampling groups.
It seems like the Wilcoxon–Mann–Whitney (Wilcoxon rank-sum) test is likely to be of use here, but only after I've done work on the underlying data. Suggestions?
Edit: for what it's worth, this is a practical problem. Approximate solutions and things that narrow the candidate field are perfectly reasonable. No points for excessively pretty P values here.
| Removing macro-level time variance | CC BY-SA 2.5 | null | 2010-12-14T04:19:26.873 | 2010-12-15T03:04:59.567 | 2010-12-14T18:38:37.697 | 1750 | 1750 | [
"time-series",
"hypothesis-testing",
"statistical-significance",
"multiple-comparisons"
]
|
5462 | 1 | 5464 | null | 11 | 26301 | This is only for my high school senior project, so it doesn't have to be perfect. I'm doing a project on global warming, and I want to survey people for their opinions. I know that if I use a convenience sample of my classmates, I'll have a lot of bias. I was wondering if there was a site on the Internet where I could post my survey for random people to answer, so that I could get as close as possible to an SRS. If not, I'd be interested in some other suggestions for ways to carry out my survey.
| Is there a site to post my survey to so I can get a sample representative of the population? | CC BY-SA 2.5 | null | 2010-12-14T05:23:40.037 | 2014-04-28T11:30:47.080 | 2012-02-01T18:52:08.653 | 919 | 2371 | [
"survey",
"internet"
]
|
5463 | 2 | null | 5462 | 4 | null | I admire your ambition! To answer your question directly, I'm not familiar with any site. Perhaps a message board you're familiar with? Has your instructor addressed the issue of bias? It may be enough that you recognize the issue of bias in your sample and speak to how this will affect your survey. Most properly randomized nationally representative surveys have an N of somewhere between 1800 and 2300. [The GSS probably has a few questions regarding global warming](http://www.norc.uchicago.edu/GSS+Website/).
However, this will be inaccessible if you're not familiar with SPSS or Stata. I know others on here can help you better than I in regards to weighting responses to correct for bias.
| null | CC BY-SA 3.0 | null | 2010-12-14T05:45:34.817 | 2014-04-28T11:30:47.080 | 2014-04-28T11:30:47.080 | 22047 | 2262 | null |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.