Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
5464 | 2 | null | 5462 | 10 | null | For a high school project it will probably be difficult to gain a representative sample with your limited time and budget. I'd be somewhat skeptical that an online vendor can achieve a random sample of anything. If I had to do it I think I would use the phone book and either call or go to the addresses to distribute the survey in your community. Although this isn't perfect (it likely under-represents younger people, minorities, and people of lower socio-economic status), it makes the project more feasible in your situation (as opposed to conducting stratified sampling based on geographic regions). Both of these solutions are local samples, and would not get you a national sample. An agency I work for samples the local community based on parcel addresses and apartment listings obtained from the postal office (these sources of information are not always public though, and would require more front end work than opening up a phone book).
idclark is right and the [General Social Survey](http://www.norc.uchicago.edu/GSS+Website/Documentation/) has several variables addressing public opinion on global warming. You could either analyze that data directly (as it is open to the public) or if you are required to construct your own survey you could mimic the GSS's questions and then see how close your sample appears to be compared to the GSS. At the moment I am having a much easier time searching the GSS variable lists from the [ICPSR archive](http://www.icpsr.umich.edu/icpsrweb/ICPSR/ssvd/variables?study=General+Social+Surveys,+1972-2008+%5bCumulative+File%5d) than I am from the actual GSS website. You can actually conduct basic data analysis online at ICPSR of the [GSS data](http://www.icpsr.umich.edu/icpsrweb/ICPSR/studies/4697?archive=ICPSR&sdaAvailable=true&q=GSS) (you can basically do frequencies and cross-tabulations). If all you need are frequencies this circumvents the need to use stat software such as SPSS or Stata.
Getting a representative sample is difficult not only because true populations from which to construct your sample are sometimes hard to come by, but just as importantly because not everybody you do sample responds to the survey. I wouldn't be fixated on a national sample either; I'm sure you can construct just as interesting questions to answer by estimating public opinion of the local community, school mates, teachers, etc. What I find interesting is comparison (say, comparing your school teachers' opinions to those of the students). You could also conduct an experiment via surveys (say, manipulate how you present arguments for or against global warming and see how people differ in their opinions of global warming).
| null | CC BY-SA 2.5 | null | 2010-12-14T06:13:02.323 | 2010-12-14T06:13:02.323 | null | null | 1036 | null |
5465 | 1 | 5470 | null | 69 | 89945 | I am looking for some statistics (and probability, I guess) interview questions, from the most basic through the more advanced. Answers are not necessary (although links to specific questions on this site would do well).
| Statistics interview questions | CC BY-SA 2.5 | null | 2010-12-14T06:20:48.903 | 2022-05-21T08:18:45.927 | 2017-06-05T18:47:39.890 | 11887 | 795 | [
"intuition",
"careers"
]
|
5466 | 2 | null | 5465 | 6 | null | I often ask "how would you define/explain what forecasting is?"
Answers to that type of very general question help me see whether people are connected to a particular, concrete case of forecasting. There is no single right answer, but giving a synthetic (concise, well-organized) answer during an interview is not always easy :)
| null | CC BY-SA 2.5 | null | 2010-12-14T07:08:35.253 | 2010-12-14T23:00:17.420 | 2010-12-14T23:00:17.420 | 795 | 223 | null |
5467 | 2 | null | 5462 | 1 | null | A sample representative of the population cannot be obtained through the internet, as you will only get people interested in answering your survey online, which will give you a biased (self-selected) sample.
| null | CC BY-SA 2.5 | null | 2010-12-14T07:17:58.210 | 2010-12-14T07:17:58.210 | null | null | 1709 | null |
5468 | 2 | null | 5448 | 6 | null | What is the rationale of applying an exploratory/unsupervised method (PCA or FA with VARIMAX rotation) after having tested a confirmatory model, especially if this is done on the same sample?
In your CFA model, you impose constraints on your pattern matrix, e.g. some items are supposed to load on one factor but not on the others. A large modification index indicates that freeing a parameter or removing an equality constraint could result in better model fit. Item loadings are already available through your model fit.
On the contrary, in PCA or FA there is no such constraint, even following an orthogonal rotation (whose purpose is just to make factors more interpretable, in that items generally tend to load heavily on one factor rather than on several). But it is worth noting that these models are conceptually and mathematically different: the FA model is a measurement model, where we assume that there is some unique error attached to each item; this is not the case under the PCA framework. It is thus not surprising that you failed to replicate your factor structure, which may be an indication of possible item cross-loadings, low item reliability, low stability of your factor structure, or the existence of a higher-order factor structure, all of which may be exacerbated by your low sample size.
In both cases, but especially for CFA, $N=96$ is a very limited sample size. Although some authors have suggested an individuals:items ratio of 5 to 10, it is mainly the number of dimensions that matters. In your case, the estimation of your parameters will be noisy, and in the case of PCA you may expect fluctuations in your estimated loadings (just try the bootstrap to get an idea of 95% CIs).
| null | CC BY-SA 2.5 | null | 2010-12-14T08:01:23.333 | 2010-12-14T09:50:58.897 | 2010-12-14T09:50:58.897 | 930 | 930 | null |
5469 | 2 | null | 5457 | 3 | null | I'm not a statistician, but a lot of my work involves statistics, and I work in health care.
The two things that I spend most of my time doing are:
a) examining the sizes of effects and trends and seeing if they are "real"
b) presenting very large datasets in a simple way so that managers and users of our services can understand them, usually in the form of graphs.
Having said that, I have NEVER successfully explained my job to anyone at a party, so I think I'm in the same boat as you! I love it when people say "that sounds interesting", because it doesn't, not the way I tell it!
| null | CC BY-SA 2.5 | null | 2010-12-14T08:10:11.293 | 2010-12-14T08:10:11.293 | null | null | 199 | null |
5470 | 2 | null | 5465 | 41 | null | Not sure what the job is, but I think "Explain x to a novice" would probably be good:
a) because they will probably need to do this in the job
b) it's a good test of understanding, I reckon.
| null | CC BY-SA 2.5 | null | 2010-12-14T08:12:55.613 | 2010-12-14T08:12:55.613 | null | null | 199 | null |
5471 | 2 | null | 5452 | 10 | null | It is a mathematical trick. We have
\begin{align*}
\log\frac{p_i}{1-p_i}=f(x_i)
\end{align*}
and from this we get
\begin{align*}
\frac{1}{1-p_i}=1+\exp(f(x_i))
\end{align*}
The log likelihood is
\begin{align*}
\sum_{i=1}^n\left[y_i\log(p_i)+(1-y_i)\log(1-p_i)\right]&=\sum_{i=1}^n\left[y_i\log\frac{p_i}{1-p_i}+\log(1-p_i)\right]\\
&=\sum_{i=1}^n\left[y_if(x_i)-\log\frac{1}{1-p_i}\right]\\
&=\sum_{i=1}^n\left[y_if(x_i)-\log(1+\exp(f(x_i)))\right]\\
\end{align*}
Only some terms were rearranged. I hope I made clear how exactly it was done.
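As a quick numerical sanity check of the rearrangement (made-up values, not part of the original derivation):
```
# verify that the two forms of the Bernoulli log likelihood agree
set.seed(1)
f <- rnorm(5)                  # f(x_i)
p <- 1 / (1 + exp(-f))         # so that log(p/(1-p)) = f
y <- rbinom(5, 1, p)
lhs <- sum(y * log(p) + (1 - y) * log(1 - p))
rhs <- sum(y * f - log(1 + exp(f)))
all.equal(lhs, rhs)            # TRUE
```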
| null | CC BY-SA 2.5 | null | 2010-12-14T09:16:36.017 | 2010-12-14T13:20:06.913 | 2010-12-14T13:20:06.913 | 2116 | 2116 | null |
5472 | 2 | null | 5465 | 21 | null | Standard Q where I work is along the lines of:
>
Have a look at this output of a multiple logistic regression from a statistical package you claim to have used (preferably one we use too). XXX is the independent variable of principal interest. How would you interpret the results for a colleague with knowledge of the subject matter but no formal statistical training? (If necessary prompt for separate interpretation of point estimate, CI, p-value).
| null | CC BY-SA 2.5 | null | 2010-12-14T09:17:59.400 | 2010-12-14T14:52:25.680 | 2010-12-14T14:52:25.680 | 919 | 449 | null |
5473 | 2 | null | 5450 | 5 | null | In a regular multiple regression with two quantitative predictor variables, including their interaction just means including their observation-wise product as an additional predictor variable: $Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \beta_3 (X_1 \cdot X_2) = (b_0 + b_2 X_2) + (b_1 + b_3 X_2) X_1$
This typically introduces high multicollinearity since the product will strongly correlate with both original variables. With multicollinearity, individual parameter estimates depend strongly on which other variables are considered - like in your case. As a counter-measure, centering the variables often reduces multicollinearity when the interaction is considered.
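A small simulated illustration of the centering point (the data below are invented purely for illustration):
```
# the raw product X1*X2 is strongly correlated with X1 and X2,
# while the product of the centered variables is not
set.seed(42)
x1 <- rnorm(200, mean = 10)
x2 <- rnorm(200, mean = 10)
cor(x1 * x2, cbind(x1, x2))              # large correlations
x1c <- x1 - mean(x1); x2c <- x2 - mean(x2)
cor(x1c * x2c, cbind(x1c, x2c))          # close to zero
```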
I'm not sure if this directly applies to your case since you seem to have categorical predictors but use the term "regression" instead of "ANOVA". Of course the latter case is essentially the same model, but only after choosing the contrast coding scheme as Ben explained.
| null | CC BY-SA 2.5 | null | 2010-12-14T11:41:45.440 | 2010-12-14T11:41:45.440 | null | null | 1909 | null |
5474 | 2 | null | 5452 | 0 | null | I'm also studying the GBM package!
- mpiktas, I think you forgot a log on the left hand side in the 2nd equation? I assume you substitute p_i with 1/(1+exp(-f(x_i))), but then in the 2nd row above there is log(1/...) = log(1+...), or am I wrong? Anyway, I think you then did it right in the third row...
- Can you tell us what the motivation is for choosing p_i = 1/(1+exp(-f(x_i)))? Where does that come from? The p_i should mirror the proportion of success, i.e. class proportions, right?
Thanks!
peter
| null | CC BY-SA 2.5 | null | 2010-12-14T12:41:25.990 | 2010-12-14T12:41:25.990 | null | null | null | null |
5475 | 2 | null | 5465 | 9 | null | I was asked once how I would explain the relevance of the central limit theorem to a class of freshmen in the social sciences that barely have knowledge about statistics.
| null | CC BY-SA 2.5 | null | 2010-12-14T12:57:48.713 | 2010-12-14T12:57:48.713 | null | null | 1934 | null |
5476 | 2 | null | 5465 | 2 | null | When testing the independence of categorical variables, if some expected cell frequencies are very low (below about 5), then we use Fisher's exact test instead of the chi-square test to assess independence of the variables.
| null | CC BY-SA 3.0 | null | 2010-12-14T13:12:15.970 | 2011-12-24T09:40:02.100 | 2011-12-24T09:40:02.100 | -1 | 5792 | null |
5477 | 2 | null | 5462 | 1 | null | Marketing people are using self-selected samples in online surveys all the time, so they probably have methods for cleaning their data. Sadly, I don't have a good pointer right now for looking up their methods.
| null | CC BY-SA 2.5 | null | 2010-12-14T13:13:52.900 | 2010-12-14T13:13:52.900 | null | null | 1766 | null |
5478 | 2 | null | 5465 | 17 | null | You might also want to reflect on whether the interview is the best medium for measuring the construct of interest.
If you want to measure prior knowledge of probability or statistics, you might be better off relying more on a written test.
You can ask more questions, and thus increase reliability of measurement. It's more standardised both in administration, and in scoring. And once the instrument is developed, it probably uses fewer resources to administer.
You could then use the interview as a more focussed tool looking at factors such as verbal and interpersonal skills.
| null | CC BY-SA 2.5 | null | 2010-12-14T14:01:47.267 | 2010-12-14T14:01:47.267 | null | null | 183 | null |
5479 | 1 | 5505 | null | 5 | 2240 | I am teaching myself DLMs using R's `dlm` package and have two strange results. I am modeling a time series using three combined elements: a trend (`dlmModPoly`), seasonality (`dlmModTrig`), and moving seasonality (`dlmModReg`).
The first strange result is with the `$f` (one-step-ahead forecast) result. Most of this forecast appears to be one month behind the actual data, which I believe I've seen in many examples of one-step-ahead forecasting online and in books. The strange thing is that the moving seasonality is NOT similarly lagged, but hits exactly where it should. Is this normal?
If I use the result's `$m` to manually assemble the components, everything lines up perfectly, so it's weird, though it makes sense in a way: the moving seasonality has exogenous data to help it while the rest of the forecast does not. (Still, it'd be nice to simply `lag` the resulting `$f` and see a nice match.)
More troubling is the difference I see if I change the degree of `dlmModPoly`'s polynomial (from 1 to 2) in an attempt to get a smoother level. This introduces a huge spike in all three components at month 9. The spikes all basically cancel out in the composite, but obviously make each piece, say the level or the seasonality, look rather ridiculous there.
Is this just one of those things that happens and I should be prepared to throw away the result's first year of data as "break-in"? Or is it an indication that something is wrong? (Even in the degree 1 polynomial case, the first year's moving seasonality's level is a bit unsettled, but no huge spike as when I use a degree 2 polynomial.)
Here is my R code:
```
library (dlm)   # needed for dlmModPoly, dlmModTrig, dlmModReg, dlmMLE, dlmFilter

lvl0 <- log (my.data[1])
slp0 <- mean (diff (log (my.data)))
buildPTR2 <- function (x)
{
pm <- dlmModPoly (order=1, dV=exp (x[1]), dW=exp (x[2]), m0=lvl0)
tm <- dlmModTrig (s=12, dV=exp (x[1]), q=2, dW=exp (x[3:4]))
rm <- dlmModReg (moving.season, dV=exp (x[1]))
ptrm <- pm + tm + rm
return (ptrm)
}
mlptr2 <- dlmMLE (log (my.data), rep (1, 6), buildPTR2)
dptr2 <- buildPTR2 (mlptr2$par)
dptrf2 <- dlmFilter (log (my.data), dptr2)
tsdiag (dptrf2)
buildPTR3 <- function (x)
{
pm <- dlmModPoly (order=2, dV=exp (x[1]), dW=c(0, exp (x[2])), m0=c(lvl0, slp0))
tm <- dlmModTrig (s=12, dV=exp (x[1]), q=2, dW=exp (x[3:4]))
rm <- dlmModReg (moving.season, dV=exp (x[1]))
ptrm <- pm + tm + rm
return (ptrm)
}
mlptr3 <- dlmMLE (log (my.data), rep (1, 8), buildPTR3)
dptr3 <- buildPTR3 (mlptr3$par)
dptrf3 <- dlmFilter (log (my.data), dptr3)
```
Per the follow-on question: the data itself is monthly data for 10 years, with each month being the weekly average attendance at a theatrical production. The data definitely has seasonal and moving seasonal effects. I want to model the trend and the seasonal effects to give the management some insight, and to prepare for forecasting. (Which is not directly possible with `dlm` when you include a `dlmModReg` component, though that's the next step.)
(I am trying to use an order=2 polynomial component that I believe creates an IRW trend, which is supposed to be nicely smooth.)
If it matters, my moving seasonality is a yearly Big Bash Gala event that can fall in two different months, and I indicate it with 0 for most months and 1 for months in which the Big Bash falls.
| DLM results looking wonky | CC BY-SA 2.5 | null | 2010-12-14T14:05:31.890 | 2011-09-16T18:35:35.160 | 2010-12-14T20:13:14.577 | 1764 | 1764 | [
"r",
"time-series",
"dlm"
]
|
5480 | 2 | null | 5457 | 2 | null | My attempt at a simple answer that is both applicable across sub-domains and understandable (in gist) to the lay-person: When science develops theories about the world, these theories are compared to real-world data. The role of the statistician is to assess how well one or more competing theories account for the data. This is achieved with mathematics that let the statistician quantify their uncertainty about the conclusions they draw from the comparison of theory to data.
| null | CC BY-SA 2.5 | null | 2010-12-14T14:12:47.083 | 2010-12-14T14:12:47.083 | null | null | 364 | null |
5481 | 2 | null | 5450 | 13 | null | Are you sure the variables have been appropriately expressed? Consider two independent variables $X_1$ and $X_2$. The problem statement asserts that you are getting a good fit in the form
$$Y = \beta_0 + \beta_{12} X_1 X_2 + \epsilon$$
If there is some evidence that the variance of the residuals increases with $Y$, then a better model uses multiplicative error, of which one form is
$$Y = \beta_0 + \left( \beta_{12} X_1 X_2 \right) \delta$$
This can be rewritten
$$\log(Y - \beta_0) = \log(\beta_{12}) + \log(X_1) + \log(X_2) + \log(\delta);$$
that is, if you re-express your variables in the form
$$\eqalign{
\eta =& \log(Y - \beta_0) \cr
\xi_1 =& \log(X_1)\cr
\xi_2 =& \log(X_2)\cr
\zeta =& \log(\delta) \sim N(0, \sigma^2)
}$$
then the model is linear and likely has homoscedastic residuals:
$$\eta = \gamma_0 + \gamma_1 \xi_1 + \gamma_2 \xi_2 + \zeta,$$
and it may just so happen that $\gamma_1$ and $\gamma_2$ are both close to 1.
The value of $\beta_0$ can be discovered through standard methods of [exploratory data analysis](http://www.library.wisc.edu/selectedtocs/bb548.pdf) or, sometimes, is indicated by the nature of the variable. (For instance, it might be a theoretical minimum value attainable by $Y$.)
Alternatively, suppose $\beta_0$ is positive and sizable (within the context of the data) but $\sqrt{\beta_0}$ is inconsequentially small. Then the original fit can be re-expressed as
$$Y = (\theta_1 + X_1) (\theta_2 + X_2) + \epsilon$$
where $\theta_1 \theta_2 = \beta_0$ and both $\theta_1$ and $\theta_2$ are small. Here, the missing cross terms $\theta_1 X_2$ and $\theta_2 X_1$ are presumed small enough to be subsumed within the error term $\epsilon$. Again, assuming a multiplicative error and taking logarithms gives a model with only direct effects and no interaction.
This analysis shows how it is possible--even likely in some applications--to have a model in which the only effects appear to be interactions. This arises when the variables (independent, dependent, or both) are presented to you in an unsuitable form and their logarithms are a more effective target for modeling. The distributions of the variables and of the initial residuals provide the clues needed to determine whether this may be the case: skewed distributions of the variables and heteroscedasticity of the residuals (specifically, having variances roughly proportional to the predicted values) are the indicators.
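A minimal simulation sketch of the re-expression (entirely made-up data and parameter values): $Y$ is generated with only a multiplicative "interaction" and multiplicative error, yet after subtracting $\beta_0$ and taking logs an additive model fits, with both slopes near 1.
```
set.seed(1)
n  <- 500
x1 <- rlnorm(n)                                   # positive, skewed predictors
x2 <- rlnorm(n)
b0 <- 10; b12 <- 2
y  <- b0 + (b12 * x1 * x2) * rlnorm(n, 0, 0.1)    # multiplicative error
fit <- lm(log(y - b0) ~ log(x1) + log(x2))
coef(fit)   # intercept near log(b12); both slopes near 1
```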
| null | CC BY-SA 2.5 | null | 2010-12-14T15:21:58.523 | 2010-12-14T15:21:58.523 | null | null | 919 | null |
5482 | 2 | null | 5347 | 10 | null | (Because this approach is independent of the other solutions posted, including one that I have posted, I'm offering it as a separate response).
You can compute the exact distribution in seconds (or less) provided the sum of the p's is small.
We have already seen suggestions that the distribution might approximately be Gaussian (under some scenarios) or Poisson (under other scenarios). Either way, we know its mean $\mu$ is the sum of the $p_i$ and its variance $\sigma^2$ is the sum of $p_i(1-p_i)$. Therefore the distribution will be concentrated within a few standard deviations of its mean, say $z$ SDs with $z$ between 4 and 6 or thereabouts. Therefore we need only compute the probability that the sum $X$ equals (an integer) $k$ for $k = \mu - z \sigma$ through $k = \mu + z \sigma$. When most of the $p_i$ are small, $\sigma^2$ is approximately equal to (but slightly less than) $\mu$, so to be conservative we can do the computation for $k$ in the interval $[\mu - z \sqrt{\mu}, \mu + z \sqrt{\mu}]$. For example, when the sum of the $p_i$ equals $9$ and choosing $z = 6$ in order to cover the tails well, we would need the computation to cover $k$ in $[9 - 6 \sqrt{9}, 9 + 6 \sqrt{9}]$ = $[0, 27]$, which is just 28 values.
The distribution is computed recursively. Let $f_i$ be the distribution of the sum of the first $i$ of these Bernoulli variables. For any $j$ from $0$ through $i+1$, the sum of the first $i+1$ variables can equal $j$ in two mutually exclusive ways: the sum of the first $i$ variables equals $j$ and the $i+1^\text{st}$ is $0$ or else the sum of the first $i$ variables equals $j-1$ and the $i+1^\text{st}$ is $1$. Therefore
$$f_{i+1}(j) = f_i(j)(1 - p_{i+1}) + f_i(j-1) p_{i+1}.$$
We only need to carry out this computation for integral $j$ in the interval from $\max(0, \mu - z \sqrt{\mu})$ to $\mu + z \sqrt{\mu}.$
When most of the $p_i$ are tiny (but the $1 - p_i$ are still distinguishable from $1$ with reasonable precision), this approach is not plagued by the huge accumulation of floating point roundoff errors that afflicted the solution I previously posted. Therefore, extended-precision computation is not required. For example, a double-precision calculation for an array of $2^{16}$ probabilities $p_i = 1/(i+1)$ ($\mu = 10.6676$, requiring calculations for probabilities of sums between $0$ and $31$) took 0.1 seconds with Mathematica 8 and 1-2 seconds with Excel 2002 (both obtained the same answers). Repeating it with quadruple precision (in Mathematica) took about 2 seconds but did not change any answer by more than $3 \times 10^{-15}$. Terminating the distribution at $z = 6$ SDs into the upper tail lost only $3.6 \times 10^{-8}$ of the total probability.
Another calculation for an array of 40,000 double precision random values between 0 and 0.001 ($\mu = 19.9093$) took 0.08 seconds with Mathematica.
This algorithm is parallelizable. Just break the set of $p_i$ into disjoint subsets of approximately equal size, one per processor. Compute the distribution for each subset, then convolve the results (using FFT if you like, although this speedup is probably unnecessary) to obtain the full answer. This makes it practical to use even when $\mu$ gets large, when you need to look far out into the tails ($z$ large), and/or $n$ is large.
The timing for an array of $n$ variables with $m$ processors scales as $O(n(\mu + z \sqrt{\mu})/m)$. Mathematica's speed is on the order of a million per second. For example, with $m = 1$ processor, $n = 20000$ variates, a total probability of $\mu = 100$, and going out to $z = 6$ standard deviations into the upper tail, $n(\mu + z \sqrt{\mu})/m = 3.2$ million: figure a couple seconds of computing time. If you compile this you might speed up the performance two orders of magnitude.
Incidentally, in these test cases, graphs of the distribution clearly showed some positive skewness: they aren't normal.
For the record, here is a Mathematica solution:
```
pb[p_, z_] := Module[
{\[Mu] = Total[p]},
Fold[#1 - #2 Differences[Prepend[#1, 0]] &,
Prepend[ConstantArray[0, Ceiling[\[Mu] + Sqrt[\[Mu]] z]], 1], p]
]
```
(NB The color coding applied by this site is meaningless for Mathematica code. In particular, the gray stuff is not comments: it's where all the work is done!)
An example of its use is
```
pb[RandomReal[{0, 0.001}, 40000], 8]
```
---
### Edit
An `R` solution is ten times slower than Mathematica in this test case--perhaps I have not coded it optimally--but it still executes quickly (about one second):
```
pb <- function(p, z) {
mu <- sum(p)
x <- c(1, rep(0, ceiling(mu + sqrt(mu) * z)))
f <- function(v) {x <<- x - v * diff(c(0, x));}
sapply(p, f); x
}
y <- pb(runif(40000, 0, 0.001), 8)
plot(y)
```

| null | CC BY-SA 3.0 | null | 2010-12-14T16:46:07.687 | 2012-11-12T23:15:33.837 | 2012-11-12T23:15:33.837 | 919 | 919 | null |
5483 | 1 | 5485 | null | 3 | 164 | Let's say, for example, that a class of students and their grades are the data set.
Let's say there are around 35 students.
Here is what you know:
```
Your mark
The class average
The median mark
The standard deviation
```
Are there any other conclusions about this data that can be made given this information?
Thanks
| What else can be deduced from the following class grade summary information? | CC BY-SA 2.5 | null | 2010-12-14T17:33:16.430 | 2010-12-15T04:35:56.323 | 2010-12-15T04:35:56.323 | 183 | 2380 | [
"r",
"distributions",
"dataset"
]
|
5484 | 2 | null | 5457 | 2 | null | A statistician tells you what conclusions can be reached from a data set and, maybe even more important, what conclusions cannot be reached.
| null | CC BY-SA 2.5 | null | 2010-12-14T17:34:59.467 | 2010-12-14T17:34:59.467 | null | null | 666 | null |
5485 | 2 | null | 5483 | 4 | null | You can probably also compute upper bounds on the number of students who did better or worse than you did. The population version would be via [Chebyshev's inequality](http://en.wikipedia.org/wiki/Chebyshev%27s_inequality).
For example, if $X_{me}$ is 'my' score, $s^2$ is the sample variance, $\bar{X}$ is the sample mean, and $X_i$ are the class scores, including 'mine', we have
$$s^2 = \frac{1}{n-1}\sum_{i} (X_i - \bar{X})^2$$
Now partition the sum into parts which are more extreme than my score and those which are less extreme:
$$(n-1) s^2 = \sum_{i \in I} (X_i - \bar{X})^2 + \sum_{i \not\in I}(X_i - \bar{X})^2$$,
where $I$ is the set of indices $i$ such that $|X_i - \bar{X}| \ge |X_{me} - \bar{X}|$. Now bound the summands in the right sum by zero from below, and bound those in the left sum by the condition defining $I$:
$$(n-1) s^2 \ge \sum_{i \in I} (X_{me} - \bar{X})^2 + \sum_{i \not\in I}0 = |I| (X_{me} - \bar{X})^2$$
And thus you have
$$|I| \le (n-1)s^2 / (X_{me} - \bar{X})^2$$
Now $I$ includes all those in the class that did better than I did, if I beat the mean, and thus you have an upper bound on that number in that case.
Using the sample median and a sample version of [Gauss' Inequality](http://en.wikipedia.org/wiki/Gauss%27s_inequality), you may be able to prove a tighter bound, but with some more assumptions on the distribution, and the proof is not as clear.
(n.b. This is, I believe, essentially how Chebyshev's inequality is proved.)
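A tiny worked example of the bound, with made-up numbers (class of $n = 35$, mean 68, SD 12, my mark 85):
```
n <- 35; xbar <- 68; s <- 12; x_me <- 85
bound <- (n - 1) * s^2 / (x_me - xbar)^2
floor(bound)   # = 16 here: at most 16 scores (mine included) are that far from the mean
```
Since everyone who scored above 85 is at least that far above the mean, at most 15 other students could have done better.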
| null | CC BY-SA 2.5 | null | 2010-12-14T17:40:38.690 | 2010-12-14T22:52:58.770 | 2010-12-14T22:52:58.770 | 795 | 795 | null |
5487 | 1 | null | null | 3 | 2401 | I need to calculate the sample size required for an observational study in which incidence of disease is 19-29%. Population affected is 600,000 people.
The study has two samples similar on baseline characteristics treated
with two different drugs.
Statistical analysis will be chi-square and Fisher's test.
I have to demonstrate non-inferiority of one drug with respect to the other one.
I don't have any other info.
Can anyone help me to calculate the sample sizes required in order to obtain power of 0.80
with an alpha 0.05?
Thanks a lot in advance
| Sample size calculation for study aiming to demonstrate non-inferiority of a drug | CC BY-SA 2.5 | null | 2010-12-14T18:30:49.127 | 2012-05-28T16:05:36.517 | 2010-12-15T04:32:31.740 | 183 | null | [
"sample-size",
"epidemiology"
]
|
5489 | 2 | null | 5453 | 4 | null | As per whuber's suggestion, I am posting the summary of mistakes in a separate answer.
- The distribution of $t$ is not stated. If $t$ is not a random variable, then the mean of $r$ is $\sin t$, and hence the unconditional variance is $V$, not $\frac{1}{2}+V$.
- If we assume that $t$ is uniformly distributed on the interval $[0,2\pi]$ and independent of $\varepsilon$, as per shabbychef's suggestion, then the unconditional variance is $\frac{1}{2}+V$. But then the distribution of $r$ is certainly not normal. So the product of $p(v|r)$ and $p(r)$ will not be Gaussian, and the following variance calculations are incorrect.
| null | CC BY-SA 2.5 | null | 2010-12-14T20:04:37.797 | 2010-12-14T20:04:37.797 | null | null | 2116 | null |
5490 | 1 | null | null | 2 | 893 | Assume we have a sensor field with dimension M*M. In order to apply any data compression technique, first I want to know what is the compression limit or minimum entropy of the entire sensor field. How could I compute the minimum entropy or compression limit for the sensor field?
Or, put differently: I actually want the theoretical compression limit. Let's pose the problem for an image. I want to know whether there are any mathematical methods to calculate the theoretical compression limit. Please let me know or suggest any readings to help formulate the problem.
Thanks
| How to compute theoretical compression limit? | CC BY-SA 2.5 | null | 2010-12-14T20:08:27.367 | 2010-12-14T21:36:25.123 | 2010-12-14T21:36:25.123 | null | 2384 | [
"entropy",
"compression"
]
|
5491 | 2 | null | 4991 | 5 | null | If you have time varying parameters and want to do things sequentially (filtering), then SMC makes the most sense. MCMC is better when you want to condition on all of the data, or you have unknown static parameters that you want to estimate. Particle filters have issues with static parameters (degeneracy).
| null | CC BY-SA 2.5 | null | 2010-12-14T20:29:05.030 | 2010-12-14T20:29:05.030 | null | null | 643 | null |
5492 | 2 | null | 5490 | 2 | null | The "entropy" is defined only within the [context of a probabilistic model](http://en.wikipedia.org/wiki/Entropy_%28information_theory%29#Data_compression) for the data. If you characterize the image as a set of $M^2$ distinct "characters" and assume the frequencies of those characters adequately reflect their probabilities, then you need only apply the formula
Entropy = Sum (over all characters $c$) of [-log(probability of $c$) * probability of $c$].
A standard (but by no means the only) estimate of the probability of a character in a set of $N = M^2$ characters is
Estimated probability of $c$ = (Number of occurrences of $c$) / $N$.
| null | CC BY-SA 2.5 | null | 2010-12-14T20:33:56.257 | 2010-12-14T20:33:56.257 | null | null | 919 | null |
5493 | 2 | null | 5487 | 5 | null | [G*Power](http://www.psycho.uni-duesseldorf.de/aap/projects/gpower/) is a commonly-recommended program for sample size calculations. I've only dabbled with it a couple of times in the past, but it's more than capable of handling the situation you describe.
| null | CC BY-SA 2.5 | null | 2010-12-14T20:42:14.083 | 2010-12-14T20:42:14.083 | null | null | 71 | null |
5494 | 2 | null | 5115 | 7 | null | [Leland Wilkinson](http://www.cs.uic.edu/~wilkinson/) for his contribution to statistical graphics.
| null | CC BY-SA 2.5 | null | 2010-12-14T21:11:31.903 | 2010-12-14T21:11:31.903 | null | null | 609 | null |
5495 | 2 | null | 5434 | 3 | null | If you can't get satisfaction with R you can fit this model and more complicated
ones with AD Model Builder, which is free software available at [http://admb-project.org](http://admb-project.org). ADMB permits you to model the overdispersion in a variety of ways,
rather than being confined to the GLM paradigm. I can advise you if you are interested.
| null | CC BY-SA 2.5 | null | 2010-12-14T22:23:25.407 | 2010-12-14T22:23:25.407 | null | null | 1585 | null |
5496 | 2 | null | 5457 | 9 | null | A statistician is a numerical detective, uncovering the stories hidden in a mass of data.
| null | CC BY-SA 2.5 | null | 2010-12-14T23:13:34.167 | 2010-12-14T23:13:34.167 | null | null | 159 | null |
5497 | 2 | null | 5457 | 3 | null | The TV show Numb3rs is useful as many people have seen it. I tell them that I'm like the guys on Numb3rs except I deal with solving business problems rather than crimes. (Substitute "business problems" for whatever field you work in.) That usually gets the response "Wow, cool!" which is better than what I used to get.
| null | CC BY-SA 2.5 | null | 2010-12-14T23:17:09.983 | 2010-12-14T23:17:09.983 | null | null | 159 | null |
5498 | 2 | null | 5479 | 6 | null | So you have monthly data with trend and seasonality and you want to both analyse the trend/seasonal components and produce forecasts. These are two separate tasks. While you can do both with `dlm`, there are simpler approaches if you separate the tasks.
For studying the trend and seasonality, I suggest using STL via the `stl()` function in R. It is robust and has nice graphics that are easy to explain to non-statisticians.
For forecasting, I would use a seasonal ARIMA model. To start with, ignore the Big Bash Gala event, and try fitting a seasonal ARIMA model using auto.arima() from the `forecast` package. The automatic seasonal differencing selection is not very good, so I would use
```
fit1 <- auto.arima(x,D=1)
```
where `x` is your data.
Then, to add in the BBG effect, set up a dummy regression variable (`z`) taking value 1 when BBG occurs in the month and 0 otherwise. Add that in to your model using
```
fit2 <- auto.arima(x,xreg=z,D=1)
```
Produce forecasts from each model using the `forecast()` function. For `fit2`, you will need future values of `z`.
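For instance (the horizon and the assumed future BBG months below are purely illustrative):
```
# future values of the dummy: 0 except in the months where the BBG is expected
znew <- rep(0, 24)
znew[c(9, 21)] <- 1
fc1 <- forecast(fit1, h = 24)
fc2 <- forecast(fit2, xreg = znew, h = 24)
plot(fc2)
```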
Even if you want to stick with `dlm`, the above approaches will provide useful comparative benchmarks.
| null | CC BY-SA 2.5 | null | 2010-12-14T23:41:45.517 | 2010-12-14T23:41:45.517 | null | null | 159 | null |
5499 | 2 | null | 5483 | 4 | null | Since the data set is grades, you probably also know the minimum and maximum possible scores, e.g. 0 and 100. Given the median and first two moments, you can fit a variety of models to the sample statistics, say a scaled beta distribution. This would give you tighter estimates than Chebyshev's inequality, although you'd be making some serious regularity and smoothness assumptions. (The nifty thing with a beta distribution is that the mean and variance completely define it, so you could use the median as a validation statistic.) Sounds like fun!
| null | CC BY-SA 2.5 | null | 2010-12-15T03:00:22.257 | 2010-12-15T03:00:22.257 | null | null | 5792 | null |
5500 | 2 | null | 5461 | 3 | null | From what I can tell, your problems seem to be:
1) Smooth the time series data to remove correlated fluctuations.
2) Identify which of the inputs differs, using the smoothed data.
You seem to be worried about not being able to solve (2) once you solve (1). But let's solve (1) first and then worry about (2), right?
Here's one idea. You say you're sampling in a round robin fashion and you have 16 inputs. So maybe treat each 16 draws as one "round", sum up all of the values in each round, and divide each value in the 16-draw round by that sum to normalize.
It seems that this would work if your time series data is correlated on a longer time scale than 16 data points. If the data is correlated on a much longer time scale you could even normalize in larger chunks, like 160 or 1,600 data points, to maximize noise reduction (and comp. efficiency).
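A rough R sketch of the block normalization idea, assuming the samples are in a vector `x` in acquisition order and each consecutive block of 16 values is one round-robin pass (the names and block size are illustrative):
```
# divide each value by the total of its 16-sample round
block  <- rep(seq_len(length(x) / 16), each = 16)
x_norm <- ave(x, block, FUN = function(v) v / sum(v))
```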
| null | CC BY-SA 2.5 | null | 2010-12-15T03:04:59.567 | 2010-12-15T03:04:59.567 | null | null | 2073 | null |
5501 | 2 | null | 5462 | 7 | null | Your best bet is Mechanical Turk, [https://www.mturk.com/mturk/welcome](https://www.mturk.com/mturk/welcome). It will cost you some money, but not much. If your questions are short and can be done in ~ 1 min you can easily pay 10 cents or so per answer, so if you want 50 responses it will cost only $5. You can get the data formatted in .xml, which you can import into R or even (shudder) Excel. The people you sample will be much more varied than you'd get in a high school class. And you can ask for demographic info (age, gender, nationality, etc.) as questions in your survey. Also, making the survey is easy: just adjust Amazon's templates.
See [http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1601785](http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1601785) for more issues with the biases you mention, the WEIRD study. Also see why people at mech turk participate: [http://behind-the-enemy-lines.blogspot.com/2008/09/why-people-participate-on-mechanical.html](http://behind-the-enemy-lines.blogspot.com/2008/09/why-people-participate-on-mechanical.html).
| null | CC BY-SA 2.5 | null | 2010-12-15T03:22:35.440 | 2010-12-15T03:22:35.440 | null | null | 2073 | null |
5502 | 1 | 5521 | null | 13 | 5236 | Is there a principled way to estimate factor scores when you have ordinal, discrete variables?
I have $n$ ordinal, discrete variables. If I make the assumption that underlying each response is a continuous, normally distributed variable, then I can calculate an $n\times n$ polychoric correlation matrix. I can then run a factor analysis on this matrix and get factor loadings for each variable.
How can I combine the factor loadings and the variables to estimate the factor scores? The typical ways to estimate scores would appear to require that I treat the ordinal data as interval.
I suppose I might need to dig deeper into the guts of polychoric correlation to figure out a link function.
| Factor scores from discrete, ordinal responses | CC BY-SA 2.5 | null | 2010-12-15T04:04:07.567 | 2012-03-15T12:13:35.497 | 2010-12-16T05:25:16.043 | 82 | 82 | [
"factor-analysis",
"ordinal-data"
]
|
5503 | 1 | 5522 | null | 4 | 357 | I think, I read this quote some where:
>
For every field "x" there exists a field "computational x"
Has anyone else read this, or does anyone remember reading anything close to this?
If I remember correctly, it was by Dr. Jan de Leeuw.
Can anyone please tell me if my memory fails me here? (I could not find any link after a lot of googling.)
| Is there a quote like this from some statistician? | CC BY-SA 2.5 | null | 2010-12-15T04:40:11.687 | 2010-12-15T19:22:15.603 | null | null | 1307 | [
"reproducible-research"
]
|
5504 | 1 | 5516 | null | 4 | 3657 | The annual returns on stocks and treasury bonds over the next 12 months are uncertain. Suppose that these returns can be described by normal distributions with stocks having a mean of 15% and a standard deviation of 20%, and bonds having a mean of 6% and a standard deviation of 9%. Which asset is more likely to have a negative return?
| Normal distribution probability | CC BY-SA 2.5 | null | 2010-12-15T06:00:52.117 | 2010-12-15T17:16:05.057 | null | null | 2385 | [
"self-study"
]
|
5505 | 2 | null | 5479 | 0 | null | Difficult to diagnose without looking at the results, but here is a wild guess: if in the fit the variance associated with the level component is large, that component will "follow" your data. The filtered estimates will nearly coincide with the observations, and the forecasts will appear to lag them --which, as I understand, is what you observe--.
I have no explanation for what you mention in your third paragraph.
I would think that the behaviour you note in the fourth paragraph is not so abnormal: you have monthly data and the Kalman filter has to settle down. For a transient period (at least a year) you may look at a wandering state.
My understanding is that the local linear trend (dlmModPoly(order=2,...)) need
not give a smoother fit than the local level model (dlmModPoly(order=1,...)). It all depends on the values of the fitted variances of the level and slope components of the state.
| null | CC BY-SA 2.5 | null | 2010-12-15T06:14:58.613 | 2010-12-15T06:14:58.613 | null | null | 892 | null |
5506 | 2 | null | 5504 | 6 | null | Here is the R code to quickly solve this:
```
> pnorm(0, mean=15, sd=20)
[1] 0.2266274
> pnorm(0, mean=6, sd=9)
[1] 0.2524925
```
So bonds will be more likely to have a negative return
| null | CC BY-SA 2.5 | null | 2010-12-15T06:37:51.053 | 2010-12-15T06:37:51.053 | null | null | 2144 | null |
5507 | 2 | null | 5504 | 5 | null | Stock: P( (X-15)/20 < (0-15)/20 ) = P(Z<-3/4) = .2266,
Bond: P( (Y-6)/9 < (0-6)/9 ) = P(Z<-2/3) = .2525
More likely the bond.
Note that the price is log-normally distributed.
| null | CC BY-SA 2.5 | null | 2010-12-15T07:24:41.860 | 2010-12-15T07:24:41.860 | null | null | 2387 | null |
5508 | 1 | 5540 | null | 3 | 336 | I'm using multinomial probit to estimate some parameters, and I keep seeing references to the fact that MNP was considered computationally "intractable" relative to binomial probit up until the early 21st century. The question is: why? I get that adding variables makes things take longer (I've got a background in CS), but for the life of me I can't see why estimation should be any worse than, say, $O(n^3)$ in the "nomial-ness" of the model. That is, when you add a new choice, you can update your simulations based on transformations of the joint error terms, and from there it's binomial probit with a couple of indicators thrown in. Is there something deeper going on in the background that I'm not taking into consideration?
Many thanks,
Kyle
| Computational considerations of multinomial probit versus binomial probit | CC BY-SA 2.5 | null | 2010-12-15T07:57:00.543 | 2010-12-15T23:46:54.900 | null | null | 53 | [
"maximum-likelihood",
"multinomial-distribution"
]
|
5509 | 2 | null | 5503 | 5 | null | Maybe you are after this talk?
>
Tutorial: Methods for Reproducible
Research, by Roger D. Peng (slide
3)
Also, papers on Reproducible research written by de Leeuw that I am aware of are [Reproducible Research: the Bottom Line](http://preprints.stat.ucla.edu/301/301.pdf), and [Statistical Software -- Overview](http://preprints.stat.ucla.edu/570/statsoft.pdf). But a quick check didn't reveal any citation like the one you show.
| null | CC BY-SA 2.5 | null | 2010-12-15T08:05:29.210 | 2010-12-15T17:44:09.527 | 2010-12-15T17:44:09.527 | 919 | 930 | null |
5510 | 2 | null | 3313 | 0 | null | did you convert the aforementioned library to R? I need to convert the same library into R and wanted to ask if you might share your results?
Thanks
| null | CC BY-SA 2.5 | null | 2010-12-15T08:30:42.657 | 2010-12-15T08:30:42.657 | null | null | null | null |
5511 | 2 | null | 5115 | 13 | null | [W. Edwards Deming](http://en.wikipedia.org/wiki/W._Edwards_Deming) for promoting statistical process control
| null | CC BY-SA 2.5 | null | 2010-12-15T08:53:41.307 | 2010-12-15T08:53:41.307 | null | null | 74 | null |
5512 | 2 | null | 5114 | 1 | null | You could also try the triangular distribution. To fit this, you basically specify a lower bound (this would be X=2), an upper bound (this would be X=8), and a "most likely" value. The Wikipedia page [http://en.wikipedia.org/wiki/Triangular_distribution](http://en.wikipedia.org/wiki/Triangular_distribution) has more info on this distribution. If there is not much faith in the "most likely" value (as it appears to be, prior to observing any data), it may be a good idea to place a non-informative prior distribution on it, and then use the two data points to estimate this value. One good one is the Jeffreys prior, which for this problem would be p(c) = 1/(pi*sqrt((c-2)*(8-c))), where "c" is the "most likely value" (consistent with the Wikipedia notation).
Given this prior, you can work out the posterior distribution of c analytically, or via simulation. The analytic form of the likelihood is not particularly nice, so simulation seems to be more attractive. This example is particularly well suited to rejection sampling (see the wiki page for a general description of rejection sampling), because the maximised likelihood is 1/3^n regardless of the value of c, which provides the "upper bound". So you generate a "candidate" from the Jeffreys prior (call it c_i), and then evaluate the likelihood at this candidate L(x1,..,xn|c_i), and divide by the maximised likelihood, to give (3^n)*L(x1,..,xn|c_i). You then generate a U(0,1) random variable, and if u is less than (3^n)*L(x1,..,xn|c_i), then accept c_i as a posterior sampled value, otherwise throw away c_i and start again. Repeat this process until you have enough accepted samples (100, 500, 1,000, or more depending on how accurate you want to be). Then just take the sample average of whatever function of c you are interested in (the likelihood of a new observation is an obvious candidate for your application).
An alternative to accept-reject is to use the value of the likelihood as a weight (and not generate u), and then proceed by taking weighted averages using all candidates, rather than unweighted averages with the accepted candidates.
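A minimal R sketch of the accept/reject scheme described above (the two observations in `x` are hypothetical, and the triangular density is written out by hand rather than taken from a package):
```
dtri <- function(x, a = 2, b = 8, c) {            # triangular density on [a, b] with mode c
  ifelse(x < c, 2 * (x - a) / ((b - a) * (c - a)),
                2 * (b - x) / ((b - a) * (b - c)))
}
x <- c(3, 7)                                      # made-up observations
n <- length(x)
draws <- numeric(0)
set.seed(1)
while (length(draws) < 1000) {
  ci <- 2 + 6 * sin(pi * runif(1) / 2)^2          # candidate from the arcsine prior on (2, 8)
  L  <- prod(dtri(x, c = ci))                     # likelihood at the candidate
  if (runif(1) < L * 3^n) draws <- c(draws, ci)   # accept with probability L / (1/3)^n
}
hist(draws, main = "posterior draws of c")
```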
| null | CC BY-SA 2.5 | null | 2010-12-15T12:27:20.673 | 2010-12-15T12:27:20.673 | null | null | 2392 | null |
5513 | 2 | null | 1906 | 1 | null | Predictive Analytics World: [pawcon.com](http://www.predictiveanalyticsworld.com/).
| null | CC BY-SA 3.0 | null | 2010-12-15T12:47:51.133 | 2011-09-20T21:34:25.253 | 2011-09-20T21:34:25.253 | 930 | null | null |
5514 | 1 | null | null | 4 | 1451 | Could anyone kindly provide an explanation (mathematically or non-mathematically) about the non-existence of the intercept term in [conditional logistic regression](http://www.ats.ucla.edu/stat/sas/library/logistic.pdf)? Is the interpretation of the coefficients similar to that of (unconditional) [logistic regression](http://en.wikipedia.org/wiki/Logistic_regression)?
Thank you for your help.
| Queries on conditional logistic regression | CC BY-SA 2.5 | null | 2010-12-15T14:18:11.087 | 2013-09-03T09:53:10.170 | 2013-09-03T09:53:10.170 | 21599 | null | [
"logistic",
"survival",
"epidemiology",
"clogit"
]
|
5515 | 2 | null | 5502 | 8 | null | It's commonplace to extract factor scores from ordinal-variable indicators. Researchers using likert measures do it all the time. Because factor scores are based on covariance, it's usually not that big a deal that the "intervals" might not be uniform within and across items, particularly if the items are comparable & use reasonably-compact scales (e.g., 5 or 7 pt "agree/disagree" likert items): all the subjects are responding to the same items, and if the items are indeed valid measures of some latent variable, the responses should display a uniform covariance pattern. See Gorsuch, R. L. (1983). Factor Analysis. Hillsdale, NJ: Lawrence Erlbaum. 2nd. ed., pp. 119-20. But if it bothers you to assume the responses for your ordinal variables are linear -- or even more important, if you want factor scores that aren't linear but reflect recurring nonlinear associations among categorical items (as you would be doing if your variables were nominal or qualitative) -- you should use a nonlinear scaling alternative to conventional factor analysis, such as latent class analysis or item response theory. (There is of course a family resemblance between this query and your query on use of ordinal predictors in logit regression models; maybe I can once again inspire chi or someone else who knows more than I to treat us to an even more fine-grained account of why you needn't worry -- or maybe why you should.)
| null | CC BY-SA 2.5 | null | 2010-12-15T14:54:08.093 | 2010-12-15T14:54:08.093 | null | null | 11954 | null |
5516 | 2 | null | 5504 | 8 | null | The objective of this exercise is to help you develop your ability to reason with probability distributions. You would like to get to the point where your reflex is to think through such problems this way:
"A negative return is anything less than 0%.
"For stocks, 0% is three-quarters of a standard deviation (i.e., three quarters of 20%) less than the mean (15%). Therefore we can think of the chance of a negative return as the area to the left of -3/4 under a standard bell curve.
"For bonds, 0% is only two-thirds of a standard deviation less than its mean (0% - 6% = -2/3 of 9%). The chance of a negative return is represented by the area to the left of -2/3 under the same bell curve.
"Because negative two-thirds (bonds) is greater than negative three-quarters (stocks), negative returns occupy more of the left tail of the bond return distribution. That makes a negative bond return more likely."
The whole point is to translate information about means and standard deviations into areas under a distribution function.
Notice that the only calculations required in this case are simple ones; with textbook problems like this, you can do them in your head. In "real world" problems you can usually still think a problem through by approximating the calculations. This gives you the ability to think on your feet, which is one key to mastering any subject.
| null | CC BY-SA 2.5 | null | 2010-12-15T15:14:55.810 | 2010-12-15T17:16:05.057 | 2010-12-15T17:16:05.057 | 919 | 919 | null |
5517 | 1 | 9680 | null | 12 | 6628 | The library languageR provides a method (pvals.fnc) to do MCMC significance testing of the fixed effects in a mixed effect regression model fit using lmer. However, pvals.fnc gives an error when the lmer model includes random slopes.
Is there a way to do an MCMC hypothesis test of such models?
If so, how? (To be accepted an answer should have a worked example in R)
If not, is there a conceptual/computational reason why there is no way?
This question might be related to [this one](https://stats.stackexchange.com/questions/152/is-there-a-standard-method-to-deal-with-label-switching-problem-in-mcmc-estimatio) but I didn't understand the content there well enough to be certain.
Edit 1: A proof of concept showing that pvals.fnc() still does 'something' with lme4 models, but that it doesn't do anything with random slope models.
```
library(lme4)
library(languageR)
#the example from pvals.fnc
data(primingHeid)
# remove extreme outliers
primingHeid = primingHeid[primingHeid$RT < 7.1,]
# fit mixed-effects model
primingHeid.lmer = lmer(RT ~ RTtoPrime * ResponseToPrime + Condition + (1|Subject) + (1|Word), data = primingHeid)
mcmc = pvals.fnc(primingHeid.lmer, nsim=10000, withMCMC=TRUE)
#Subjects are in both conditions...
table(primingHeid$Subject,primingHeid$Condition)
#So I can fit a model that has a random slope of condition by participant
primingHeid.lmer.rs = lmer(RT ~ RTtoPrime * ResponseToPrime + Condition + (1+Condition|Subject) + (1|Word), data = primingHeid)
#However pvals.fnc fails here...
mcmc.rs = pvals.fnc(primingHeid.lmer.rs)
```
It says: Error in pvals.fnc(primingHeid.lmer.rs) :
MCMC sampling is not yet implemented in lme4_0.999375
for models with random correlation parameters
Additional question: Is pvals.fnc performing as expected for random intercept model? Should the outputs be trusted?
| How can one do an MCMC hypothesis test on a mixed effect regression model with random slopes? | CC BY-SA 2.5 | null | 2010-12-15T16:18:49.643 | 2013-08-24T15:05:00.227 | 2017-04-13T12:44:52.277 | -1 | 196 | [
"r",
"mixed-model",
"statistical-significance",
"monte-carlo"
]
|
5518 | 2 | null | 1906 | 2 | null | SIAM's [Data Mining Conference](http://www.siam.org/meetings/sdm11/), SDM11.
| null | CC BY-SA 2.5 | null | 2010-12-15T17:28:47.470 | 2010-12-15T17:28:47.470 | null | null | 795 | null |
5519 | 2 | null | 5514 | 3 | null | Conditional logistic regression compares cases to controls. The coefficients multiply differences in factor values between cases and controls. The intercept terms cancel in the likelihood and therefore play no identifiable role in the model. See the section on "Conditional Logistic Regression" in [Ying So's tutorial for SAS](http://www.ats.ucla.edu/stat/sas/library/logistic.pdf).
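To make the cancellation concrete, consider a 1:1 matched design in which stratum $s$ has a case with covariates $x_{s1}$ and a control with covariates $x_{s0}$, and each stratum has its own intercept $\alpha_s$. The conditional likelihood contribution of the stratum is
$$L_s(\beta) = \frac{\exp(\alpha_s + x_{s1}\beta)}{\exp(\alpha_s + x_{s1}\beta) + \exp(\alpha_s + x_{s0}\beta)} = \frac{\exp(x_{s1}\beta)}{\exp(x_{s1}\beta) + \exp(x_{s0}\beta)},$$
so the $\alpha_s$ (and any common intercept) drop out, while $\beta$ keeps its usual log odds ratio interpretation for within-stratum differences in the covariates.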
| null | CC BY-SA 2.5 | null | 2010-12-15T17:31:04.437 | 2010-12-15T17:31:04.437 | null | null | 919 | null |
5520 | 1 | 5568 | null | 7 | 1598 | I am wondering if there is any reasonably simple way of calculating the following problem:
Drawing, with replacement, $n$ balls from a bin of $N$ different colored balls, with a known probability of drawing each color of ball, what is the expected number of "unique" balls, i.e., balls with no other ball of the same color?
e.g.
$P(red) = 0.25$
$P(blue) = 0.3$
$P(green) = 0.2$
$P(yellow) = 0.25$
Some example outcomes with 5 balls:
$\{red, red, green, blue, yellow\}$ - 3 unique balls
$\{red, red, green, green, blue\}$ - 1 unique ball
$\{blue, blue, blue, yellow, yellow\}$ - 0 unique balls
Or, with 3 balls:
$\{red, green, blue\}$ - 3 unique
$\{red, red, blue\}$ - 1 unique
$\{red, red, red\}$ - 0 unique
For 1 ball, it's trivially 1; for 2 balls, it's (1 minus the probability that the two balls are the same color) times 2 balls; after that it starts getting more complicated.
| Expected number of uniques in a non-uniformly distributed population | CC BY-SA 2.5 | null | 2010-12-15T18:08:15.290 | 2010-12-21T15:01:39.427 | 2010-12-16T22:12:01.823 | 919 | 2395 | [
"expected-value",
"multinomial-distribution"
]
|
5521 | 2 | null | 5502 | 8 | null | The 'principled' approach (that is to say the a priori defensible approach that may not empirically make much difference) is to use a graded response model, a rather useful member of the IRT family often used for Likert type items. The R package ltm makes this very straightforward.
You're then assuming there is an ordinal logistic regression relationship between the unobserved trait and each of your indicators. Choosing this model class allows you to take the ordinal nature of the indicators seriously and provides information about what part of the trait each item is most informative about. Like factor analysis, it gives you a standard error for the score, although FA people seem to ignore these for some reason.
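A minimal sketch of the workflow, assuming the ordinal responses sit in a data frame `items` (one column per item; `grm()` and `factor.scores()` are from the ltm package):
```
library(ltm)
fit <- grm(items)            # graded response model for the polytomous items
summary(fit)                 # discrimination and threshold parameters per item
fs  <- factor.scores(fit)    # person-level trait estimates with standard errors
fs
```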
On the other hand, choosing this model class limits your ability to do all the classic factor analysis stuff like rotating things until you like the look of them. I think this is a plus, but reasonable people disagree. If you're doing that sort of thing to find out how many 'scales' you have, you'll want to look at the Mokken procedures that try to identify scales, since the FA approach of 'fit another dimension and rotate to simple structure' won't work.
| null | CC BY-SA 2.5 | null | 2010-12-15T19:05:19.570 | 2010-12-15T19:05:19.570 | null | null | 1739 | null |
5522 | 2 | null | 5503 | 4 | null | Well here's one place de Leeuw says it: [http://preprints.stat.ucla.edu/491/useR.pdf](http://preprints.stat.ucla.edu/491/useR.pdf)
It might also be found in a more formal document, but nothing in my collection...
| null | CC BY-SA 2.5 | null | 2010-12-15T19:22:15.603 | 2010-12-15T19:22:15.603 | null | null | 1739 | null |
5524 | 2 | null | 5465 | 5 | null | For an observational data context:
Consider this regression model applied to this substantive problem. What, if anything, in it can be interpreted causally? [Further probe] What would you need to learn to change your opinion?
| null | CC BY-SA 2.5 | null | 2010-12-15T19:33:16.230 | 2010-12-15T19:33:16.230 | null | null | 1739 | null |
5525 | 1 | 5526 | null | 5 | 25594 | Is there a way to get the number of parameters of a linear model like that?
```
model <- lm(Y~X1+X2)
```
I would like to get the number 3 somehow (intercept + X1 + X2). I looked for something like this in the structures that `lm`, `summary(model)` and `anova(model)` return, but I didn't figure it out. In case I don't get an answer, I'll stick with
```
dim(model.matrix(model))[2]
```
Thank you
| Get the number of parameters of a linear model | CC BY-SA 2.5 | null | 2010-12-15T19:41:44.733 | 2016-05-02T22:14:57.473 | null | null | 632 | [
"r",
"regression"
]
|
5526 | 2 | null | 5525 | 11 | null | Try something like:
```
> x <- replicate(2, rnorm(100))
> y <- 1.2*x[,1]+rnorm(100)
> summary(lm.fit <- lm(y~x))
> length(lm.fit$coefficients)
[1] 3
> # or
> length(coef(lm.fit))
[1] 3
```
You can have a better idea of what an R object includes with
```
> str(lm.fit)
```
| null | CC BY-SA 2.5 | null | 2010-12-15T19:56:17.260 | 2010-12-15T19:56:17.260 | null | null | 930 | null |
5527 | 2 | null | 5508 | 3 | null | The currently popular method of fitting multinomial probit models is [maximum simulated likelihood](http://jblevins.org/notes/msl) using the Geweke–Hajivassiliou–Keane algorithm ([Geweke 1989](http://www.jstor.org/stable/1913710); [Hajivassiliou and McFadden 1998](http://www.jstor.org/stable/2999576); [Keane and Wolpin 1994](http://www.jstor.org/stable/2109768)). So the algorithm dates from the late 1990s. If you've thought up a more efficient method I suggest you submit it to Econometrica.
| null | CC BY-SA 2.5 | null | 2010-12-15T20:02:45.727 | 2010-12-15T20:02:45.727 | null | null | 449 | null |
5528 | 2 | null | 5465 | 8 | null | >
How do you numericize something that
is not numerical?
Example, ["Automatic Feature Extraction for Classifying Audio Data"](https://doi.org/10.1007/s10994-005-5824-7)
Rationale: Can they figure out how to analyze something statistically that is not already in a big table?
| null | CC BY-SA 4.0 | null | 2010-12-15T20:06:07.013 | 2022-05-21T08:18:45.927 | 2022-05-21T08:18:45.927 | 79696 | 74 | null |
5529 | 2 | null | 5465 | 9 | null | >
How do you prevent over-fitting when
you are creating a statistical model?
Good answer: cross-validation
| null | CC BY-SA 2.5 | null | 2010-12-15T20:08:32.143 | 2010-12-15T20:08:32.143 | null | null | 74 | null |
5530 | 2 | null | 5465 | 11 | null | >
Here is a big data set. What is your
plan for dealing with outliers? How
about missing values? How about transformations?
Can they deal with real-world data?
| null | CC BY-SA 2.5 | null | 2010-12-15T20:10:26.060 | 2010-12-15T20:10:26.060 | null | null | 74 | null |
5531 | 2 | null | 5465 | 3 | null | >
Here is a TinkerToy set. Show me
how Euclidean distance works in three
dimensions. Now show me how multiple regression works.
Can they explain how statistics works in the physical world?
| null | CC BY-SA 2.5 | null | 2010-12-15T20:14:20.347 | 2010-12-15T20:14:20.347 | null | null | 74 | null |
5532 | 1 | 5541 | null | 2 | 324 | Working off a fairly limited statistical understanding, so apologies in advance if this question is ludicrously basic. This has to have happened to other people who have successfully solved this issue, but searching for what I think are relevant terms is getting me things like "Karlin's Conjecture for Random Replacement Sampling Plans," and I'm starting to panic.
Attempting to contact 1000 people for a survey. Plan was to get a list of the target population, draw a random sample of 1000, then contact those people. Problem was that the original list contained people who weren't actually part of the population, so our random sample of 1000 only contained around 650 people we could actually contact. For time & budgetary reasons, filtering the entire original list so it only contains our target population is impossible.
How do we fix this? One option is to take a second random sample of 1000, filter for the people who actually are in our target population, then contact everyone who fits the bill (guessing around 1300 total from both samples?).
Another option is to take a second random sample of 1000, filter for the people who actually are in our target population, take the resulting list of 1300 and randomly contact 1000 of them.
A third option is to throw the original sample back in to the list, take a larger initial random sample (1500-2000 people?), and contact everyone from that sample who meets our criteria. For time and sanity reasons we'd like to avoid this if possible.
All of these feel somehow "off" to me, given my incredibly basic statistical education. So, gurus: what am I missing? Is there a fourth option that makes more sense? Will our results be statistically valid given any of the above approaches? And does this affect our analysis in any way?
| Survey Sampling: What am I missing with this plan? | CC BY-SA 2.5 | null | 2010-12-15T20:21:02.263 | 2010-12-16T02:22:22.583 | null | null | 2398 | [
"sampling",
"survey"
]
|
5534 | 1 | 5607 | null | 8 | 2045 | I was reading Robert Serfling's 1980 book "Approximation Theorems of Mathematical Statistics" and came across the following construction of the Dvoretzky–Kiefer–Wolfowitz inequality for arbitrary distributions $F$, which DKW prove for distributions on $[0,1]$.
>
Given independent $X_i$ with d.f. F and defined on a common probability space, one can construct independent uniform $[0,1]$ variates $Y_i$ such that $\mathbf{P}[X_i = F^{-1}(Y_i)] =1,\forall i$.
Why? Is this true for arbitrary distributions (including discontinuous ones)?
Secondly,
>
Let $G$ denote the uniform $[0,1]$ distribution and $G_n$ the sample distribution function of the $Y_i$s. Then $F(x)=G(F(x))$ and with probability $1$, $F_n(x) = G_n(F(x))$.
Why?
Now I don't understand quantile functions as well as I would like, and so am having some trouble following these arguments.
Edit: All of this is on page 59 of the book.
---
whuber, thank you so much for your careful answer. It is appreciated. The answer to my question does indeed lie in the last paragraph of your reply - now if I could only wrap my head around it.
What is throwing me off is the following example which I have recreated from Galen Shorack's "Probability for Statisticians" (page 111). Here, the Lebesgue measure of the set $[X\neq F^{-1}Y]$ is not zero. Would you agree? I am referring to the points in the interval $(2,3)$ and in $(3,3.5)$ for which the inverse transformation does not bring any points back.
Thank you again for looking into this.
```
\documentclass[a4paper]{article}
% Graphics
\usepackage{graphics}
\usepackage{graphicx}
\usepackage{pstricks}
\usepackage{pst-plot}
\usepackage{pstricks-add}
\usepackage{epstopdf}
\begin{document}
% An arbitrary CDF
\begin{figure}[htbp]
\begin{center}
\begin{psgraph}[arrows=<->](0,0)(-1.5,-.3)(6,1.2){.5\textwidth}{2.5cm}
\psplot[algebraic, linecolor=black]{-1}{1}{.025*x^2+.1*x+.175} % {this goes from .1 --> .3}
\psplot[algebraic, linecolor=black]{1}{2}{-.1*x^2+.4*x+.1} % {this goes form .4 --> .5}
\psline[linecolor=black](2,.5)(3,.5) % this stays at .5
\psline[linecolor=black](3,.6)(3.5,.6) % this stays at .6
\psplot[algebraic, linecolor=black]{3.5}{5}{ -0.1333*(x-5)^2+.9} % this goes from .6 --> .9
\psdots[dotstyle=*](3,.6)(1,0.4)
\end{psgraph}
\end{center}
\caption{Arbitrary CDF with discontinuities and flat sections}
\label{fig:cdf}
\end{figure}
% The quantile function
\begin{figure}[htbp]
\begin{center}
\begin{psgraph}[arrows=<->](0,0)(-.3, -1.5)(1.2, 6){2cm}{5cm}
% inverses using the Matlab finverse symbolic toolbox function
\psplot[algebraic, linecolor=black]{.1}{.3}{20*(x/10 - 3/400)^(1/2) - 2}
\psline[linecolor=black](.3, 1)(.4, 1)
\psplot[algebraic, linecolor=black]{.4}{.5}{-((x-0.5)/(-0.1))^(0.5)+2}
\psline[linecolor=black](.5, 3)(.6, 3)
\psplot[algebraic, linecolor=black]{.6}{.9}{-((x-.9)/(-0.1333))^(0.5)+5}
\psdots[dotstyle=*](.6, 3)(.5, 2)
\end{psgraph}
\end{center}
\caption{Quantile function of CDF in figure (\protect \ref{fig:cdf})}
\label{fig:qf}
\end{figure}
\end{document}
```
PS. I could not use the comment box for the reply as I needed to use the `<code>` environment.
| Transforming arbitrary distributions to distributions on $[0,1]$ | CC BY-SA 3.0 | null | 2010-12-15T20:52:37.257 | 2013-10-05T15:08:23.810 | 2013-10-05T15:08:23.810 | 919 | 2399 | [
"distributions"
]
|
5535 | 2 | null | 5462 | 1 | null | Just a thought, but it might make sense to implement an inexpensive Google AdWords campaign: at least the participants would be coming directly from Google search, and you can exert some control over stratification. Of course, it is never possible to get a truly random sample this way.
Along these lines, some work is being done nowadays to adjust for the selection bias of web surveys via propensity scores. This is being done by some of the major survey firms, including Harris.
[http://www.amstat.org/sections/srms/proceedings/y2004/files/Jsm2004-000032.pdf](http://www.amstat.org/sections/srms/proceedings/y2004/files/Jsm2004-000032.pdf)
-Ralph Winters
| null | CC BY-SA 2.5 | null | 2010-12-15T21:15:14.057 | 2010-12-15T21:15:14.057 | null | null | null | null |
5536 | 2 | null | 5525 | 1 | null | I think you could use the component `lm.fit$rank` or else subtract `lm.fit$df.residual` from the sample size to get what you want. (I assume you want the number of free parameters.)
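For example, with a fit like the one in the accepted answer, both suggestions return 3 (intercept plus two slopes):
```
x <- replicate(2, rnorm(100))
y <- 1.2 * x[, 1] + rnorm(100)
lm.fit <- lm(y ~ x)
lm.fit$rank                                     # 3
length(residuals(lm.fit)) - lm.fit$df.residual  # also 3
```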
| null | CC BY-SA 2.5 | null | 2010-12-15T21:19:18.657 | 2010-12-16T16:11:57.860 | 2010-12-16T16:11:57.860 | 892 | 892 | null |
5538 | 2 | null | 5534 | 14 | null | This is merely saying that $F(x) = \Pr[X \le x] = \Pr[F(X) \le F(x)]$ which is exactly what it means for $F(X)$ to have a uniform distribution.
OK, let's go a little slower.
For continuous distributions, forget for a moment that the CDF $F$ is a CDF and think of it as just a nonlinear way to re-express the values of $X$. In fact, to make the distinction clear, suppose that $G$ is any monotonically increasing way of re-expressing $X$. Let $Y$ be the name of its re-expressed value. $G^{-1}$, by definition, is the "back transform": it expresses $Y$ back in terms of the original $X$.
What is the distribution of $Y$? As always, we discover this by picking an arbitrary value that $Y$ might take on, say $y$, and ask for the chance that $Y$ is less than or equal to $y$. Back-transform this question in terms of the original way of expressing $X$: we are inquiring about the chance that $X$ is less than or equal to $x = G^{-1}(y)$. Now take $G$ to be $F$ and remember that $F$ is the CDF of $X$: by definition, the chance that $X$ is less than or equal to any $x$ is $F(x)$. In this case,
$$F(x) = F(G^{-1}(y)) = F(F^{-1}(y)) = y.$$
We have established that the CDF of $Y$ is $\Pr[Y \le y] = y$, the uniform distribution on $[0,1]$.
It can help to look at this graphically. Draw the graph of $F$. As $X$ ranges over the reals, $F$ ranges between $0$ and $1$. The function $F$ is constructed specifically so that the distribution of $F(x)$ is uniform. That is, if you want to pick a random value for $X$, pick a uniformly random value along the y-axis between $0$ and $1$ and find the value of $X$ where $F(X)$ equals that random height.
In the continuous case we have $X = F^{-1}(Y)$ so clearly $\Pr[X = F^{-1}(Y)] = 1$. In the discontinuous case there's no difficulty, either, provided we define $F^{-1}$ appropriately. But if there's a jump in $X$ from $x_0$ to $x_1 \gt x_0$, all we can say in general is that the event where $x_0 \le X \lt x_1$ has zero probability, not that it's impossible for $X$ to lie in this interval. For this reason we cannot assert that $X = F^{-1}(Y)$ everywhere, but we can assert that the event that this equality does not hold has probability zero, because it consists of at most a countable number of jumps.
(For a practical example of why this arcane technical distinction is important, consider a gambling problem. Suppose you will play Roulette repeatedly with a fixed bet until either you are broke or you double your money. Let $X$ be the random variable representing your net gain if you ever go broke or double your money. Otherwise, define $X$ to be any number you want. $X$ has a Bernoulli distribution because there is some chance $p$ you will go broke, the chance of doubling your money is $1-p$ (work it out!), and the chance of playing forever is zero. Nevertheless, playing forever is a possibility: it is part of the mathematical set of possible outcomes.)
As a simple exercise in learning to reason with the uniform probability transform, graph $F$ for a Bernoulli($p$) variable. The graph equals $0$ for all $x \lt 0$, jumps to $1-p$ at $0$, is horizontal again for $0 \lt x \lt 1$, then jumps to $1$ at $x=1$ and stays there for all greater $x$. A uniform variate on the interval $[0,1]$ on the y-axis will cover the initial jump with a probability $p$; $F^{-1}$ maps this down to $x = 0$. Otherwise, the variate covers the final jump with the remaining probability $1-p$ and $F^{-1}$ maps this down to $1$. We see, then, how a uniform distribution of $Y$ reproduces this simple discrete distribution function. Illustrations of CDFs of discrete and continuous/discrete distributions appear on [this page](http://www.quantdec.com/envstats/notes/class_05/distributions.htm) I wrote for a stats course long ago.
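A quick simulation makes both points concrete (any continuous distribution will do; the exponential is just an arbitrary choice):
```
set.seed(1)
x <- rexp(1e4, rate = 2)      # continuous X with CDF F
u <- pexp(x, rate = 2)        # Y = F(X)
hist(u)                       # approximately flat on [0, 1]
ks.test(u, "punif")           # no evidence against uniformity

## Discrete case: Bernoulli(p) recovered from a uniform variate via F^{-1}
p <- 0.3
y <- runif(1e4)
x.disc <- ifelse(y <= 1 - p, 0, 1)  # F^{-1} maps the two jumps to 0 and 1
mean(x.disc)                        # close to p
```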
| null | CC BY-SA 2.5 | null | 2010-12-15T22:09:06.303 | 2010-12-15T22:09:06.303 | null | null | 919 | null |
5539 | 2 | null | 5525 | 3 | null | May be it's a little bit hackish but you can do :
```
n <- length(coefficients(model))
```
| null | CC BY-SA 2.5 | null | 2010-12-15T22:24:19.247 | 2010-12-15T22:24:19.247 | null | null | 2028 | null |
5540 | 2 | null | 5508 | 4 | null | It boils down to how you feel about assuming the Independence of Irrelevant Alternatives (IIA) as an assumption about choice behaviour. So the first thing to do is look that up.
Multinomial logit assumes IIA and multinomial probit does not. The computational price of not assuming it is what gets expensive. Almost any econometrics text will cover the details, but assuming you already understand logistic regression, the intuitive picture is this:
In latent variable formulation, logistic regression models need to integrate over a latent distribution to get the probability of a 1 rather than a 0. In choice modelling contexts this is thought of as the expected utility of choosing 1 rather than 0, although we only see the final (stochastic) choice.
A similar situation arises when there are multiple choices. If we are happy to assume IIA, that is: that a third choice does not affect your relative preferences over two existing choices, then the integrals are separable and straightforward. If you are not happy to assume IIA (and that can be reasonable when the new choice is a plausible substitute for one of the existing ones) then you will have to estimate an arbitrary (unobserved) covariance structure over all the options and then do the multidimensional integrations to get your choice probabilities out. It is these integrations or replacements for them that cause the computational problem.
| null | CC BY-SA 2.5 | null | 2010-12-15T23:46:54.900 | 2010-12-15T23:46:54.900 | null | null | 1739 | null |
5541 | 2 | null | 5532 | 6 | null | This is a very common situation, and usually one just plans for a larger initial sample size, so that once ineligible subjects are thrown out, the desired sample size is reached. In your case it appears that ~65% of the original list qualifies. So to get 350 more qualifying subjects you need to draw a random sample of about 540. In fact, depending on how you do the filtering, you could keep drawing additional random samples of size 1 until you get the required number of qualifying subjects.
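In R terms the arithmetic is just (assuming the ~65% eligibility rate carries over to a fresh draw):
```
eligibility <- 650 / 1000      # observed qualification rate
needed      <- 350             # additional qualifying subjects required
ceiling(needed / eligibility)  # 539, i.e. draw roughly 540 more
```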
| null | CC BY-SA 2.5 | null | 2010-12-16T00:48:28.910 | 2010-12-16T00:48:28.910 | null | null | 279 | null |
5542 | 1 | 5548 | null | 5 | 753 |
## Background
I am generally interested in learning appropriate methods of using data to specify priors. A [previous question](https://stats.stackexchange.com/q/1/1381) asks how to elicit priors from experts and received some good recommendations. Here, I am interested in learning how to specify a prior using data. I plan to use these priors in a meta-analysis to synthesize additional data that I collect.
update Although John provides a 'correct' answer, in my case, it would require substantial modification of the original model to implement, so I would prefer to find a way to estimate the prior as a discrete step.
## Questions
- What is the best way to specify such a data informed prior?
- If I am working with parameters for a particular species (monkeys), and this species belongs to a group of organisms (primates), and data are available for primates but not for monkeys, would it be appropriate to fit a distribution based on the primate data?
## Example cases, first with proposed solution
- I have 100 observations of thumb length, one from each of 100 primate species:
set.seed(0)
thumb <- rgamma(100, 4, 0.1)
library(MASS)
fitdistr(thumb, 'gamma')
Indeed, when there is no a priori reason to select a particular distribution, the distribution can be chosen by maximum likelihood:
for (dist in c('gamma', 'lognormal', 'weibull')) {
print(logLik(fitdistr(thumb, dist)))
}
- I have collected means, standard errors, and sample sizes of eye diameter from 50 primate species, plus 50 independent single observations from another 50 species:
eye <- data.frame( diameter = rgamma(100, 4, 0.1),
se = c(rlnorm(50, 0.5,1), rep(NA, 50)),
n = c(rep(1:5, 10), rep(1, 50)))
eye <- signif(eye, 3)
How can I incorporate the sample statistics into my calculation of a prior?
| What methods can be used to specify priors from data? | CC BY-SA 2.5 | null | 2010-12-16T01:07:58.040 | 2014-04-08T21:43:23.227 | 2017-04-13T12:44:41.607 | -1 | 1381 | [
"probability",
"bayesian",
"meta-analysis",
"prior"
]
|
5543 | 1 | 5564 | null | 31 | 7110 | I have found some distributions for which BUGS and R have different parameterizations: Normal, log-Normal, and Weibull.
For each of these, I gather that the second parameter used by R needs to be inverse transformed (1/parameter) before being used in BUGS (or JAGS in my case).
Does anyone know of a comprehensive list of these transformations that currently exists?
The closest I can find would be comparing the distributions in table 7 of the [JAGS 2.2.0 user manual](http://sourceforge.net/projects/mcmc-jags/files/Manuals/2.x/jags_user_manual.pdf) with the results of `?rnorm` etc. and perhaps a few probability texts. This approach appears to require that the transformations will need to be deduced from the pdfs separately.
I would prefer to avoid this task (and possible errors) if it has already been done, or else start the list here.
Update
Based on Ben's suggestions, I have written the following function to transform a dataframe of parameters from R to BUGS parameterizations.
```
##' convert R parameterizations to BUGS parameterizations
##'
##' R and BUGS have different parameterizations for some distributions.
##' This function transforms the distributions from R defaults to BUGS
##' defaults. JAGS is an implementation of the BUGS language, and these
##' transformations are expected to work for BUGS as well as JAGS.
##' @param priors data.frame with colnames c('distn', 'parama', 'paramb')
##' @return priors with jags parameterizations
##' @author David LeBauer
r2bugs.distributions <- function(priors) {
norm <- priors$distn %in% 'norm'
lnorm <- priors$distn %in% 'lnorm'
weib <- priors$distn %in% 'weibull'
bin <- priors$distn %in% 'binom'
## Convert sd to precision for norm & lnorm
priors$paramb[norm | lnorm] <- 1/priors$paramb[norm | lnorm]^2
## Convert R parameter b to JAGS parameter lambda by l = (1/b)^a
priors$paramb[weib] <- 1 / priors$paramb[weib]^priors$parama[weib]
## Reverse parameter order for binomial
priors[bin, c('parama', 'paramb')] <- priors[bin, c('paramb', 'parama')]
## Translate distribution names
priors$distn <- gsub('weibull', 'weib',
gsub('binom', 'bin',
gsub('chisq', 'chisqr',
gsub('nbinom', 'negbin',
as.vector(priors$distn)))))
return(priors)
}
##' @examples
##' priors <- data.frame(distn = c('weibull', 'lnorm', 'norm', 'gamma'),
##' parama = c(1, 1, 1, 1),
##' paramb = c(2, 2, 2, 2))
##' r2bugs.distributions(priors)
```
| For which distributions are the parameterizations in BUGS and R different? | CC BY-SA 3.0 | null | 2010-12-16T01:48:12.853 | 2014-03-19T15:51:42.810 | 2011-11-18T05:50:51.333 | 1381 | 1381 | [
"r",
"distributions",
"bugs",
"jags",
"parameterization"
]
|
5544 | 2 | null | 5502 | 4 | null | Can I just clarify something here please, do you have items scored on different scales you need to pre-process and combine (interval, ordinal, nominal), or are you looking to do a factor analysis on just ordinal scale variables?
If it is the latter - here is one approach.
[http://cran.r-project.org/web/packages/Zelig/vignettes/factor.ord.pdf](http://cran.r-project.org/web/packages/Zelig/vignettes/factor.ord.pdf)
(note this link is now dead). There are other [vignettes](http://cran.r-project.org/web/packages/Zelig/vignettes) up, but not this one.
| null | CC BY-SA 3.0 | null | 2010-12-16T02:03:23.547 | 2012-03-15T12:13:35.497 | 2012-03-15T12:13:35.497 | 4884 | 2238 | null |
5545 | 1 | null | null | 5 | 883 | I have to run repeated correlations. If one of them reaches the critical value (as given in the Pearson table), how can I be sure that it isn't actually a false positive?
I'm pretty new to statistical analysis, so I apologize if my question seems naive.
Thanks for any answers.
| How to prevent false positive findings with repeated correlations? | CC BY-SA 2.5 | null | 2010-12-16T02:04:39.983 | 2010-12-16T09:59:10.713 | 2010-12-16T09:59:10.713 | 930 | 2402 | [
"correlation",
"multiple-comparisons"
]
|
5546 | 2 | null | 5532 | 2 | null | I hate answering a question with a list of questions, but I hope thinking about some of these issues will give you some insight as to the best way to proceed. The first two questions are mainly for clarification as it wasn't clear from reading your question:
- Have you already obtained a list of 1000 people?
- Have you already collected 650 samples?
- If you have already collected your survey data, did all 650 people who qualify for your survey actually complete it? Even if they qualify, I have to imagine some # of those people will not complete your survey for a variety of reasons. You'll need to account for some additional attrition to reach your goal of n = 1000.
- What are the costs associated with obtaining a larger list?
- Are the costs of collecting the data sufficiently high enough to dissuade you from obtaining "extra" sample above you target of n = 1000?
- Do you really need n = 1000? What are the negative effects of n = 900? n = 700?
- Are there high start up costs to collecting the data in phases?
- How is your survey administered? Can you build in the filtering system into the survey? i.e. "If answer to question 1 == yes, then terminate"?
- Does your survey contain any temporal elements that may introduce bias if you conduct the survey in two/three/four phases? i.e. are people going to respond or behave differently if they answer the survey in December vis a vis March?
Without knowing more of the specifics of your survey and the population in question, the most straightforward approach in my opinion would be to obtain a random sample that is sufficiently large to account for the people who won't qualify and those who won't take your survey. This sounds like option three that you described above. If you already have n = 650, then I would recommend obtaining a list large enough to collect the remaining 350 samples. Does your survey contain demographic data or other benchmarks that you can use to compare across groups (waves of data collection) to see if you have introduced any bias into your data?
| null | CC BY-SA 2.5 | null | 2010-12-16T02:22:22.583 | 2010-12-16T02:22:22.583 | null | null | 696 | null |
5547 | 2 | null | 5443 | 1 | null | Tal,
My understanding, based on this article, is that you cannot obtain class-specific variable importance when the response is a factor. However, using rpart.control, you are able to identify the variables that are important at each node.
[http://cran.r-project.org/web/packages/caret/vignettes/caretVarImp.pdf](http://cran.r-project.org/web/packages/caret/vignettes/caretVarImp.pdf)
Recursive Partitioning: The reduction in the loss function (e.g. mean squared error)
attributed to each variable at each split is tabulated and the sum is returned. Also, since there may be candidate variables that are important but are not used in a split, the top competing variables are also tabulated at each split. This can be turned on using the maxcompete argument in rpart.control. This method does not currently provide class specific measures of importance when the response is a factor.
| null | CC BY-SA 2.5 | null | 2010-12-16T02:34:59.040 | 2010-12-16T02:34:59.040 | null | null | 2238 | null |
5548 | 2 | null | 5542 | 8 | null | If you have all this data, I think the best answer is to actually fit a single large model, using Hierarchical Modeling rather than do it in two steps (generating a prior then fitting a model). This is basically the [answer I gave to this question](https://stats.stackexchange.com/questions/5181/estimating-distribution-parameters-from-few-data-points/5263#5263). I explain this a little bit more there.
In a hierarchical model, each of the parameters you are interested in (for example, the location and scale parameters of the thumb lengths for a species) is modelled as a draw from a common prior distribution. The hyperparameters of the hierarchical model parameterize the common prior distribution of the parameters for the species, and you estimate the hyperparameters at the same time as the parameters you're interested in. The hyperparameters of course need their own prior distribution, but these can be relatively diffuse.
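As a rough, non-Bayesian analogue of the same partial-pooling idea (hypothetical data frame `dat` with columns `thumb` and `species`), lme4's random-effects syntax estimates the species means and the "hyperparameters" (the between-species mean and variance) in one step:
```
library(lme4)
# Each species mean is treated as a draw from a common distribution whose
# mean and variance are estimated from the data; a hierarchical Bayesian
# model would additionally put hyperpriors on those quantities.
fit <- lmer(thumb ~ 1 + (1 | species), data = dat)
summary(fit)  # fixed intercept = overall mean; variance components = between/within species
```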
| null | CC BY-SA 2.5 | null | 2010-12-16T03:31:59.010 | 2010-12-16T03:31:59.010 | 2017-04-13T12:44:33.977 | -1 | 1146 | null |
5549 | 2 | null | 5520 | 1 | null | Maybe I'm being naive here, but would the Multinomial Distribution not work for this?
P(1 unique) = P(1 Blue) + P(1 Red) + P(1 Green) + ..., although there are probably a lot of details that would need to be filled in; for example, P(1 Blue) would be the multinomial probability summed over all combinations of the other balls in which exactly one blue appears.
[http://en.wikipedia.org/wiki/Multinomial_distribution](http://en.wikipedia.org/wiki/Multinomial_distribution)
| null | CC BY-SA 2.5 | null | 2010-12-16T05:25:07.710 | 2010-12-16T05:25:07.710 | null | null | 2387 | null |
5550 | 1 | 5553 | null | 3 | 608 | I want to perform model comparison according to several criteria using R.
My dataframe's name is `df`
```
head(df)
Y X1 X2 X3 X4
1 18 307 130 3504 12.0
2 15 350 165 3693 11.5
3 18 318 150 3436 11.0
4 16 304 150 3433 12.0
5 17 302 140 3449 10.5
6 15 429 198 4341 10.0
```
I want to compare all possible combinations with Y as the dependent variable. I'll give you what I've done so far to get the R.squared for all possible models. I would like your comments for better coding, especially on getting all those model formulas
```
# First get all possible combinations of the 4 independent variables
comb_list <- lapply(1:4,function(i) combn(4,i))
# now create the formulas
comb_list_forms <- lapply(unlist(
sapply(1:length(comb_list), function(i)
sapply(1:dim(comb_list[[i]])[2], function(x) {nam <- names(df[1+comb_list[[i]][,x]])
formul <- "Y~"
sapply(1:length(nam), function(y) formul <<- paste(formul,nam[y],sep="+"))
formul <- sub("~+", "~", formul,fixed = T)
}))),as.formula)
# finally get r.squared
attach(df)
models.r.sq <- sapply(comb_list_forms, function(i) summary(lm(i))$r.squared)
```
Thank you
| Automating model selection criteria production | CC BY-SA 2.5 | null | 2010-12-16T07:56:16.983 | 2011-02-15T11:31:07.203 | 2010-12-16T08:18:23.023 | 632 | 632 | [
"r",
"model-selection"
]
|
5551 | 2 | null | 5520 | 2 | null | I will give the outline of the solution. The numbers of each coloured ball in a draw follow a multinomial distribution, as tshauck pointed out. Let $R$ denote the number of red balls, $G$ - number of green balls, $B$ - number of blue balls and $Y$ - number of yellow balls in the draw of size $n$. Then the probability that in a random draw we have exactly $x_1$ red balls, $x_2$ green balls, $x_3$ blue balls and $x_4$ yellow balls is
$$P(R=x_1,G=x_2,B=x_3,Y=x_4)=\frac{n!}{x_1!x_2!x_3!x_4!}p_r^{x_1}p_g^{x_2}p_b^{x_3}p_y^{x_4}$$
where $p_r$ is the probability of picking a red ball, $p_g$ - green, $p_b$ - blue, $p_y$ - yellow.
Denote the number of unique balls in a draw by $U$. Then $U=f(R,G,B,Y)$. Since you have the distribution of the vector $(R,G,B,Y)$ you can calculate the distribution of $U$. Since we have four colours, $U$ can take the values $0,1,2,3,4$. So to get the probability that $U=0$ you need to find all the possible combinations of $(R,G,B,Y)$ for which $U=0$ and add the probabilities of these combinations. So when is $U=0$? When
- All the balls appear more than once: $R>1$, $G>1$, $B>1$, $Y>1$
- One colour is absent and all others appear more than once
a. $R=0$, $G>1$, $B>1$, $Y>1$
b. $R>1$, $G=0$, $B>1$, $Y>1$
c. $R>1$, $G>1$, $B=0$, $Y>1$
d. $R>1$, $G>1$, $B>1$, $Y=0$
- Two colours are absent and all others appear more than once. $6$ cases
- Three colours are absent, all the draw is of one colour. $4$ cases
All four cases are mutually exclusive, so you can add the probabilities. The cases $U=1,2,3,4$ can be treated similarly.
This of course is not an elegant solution, but I do not see why it should not work. I suggest asking this in [math.stackexchange.com](http://math.stackexchange.com).
Update 1
This approach is for calculating the probability distribution of $U$. For expected value of $U$ - whuber's answer is the right one.
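If the exact enumeration gets tedious, a quick simulation sketches the whole distribution of $U$, and its mean can be checked against whuber's closed-form expectation (hypothetical probabilities and draw size):
```
set.seed(1)
p <- c(red = 0.2, green = 0.3, blue = 0.4, yellow = 0.1)
n <- 10
draws <- rmultinom(1e5, size = n, prob = p)  # each column = colour counts of one draw
u <- colSums(draws == 1)                     # colours appearing exactly once
prop.table(table(u))                         # simulated distribution of U
mean(u)                                      # compare with the closed-form expectation:
n * sum(p * (1 - p)^(n - 1))
```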
| null | CC BY-SA 2.5 | null | 2010-12-16T07:57:26.163 | 2010-12-16T18:46:12.313 | 2010-12-16T18:46:12.313 | 2116 | 2116 | null |
5552 | 2 | null | 4267 | 2 | null | At this year's [NIPS](http://nips.cc/) there was a short paper on [distributed, very large-scale SVD that works in a single pass over a streaming input matrix](http://nlp.fi.muni.cz/~xrehurek/nips/rehurek_nips.pdf).
The paper's more implementation-oriented but puts things into perspective with real wall-clock times and all. The table near the beginning is a good survey too.
| null | CC BY-SA 2.5 | null | 2010-12-16T08:03:12.553 | 2010-12-16T08:38:52.443 | 2010-12-16T08:38:52.443 | 223 | null | null |
5553 | 2 | null | 5550 | 5 | null | I think package [leaps](http://cran.r-project.org/web/packages/leaps/index.html) will help you. It is designed specifically to select a "best" subset of variables for linear regression.
As for the formula creation, you can use the foreach package like this:
```
library(foreach)
#Create the list where each list element contains one combination
xcomb <- foreach(i=1:4, .combine=c) %do% {
combn(paste("X", 1:4, sep=""), i, simplify=FALSE)
}
#Convert list elements to formula
formlist <- lapply(xcomb, function(l) formula(paste("Y", paste(l, collapse="+"), sep="~")))
```
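And for the subset selection itself, leaps does the exhaustive search for you (a minimal sketch, assuming the data frame `df` from the question):
```
library(leaps)
best <- regsubsets(Y ~ X1 + X2 + X3 + X4, data = df)
summary(best)$adjr2  # adjusted R^2 of the best model of each size (also $cp, $bic)
```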
| null | CC BY-SA 2.5 | null | 2010-12-16T08:10:19.380 | 2010-12-16T13:20:56.190 | 2010-12-16T13:20:56.190 | 2116 | 2116 | null |
5554 | 2 | null | 5545 | 5 | null | I think you should consider [Bonferroni](http://en.wikipedia.org/wiki/Bonferroni_correction) correction. This seems a [multiple comparison](http://en.wikipedia.org/wiki/Multiple_comparisons) problem, so Bonferroni is only one of the solutions, but it is the easiest to implement.
| null | CC BY-SA 2.5 | null | 2010-12-16T08:17:59.500 | 2010-12-16T08:17:59.500 | null | null | 2116 | null |
5555 | 1 | 5557 | null | 0 | 277 | I've been having some trouble with this story problem. Any help you could give me would be really appreciated.
A store manager monitors the store's temperature by taking 4 independent temperature readings at 4 locations where, if the system is working correctly, the temp. is normally distributed with a mean of 68 degrees F and a standard deviation of 3 degrees. What's the mean and standard deviation of the average of these 4 readings if the system is working correctly? What upper and lower limits should the manager set for the average of these 4 readings for there to be a .01 probability of violating a limit when the system is working correctly?
Thanks for any advice/help you can give me!
| Probability and limits | CC BY-SA 2.5 | null | 2010-12-16T09:44:12.623 | 2010-12-16T11:12:31.337 | null | null | 2385 | [
"distributions",
"standard-deviation",
"self-study",
"mean"
]
|
5556 | 2 | null | 5550 | 3 | null | This is another way to get the formulas
```
comb_list_forms <- unlist(lapply(1:length(comb_list), function(i)
lapply(1:dim(comb_list[[i]])[2], function(x)
as.formula(paste("Y ~ ", paste(names(df[1+comb_list[[i]][,x]]), collapse= "+"))))))
```
| null | CC BY-SA 2.5 | null | 2010-12-16T10:29:14.630 | 2010-12-16T10:29:14.630 | null | null | 339 | null |
5557 | 2 | null | 5555 | 5 | null | The two key elements that you need to be familiar with before working through this exercise are:
- The sampling distribution of a mean: in your case, the average of your four measurements will follow a normal distribution, with a mean that equals that of the parent distribution but with a standard deviation of $3/\sqrt{4}$.
- How to translate probabilistic assertions in terms of the underlying Probability Density Function: given that there is an infinite number of possible PDFs for a distribution described by two parameters (its location and scale), it is often more convenient to work with the standardized normal distribution which is simply $\mathcal{N}(0;1)$, because if $X\sim \mathcal{N}(\mu; \sigma)$, then we know that $Z=\frac{X-\mu}{\sigma}\sim\mathcal{N}(0;1)$.
Then, you just have to figure out how to find $z_1$ and $z_2$ such that $\Pr(z_1\leq Z\leq z_2)=1-0.01$. It often helps to draw the graph of $\Pr(Z\leq z)$ which is bell-shaped, centered on its mean, and whose total area equals 1. For instance, for a given quantile $z_1$, $\Pr(Z<z_1)=p_1$ where $p_1$ is the shaded area shown below (here, $z_1=-1$, that is 1 standard deviation below the mean):

As the total area equals 1, the remaining (unshaded) area equals $1-p_1$. Likewise, you may readily express any bounded area as a sum or difference of such inequalities.
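If you want to check your hand calculations afterwards, R's `pnorm()` and `qnorm()` do the table lookups for you (the back-transformation to the temperature scale is still left to you):
```
pnorm(-1)     # the shaded area P(Z < -1) in the figure, about 0.159
qnorm(0.159)  # the inverse lookup, back from a probability to a z-value: about -1
```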
| null | CC BY-SA 2.5 | null | 2010-12-16T11:07:02.010 | 2010-12-16T11:12:31.337 | 2010-12-16T11:12:31.337 | 930 | 930 | null |
5558 | 2 | null | 5550 | 2 | null | You could consider using `step` on the full model. This uses AIC rather than $R^2$ as the criterion for selection as this is considered to be superior.
In any case, the functions `add1` and `drop1` that `step` uses may be of use.
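A sketch of what that looks like on the data frame from the question:
```
full <- lm(Y ~ X1 + X2 + X3 + X4, data = df)
step(full, direction = "backward")                       # drops terms while AIC improves
add1(lm(Y ~ 1, data = df), scope = ~ X1 + X2 + X3 + X4)  # AIC for adding each single term
```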
| null | CC BY-SA 2.5 | null | 2010-12-16T12:46:10.393 | 2010-12-16T12:46:10.393 | null | null | 229 | null |
5559 | 1 | null | null | 2 | 3236 | What is the best way to divide Cross Validated/Stack Overflow/Server Fault reputations into separate bins, considering reputation systems tend to be highly skewed? For example, suppose I want a set of 64 bins - 0-250, 250-500, 500-1,000, ..., 10,000-11,000. Is there a method that takes into account the number of users that have a specific reputation when specifying the bin limits, or can I just assign arbitrary bins?
slotishtype
| How to divide ordinal set into bins? | CC BY-SA 2.5 | null | 2010-12-16T13:15:13.730 | 2020-01-13T00:40:09.660 | 2020-01-13T00:40:09.660 | 11887 | 2405 | [
"ordinal-data",
"binning"
]
|
5560 | 1 | 5569 | null | 3 | 461 | I have a random events generator. I know in advance the set of events that can be generated (in my case I have only three possible events). The probabilities of the events are not known. I need to estimate these probabilities. For that I run an experiment. For example I generate 20 events. So, I have a sequence of events. For example: `a,a,b,a,c,c,c,a,b,....c`. Having the sequence I can count the number of every event (so, in my case I get three integer numbers $n_1$, $n_2$ and $n_3$).
I can calculate the probabilities of every event in the following way:
$\nu_1 = \frac{n_1}{n_1+n_2+n_3}$
$\nu_2 = \frac{n_2}{n_1+n_2+n_3}$
$\nu_3 = \frac{n_3}{n_1+n_2+n_3}$
But these values are approximate. For example, I can have a probability of 0.25 for the event `a`, so if I generate 20 events, I should get approximately 5 `a`-events. But just by chance I can get 10 or 0 `a`-events.
So, I want to have a density distribution of the probabilities. Since $\nu_1$, $\nu_2$ and $\nu_3$ are dependent (their sum is equal to one) I am going to use $\nu_1$ and $\nu_2$. So, I want to have an explicit form for the density distribution of $\nu_1$ and $\nu_2$.
$\rho (\nu_1,\nu_2) = F (n_1,n_2,n_3)$.
Does anybody know where I can get it?
| How to estimate the likelihood function for random generator of three events? | CC BY-SA 2.5 | null | 2010-12-16T14:57:00.873 | 2010-12-16T16:39:56.680 | null | null | 2407 | [
"distributions",
"probability",
"measurement",
"likelihood"
]
|
5561 | 2 | null | 5550 | 4 | null | Ok, here is another way (which is certainly not as elegant or concise as the other ones, but it has the merit of not requiring Emacs to check parenthesis matches :-)
Let's say we have a vector of predictors of interest like this
```
> Xs <- paste("X", 1:4, sep="")
```
Then, we can just use
```
> allXs <- lapply(seq(along=Xs),
function(x) combn(Xs, x, simplify=FALSE,
FUN=paste, collapse="+"))
```
where
```
> unlist(allXs)
```
gives you all 15 combinations of the X's.
Another option is to just change the right-hand side of a formula, say
```
fm <- as.formula(paste("Y ~ ", paste(Xs, collapse= "+")))
```
so as to reflect the different combinations that you enumerate in your `comb_list` object. This can be done using the [Formula](http://cran.r-project.org/web/packages/Formula/index.html) package:
```
> fm <- as.formula(paste("Y ~ ", paste(Xs, collapse= "|")))
> fm2 <- Formula(fm)
> foo <- function(x) formula(fm2, rhs=x, collapse=TRUE)
> foo(1:2)
Y ~ X1 + X2
<environment: 0x102593040>
> foo(c(1,3,4))
Y ~ X1 + X3 + X4
<environment: 0x1021d02a0>
```
| null | CC BY-SA 2.5 | null | 2010-12-16T15:04:03.347 | 2010-12-16T15:04:03.347 | null | null | 930 | null |
5563 | 1 | 5565 | null | 5 | 9394 | Is there a 'limit' function in R? `help(lim)` or `help(limit)` weren't useful.
And in general, is R suitable for mathematical computations?
| Is there a limit function in R? | CC BY-SA 2.5 | null | 2010-12-16T15:13:16.647 | 2023-05-19T21:45:49.777 | 2010-12-16T15:20:42.767 | 930 | 1700 | [
"r",
"mathematical-statistics"
]
|
5564 | 2 | null | 5543 | 36 | null | I don't know of a canned list.
update: this list (plus additional information) is now published as [Translating Probability Density Functions: From R to BUGS and Back Again](http://journal.r-project.org/archive/2013-1/lebauer-dietze-bolker.pdf) (2013), DS LeBauer, MC Dietze, BM Bolker R Journal 5 (1), 207-209.
Here is my list (edits provided by original questioner):
Normal and log-normal are parameterized in terms of $\tau$ (precision) rather
than $\sigma$ or $\sigma^2$ (std. dev. or variance); $\tau = 1/\sigma^2 =
1/\mbox{var}$
Beta, Poisson, Exponential, Uniform are all the same
Negative binomial in BUGS has only the discrete parameterization
(size,prob), not the "ecological" (size,mu, where size can be
non-integer) parameterization.
edit:
Weibull in BUGS is ($\nu$=`shape`,$\lambda$=`lambda`), in R is ($a$=`shape`,$b$=`scale`) [the math notation is consistent with the notation used in the respective documentation] As pointed out at [How do I parameterize a Weibull distribution in JAGS / BUGS?](https://stats.stackexchange.com/questions/18550/how-do-i-parameterize-a-weibull-distribution-in-jags-bugs) , $\lambda= (1/b)^a$
Gamma in BUGS is (`shape`,`rate`). This is the default in R, but R
also allows (shape,scale) [if the scale argument is named]; rate =
1/scale
Order matters, especially in BUGS (which doesn't have named
arguments), e.g. R `dbinom(x,size,prob)` vs BUGS `dbin(p,n)` [same
parameters, opposite order].
Name differences:
- Binomial: R=dbinom,BUGS=dbin
- Chi-squared: R=dchisq,BUGS=dchisqr
- Weibull: R=dweibull,BUGS=dweib
- Negative binomial: R=dnbinom, BUGS=dnegbin
edit: for truncated distributions BUGS uses `I()`, JAGS uses `dinterval()` [it's worth looking in the JAGS documentation if you're going to use this, there may be other subtle differences]
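Two quick sanity checks in R of the relationships above (rate = 1/scale for the Gamma, and $\tau = 1/\sigma^2$ for the Normal):
```
all.equal(dgamma(2, shape = 3, rate = 0.5),
          dgamma(2, shape = 3, scale = 2))                     # TRUE: rate = 1/scale
sigma <- 2
tau   <- 1 / sigma^2                                           # BUGS precision
all.equal(dnorm(1, sd = sigma), dnorm(1, sd = 1 / sqrt(tau)))  # TRUE
```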
| null | CC BY-SA 3.0 | null | 2010-12-16T15:25:54.663 | 2014-03-19T15:51:42.810 | 2017-04-13T12:44:25.243 | -1 | 2126 | null |
5565 | 2 | null | 5563 | 4 | null | All links in this answer are dead.
Following the advice of [this answer](https://web.archive.org/web/20170924085650/http://tolstoy.newcastle.edu.au/R/e11/help/10/07/2296.html) on the r-help mailing list, you can find examples in the sympy help file of the [rSymPy](https://web.archive.org/web/20100827101138/http://code.google.com/p/rsympy/) package and in the Ryacas vignette of the [Ryacas package](https://web.archive.org/web/20110522064153/http://code.google.com/p/ryacas/).
In general, all of the work that I do in R requires mathematical computations, and R does an excellent job of facilitating this work.
| null | CC BY-SA 4.0 | null | 2010-12-16T15:29:34.803 | 2023-05-19T21:45:49.777 | 2023-05-19T21:45:49.777 | 362671 | 1381 | null |
5566 | 1 | 5570 | null | 5 | 897 | I flip a coin 20 times and get 14 heads. I want to calculate the p-value of the hypothesis that my coin is fair.
What probability should I calculate?
Wikipedia says that I need to calculate the probability of getting 14 or more heads in 20 flips.
Why is it 14 "or more"? Why not 14 or less?
| Testing if a coin is fair | CC BY-SA 4.0 | null | 2010-12-16T15:36:19.483 | 2018-06-12T19:04:40.093 | 2018-06-12T19:04:40.093 | 34826 | 2407 | [
"maximum-likelihood",
"p-value"
]
|
5568 | 2 | null | 5520 | 9 | null | Let $X_i$ be the random variable equal to $1$ when there is exactly one ball of color $i$ ($i = 1, 2, \ldots, m$; to avoid confusion I write $m$ instead of $N$). The count of color $i$ follows a Binomial($p_i$, $n$) distribution, implying the expectation of $X_i$ is
$$\eqalign{
\mathbb{E}[X_i] = &\sum_{j=0}^{n} \binom{n}{j} p_i^j (1-p_i)^{n-j} X_i(j) \cr
= &\binom{n}{1} p_i (1-p_i)^{n-1} \cr
= &n p_i (1-p_i)^{n-1}.
}$$
The number of unique colors is the sum of the $X_i$. Because expectation is linear we obtain
$$\eqalign{
\mathbb{E}[X] = &\sum_{i=1}^{m}\mathbb{E}[X_i] = \sum_{i=1}^{m}n p_i (1-p_i)^{n-1} \cr
= &n\sum_{i=1}^{m} p_i (1-p_i)^{n-1}.
}$$
| null | CC BY-SA 2.5 | null | 2010-12-16T16:20:37.650 | 2010-12-21T15:01:39.427 | 2010-12-21T15:01:39.427 | 919 | 919 | null |
5569 | 2 | null | 5560 | 4 | null | You should look at the [multinomial distribution](http://en.wikipedia.org/wiki/Multinomial_distribution).
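For the three-event case in the question, the likelihood of the underlying event probabilities given the observed counts is just the multinomial probability mass, which R evaluates directly (hypothetical counts):
```
n <- c(a = 10, b = 6, c = 4)  # observed counts of the three events
lik <- function(p1, p2) dmultinom(n, prob = c(p1, p2, 1 - p1 - p2))
lik(0.5, 0.3)   # likelihood at one candidate (p1, p2)
n / sum(n)      # the maximum-likelihood estimates are the observed proportions
```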
| null | CC BY-SA 2.5 | null | 2010-12-16T16:39:56.680 | 2010-12-16T16:39:56.680 | null | null | null | null |
5570 | 2 | null | 5566 | 12 | null | It's neither because the alternative to being fair is that the coin favors heads or tails.
You are free to invent any test you like. For example, I could (idiosyncratically) decide the coin is unfair if and only if the number of heads is either 6 or 15 (the "critical region"), because this event occurs with only 5% chance when the coin is fair. The key question is how well does a test perform. The [Neyman-Pearson lemma](http://en.wikipedia.org/wiki/Neyman%E2%80%93Pearson_lemma) shows that this particular one I just invented is a poor test. A good test is one whose critical region not only is unlikely when the null is true, but is also highly likely when the null is false.
There is no one best critical region for this kind of two-sided test, but a reasonable compromise is to adopt a procedure that will detect deviations from fairness in both directions. That suggests a critical region that contains the most extreme possibilities: a bunch near 0 and a bunch near 20. A good choice at the 5% level is to consider any outcome of 15 or more or 5 or less to be significant.
Let us then adopt the best symmetric test for a fair coin. This means we want the critical region to include $20-i$ heads whenever it includes $i$ heads (that is, $20-i$ tails). This treats heads and tails on an equal footing. It is only in the context of a particular test (like this one) that a p-value has any meaning. The p-value corresponding to an outcome of 14 is, by definition, the smallest significance of any such test that includes 14 in its critical region. By symmetry it must include 20 - 14 = 6 and by the Neyman-Pearson lemma it must include all values larger than 14 and all values less than 6. The chance of this under the null is 11.53%. This chance increases uniformly as the chance of heads deviates more and more from 1/2 in either direction.
| null | CC BY-SA 2.5 | null | 2010-12-16T16:42:57.647 | 2010-12-16T17:54:03.310 | 2010-12-16T17:54:03.310 | 919 | 919 | null |