Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
7009 | 1 | 7010 | null | 2 | 919 | I have the pdf
$$f(y ; \theta) = \frac{1}{\theta} \exp( \frac{-y}{\theta}), \ y > 0$$
and I'm supposed to determine if the following two estimators are unbiased or not: $ \hat \theta = nY_{min} $ and $ \hat \theta = \frac{1}{n}\sum_{i=1}^n Y_i $. I'm running into some problems because when I try to find the expected value of $ Y_{min} $ and $ Y_i $, the integral is undefined.
| Seeing if estimators are unbiased | CC BY-SA 2.5 | null | 2011-02-09T04:10:04.727 | 2011-02-09T23:10:24.540 | 2011-02-09T09:32:50.203 | 8 | 2387 | [
"self-study",
"unbiased-estimator"
]
|
7010 | 2 | null | 7009 | 2 | null | Please see the following Wikipedia [article](http://en.wikipedia.org/wiki/Exponential_distribution); it shows that both integrals exist. Also, if you look further down the article, you will see that both estimators are unbiased.
$Y_{min}$ is also exponentially distributed with mean parameter $\frac {\theta} {n}$
Therefore, $E[nY_{min}]=n\times\frac {\theta} {n} = \theta$. Similarly, $E[\frac{1}{n}\sum_{i=1}^n Y_i] = \frac{n \theta}{n}=\theta$
I am sure both integrals exist.
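For a quick sanity check, here is a minimal simulation sketch (the values $\theta = 2$ and $n = 10$ are arbitrary; note that R's `rexp()` is parameterized by the rate $1/\theta$):
```
set.seed(1)
theta <- 2                   # arbitrary true value
n     <- 10                  # sample size per replication
est1  <- replicate(1e5, n * min(rexp(n, rate = 1/theta)))  # n * Y_min
est2  <- replicate(1e5, mean(rexp(n, rate = 1/theta)))     # sample mean
c(mean(est1), mean(est2))    # both averages should come out close to theta = 2
```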
| null | CC BY-SA 2.5 | null | 2011-02-09T04:26:13.793 | 2011-02-09T09:33:03.997 | 2011-02-09T09:33:03.997 | 8 | 1307 | null |
7011 | 2 | null | 3531 | 4 | null | WinBUGS has support for RJMCMC with an addon. I've used it for GLMs, including those with an intrinsic CAR component. Not R, obviously, but through R2WinBUGS you can patch them together.
| null | CC BY-SA 2.5 | null | 2011-02-09T04:39:53.350 | 2011-02-09T04:39:53.350 | null | null | 26 | null |
7012 | 2 | null | 6990 | 1 | null | This post explains it quite well: [http://blog.gribblelab.org/2009/03/09/repeated-measures-anova-using-r/](http://blog.gribblelab.org/2009/03/09/repeated-measures-anova-using-r/)
| null | CC BY-SA 2.5 | null | 2011-02-09T06:40:13.363 | 2011-02-09T06:40:13.363 | null | null | 582 | null |
7013 | 2 | null | 841 | 4 | null | The difficulty clearly comes because $X$ and $Y$ are corellated (I assume $(X,Y)$ is jointly gaussian, as Aniko) and you can't make a difference (as in @svadali's answer) or a ratio (as in Standard Fisher-Snedecor "F-test") because those would be of dependent $\chi^2$ distribution, and because you don't know what this dependence is which make it difficult to derive the distribution under $H_0$.
My answer relies on Equation (1) below. Because the difference in variance can be factorized with a difference in eigenvalues and a difference in rotation angle the test of equality can be declined into two tests. I show that it is possible to use the Fisher-Snedecor Test together with a test on the slope such as the one suggested by @shabbychef because of a simple property of 2D gaussian vectors.
Fisher-Snedecor Test:
If, for $i=1,2$, $(Z^i_{1},\dots,Z^i_{n_i})$ are iid gaussian random variables with unbiased empirical variance $\hat{\lambda}^2_i$ and true variance $\lambda^2_i$, then it is possible to test whether $\lambda_1=\lambda_2$ using the fact that, under the null, $$R=\frac{\hat{\lambda}_1^2}{\hat{\lambda}_2^2}$$ follows a [Fisher-Snedecor distribution](http://en.wikipedia.org/wiki/F-distribution) $F(n_1-1,n_2-1)$.
A simple property of 2D gaussian vectors
Let us denote by
$$R(\theta) = \begin{bmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \\ \end{bmatrix}
$$
It is clear that there exist $\lambda_1,\lambda_2>0$ and $\epsilon_1$, $\epsilon_2$, two independent gaussians $\epsilon_i \sim \mathcal{N}(0,\lambda_i^2)$, such that
$$\begin{bmatrix} X \\ Y \end{bmatrix} = R(\theta)\begin{bmatrix} \epsilon_1 \\ \epsilon_2 \end{bmatrix}
$$
and that we have
$$Var(X)-Var(Y)=(\lambda_1^2-\lambda_2^2)(\cos^2 \theta -\sin^2 \theta) \;\; [1]$$
Testing $Var(X)=Var(Y)$ can be done by testing whether ($\lambda_1^2=\lambda_2^2$ or $\theta=\pi/4 \;\mathrm{mod}\; \pi/2$).
Conclusion (Answer to the question)
Testing for $\lambda_1^2=\lambda_2^2$ is easily done by using PCA (to decorrelate) and the Fisher-Snedecor test. Testing $\theta=\pi/4 \;\mathrm{mod}\; \pi/2$ is done by testing whether $|\beta_1|=1$ in the linear regression $Y=\beta_1 X+\sigma\epsilon$ (I assume $Y$ and $X$ are centered).
Testing whether $\left( \lambda_1^2=\lambda_2^2 \text{ or } \theta=\pi/4 \;\mathrm{mod}\; \pi/2 \right)$ at level $\alpha$ is done by testing if $\lambda_1^2=\lambda_2^2$ at level $\alpha/3$ or if $|\beta_1|=1$ at level $\alpha/3$.
| null | CC BY-SA 2.5 | null | 2011-02-09T08:45:33.313 | 2011-03-13T20:31:13.040 | 2011-03-13T20:31:13.040 | 223 | 223 | null |
7015 | 1 | 7017 | null | 2 | 5430 |
- Is linear regression only suitable for variables with a normal
distribution?
- If so, is there an alternative nonparametric test to test mediation or moderation?
| Normal distribution necessary to assess moderating and mediating effects? | CC BY-SA 2.5 | null | 2011-02-09T11:01:45.140 | 2011-02-09T20:01:38.423 | 2011-02-09T11:10:17.940 | 183 | null | [
"distributions",
"mediation",
"interaction",
"multiple-regression"
]
|
7017 | 2 | null | 7015 | 6 | null | For those not familiar with the language, moderation and mediation were both discussed in Baron and Kenny's influential article ([free pdf](http://www.public.asu.edu/~davidpm/classes/psy536/Baron.pdf)).
### Mediation
With regards to mediation, bootstrapping is often used where normality does not seem like a reasonable assumption.
- For SPSS and perhaps other implementations, check out the macros on the website of Andrew Hayes
- For R have a look at the mediation package
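As a rough sketch of the bootstrap idea for the indirect effect (assuming a hypothetical data frame `dat` with predictor `x`, mediator `m`, and outcome `y`; the tools above give more polished implementations):
```
library(boot)
# indirect effect a*b: a is the x -> m path, b is the m -> y path controlling for x
indirect <- function(d, i) {
  d <- d[i, ]
  a <- coef(lm(m ~ x, data = d))["x"]
  b <- coef(lm(y ~ m + x, data = d))["m"]
  unname(a * b)
}
set.seed(123)
bt <- boot(dat, indirect, R = 2000)
boot.ci(bt, type = "perc")   # percentile bootstrap CI for the indirect effect
```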
### Moderation
With regard to moderator regression, you could also explore bootstrapping options if you were concerned with the accuracy of the p-values you were obtaining.
| null | CC BY-SA 2.5 | null | 2011-02-09T12:17:35.667 | 2011-02-09T12:17:35.667 | null | null | 183 | null |
7019 | 1 | null | null | 1 | 435 | I have a data set with a range of 0 to 65,000. The vast majority of data points (it is a huge sample) are concentrated between 0 and 1,000. There is only one point at 65,000. I want to plot this using a semi-logarithmic plot. However, I would like the graph to have around 50 points. If I use scales like 2, 4, 6, 8, 16, 32, 64, 128, 256, etc., I have only a couple of points. But if I use a multiple of 1.1 or 1.5, etc., the scales are not integers. Is there a standard way to get more concentrated intervals at the lower end of the scale?
Cheers,
s
| Log-scale with concentrated data using integers | CC BY-SA 2.5 | null | 2011-02-09T13:16:41.170 | 2011-04-11T01:13:15.487 | null | null | 2405 | [
"scales",
"logarithm"
]
|
7020 | 1 | 7021 | null | 9 | 3625 | Suppose I will be getting some samples from a binomial distribution. One way to model my prior knowledge is with a Beta distribution with parameters $\alpha$ and $\beta$. As I understand it, this is equivalent to having seen "heads" $\alpha$ times in $\alpha + \beta$ trials. As such, a nice shortcut to doing the full-blown Bayesian inference is to use $\frac{h+\alpha}{n+\alpha+\beta}$ as my new mean for the probability of "heads" after having seen $h$ heads in $n$ trials.
Now suppose I have more than two states, so I will be getting some samples from a multinomial distribution. Suppose I want to use a Dirichlet distribution with parameter $\alpha$ as a prior. Again as a shortcut I can treat this as prior knowledge of event $i$'s probability as being equivalent to $\frac{\alpha_i}{\sum \alpha_j}$, and if I witness event $i$ $h$ times in $n$ trials my posterior for $i$ becomes $\frac{h + \alpha_i}{n + \sum \alpha_j}$.
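For concreteness, a tiny numerical sketch of that shortcut with made-up pseudo-counts:
```
alpha <- c(2, 3, 5)                   # prior pseudo-counts for a 3-outcome multinomial
h     <- c(10, 0, 4)                  # observed counts in n = 14 trials
(h + alpha) / (sum(h) + sum(alpha))   # posterior means; they sum to 1
```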
Now in the binomial case, it works out that prior knowledge of "heads" occurring $\alpha$ times in $\alpha + \beta$ trials is equivalent to "tails" occurring $\beta$ times in $\alpha + \beta$ trials. Logically I don't believe I can have stronger knowledge of "heads" likelihood than of "tails." This gets more interesting with more than two outcomes though. If I have say a 6-sided die, I can imagine my prior knowledge of side "1" being equivalent to 10 ones in 50 trials and my prior knowledge of side "2" as being equivalent to 15 twos in 100 trials.
So after all of that introduction, my question is how I can properly model such asymmetric prior knowledge in the multinomial case? It seems as though if I'm not careful I can easily get illogical results due to total probability/likelihood not summing to 1. Is there some way I can still use the Dirichlet shortcut, or do I need to sacrifice this altogether and use some other prior distribution entirely?
Please forgive any confusion caused by potential abuses in notation or terminology above.
| Bayesian inference for multinomial distribution with asymmetric prior knowledge? | CC BY-SA 2.5 | null | 2011-02-09T14:22:32.753 | 2011-02-09T23:27:35.783 | 2011-02-09T23:27:35.783 | 2485 | 2485 | [
"probability",
"bayesian",
"prior",
"multinomial-distribution",
"dirichlet-distribution"
]
|
7021 | 2 | null | 7020 | 3 | null | You have framed your question very well.
I think what you are looking for here is a case of hierarchical modeling, and you may want to model multiple layers of hierarchy (at the moment you only talk about priors). Having another layer of hyper-priors for the hyper-parameters lets you model the additional variability in the hyper-parameters (as you are concerned about their variability). It also makes your modeling more flexible and robust (though possibly slower).
Specifically in your case, you may benefit from having priors for the Dirichlet distribution parameters (the Beta is a special case). This [post](http://www.stat.columbia.edu/~cook/movabletype/archives/2009/04/conjugate_prior.html) by Gelman talks about how to impose priors on the parameters of the Dirichlet distribution. He also cites one of his papers in a journal of toxicology.
| null | CC BY-SA 2.5 | null | 2011-02-09T14:51:50.390 | 2011-02-09T17:09:59.697 | 2011-02-09T17:09:59.697 | 1307 | 1307 | null |
7022 | 1 | null | null | 5 | 3882 | Given a set of data (~5000 values) I'd like to draw random samples from the same distribution as the original data. The problem is there is no way to know for sure what distribution the original data comes from.
It makes sense to use a normal distribution in my case, although I'd like to be able to motivate that decision, and of course I also need to estimate the $(\mu,\sigma)$ pair.
Any idea on how to accomplish this, preferably within Java environment. I have been using [Apache Commons Math](http://commons.apache.org/math/apidocs/org/apache/commons/math/distribution/NormalDistributionImpl.html) and recently stumbled upon [Colt](http://acs.lbl.gov/software/colt/) library. I was hoping to get it done without bothering with MATLAB and R.
| Parameter estimation for normal distribution in Java | CC BY-SA 2.5 | 0 | 2011-02-09T16:50:00.400 | 2012-01-19T13:04:51.117 | 2011-02-09T17:13:52.090 | null | 3014 | [
"estimation",
"normal-distribution",
"java"
]
|
7024 | 2 | null | 7019 | 2 | null | Can you just use a scale comprised of powers of (1+r) for some small r, and round to the nearest integer? For example, in R, with r = 0.25:
```
> x <- unique(round(1.25^(0:50)))
> x
[1] 1 2 3 4 5 6 7 9 12 15 18 23 28 36 44 56 69 87 108 136 169 212 265 331 414
[26] 517 646 808 1010 1262 1578 1972 2465 3081 3852 4815 6019 7523 9404 11755 14694 18367 22959 28699 35873 44842 56052 70065
```
| null | CC BY-SA 2.5 | null | 2011-02-09T17:35:34.287 | 2011-02-09T17:35:34.287 | null | null | 2425 | null |
7026 | 2 | null | 7007 | 1 | null | There are all kinds of analyses you can do. It really depends on what you want to know.
Let me show you, with the example of gender: if you want to know whether there are gender differences, you can apply a variance test (univariate or multivariate) on the Likert scales with the demographic variable as the independent factor.
If you just want to see how each scale is distributed within each gender, then just split the dataset into two.
If you want to know whether there is some correlation between the demographic variables, calculate a correlation matrix.
The gist is: You can do anything, but that won't help you and you'll just get lost in the data, if you don't really know what you want to learn. So ask yourself:
- What is my goal?
- What do I need to know to reach my goal?
Then ask again with more specifics.
| null | CC BY-SA 2.5 | null | 2011-02-09T19:12:23.193 | 2011-02-09T19:12:23.193 | null | null | 1435 | null |
7029 | 1 | 7039 | null | 26 | 1295 | I ran across this density the other day. Has someone given this a name?
$f(x) = \log(1 + x^{-2}) / 2\pi$
The density is infinite at the origin and it also has fat tails. I saw it used as a prior distribution in a context where many observations were expected to be small, though large values were expected as well.
| Does the distribution $\log(1 + x^{-2}) / 2\pi$ have a name? | CC BY-SA 2.5 | null | 2011-02-09T19:34:28.307 | 2011-02-12T05:07:24.860 | 2011-02-12T05:07:24.860 | 183 | 319 | [
"distributions",
"probability"
]
|
7030 | 2 | null | 7022 | 4 | null | How big are the samples that you need? If substantially smaller than the 5000 points you have, say maximum 100 points or so, you could just take a random subset of your sample. Then you don't even need to assume normality - it's guaranteed to come from the distribution you want!
Otherwise, it seems that the `org.apache.commons.math.stat.descriptive.moment` package has a [Mean](http://commons.apache.org/math/apidocs/org/apache/commons/math/stat/descriptive/moment/Mean.html) and [StandardDeviation](http://commons.apache.org/math/apidocs/org/apache/commons/math/stat/descriptive/moment/StandardDeviation.html) class which use the correct formulas. These should give you $\mu$ and $\sigma$, respectively.
| null | CC BY-SA 2.5 | null | 2011-02-09T19:37:08.393 | 2011-02-09T19:37:08.393 | null | null | 2898 | null |
7031 | 2 | null | 7015 | 2 | null | Many of us use linear regression in rough-and-ready fashion to learn about the relative importance of predictors, to assess the shape of relationships, and so on. But if one wants to make strict probabilistic inferences one needs to satisfy the set of standard assumptions entailed in such regression. The most important of these are random sampling and independent observations. Another one is that the regression residuals (not any particular X or Y variable) should be normally distributed. Without satisfying all of the standard assumptions, one cannot vouch for the accuracy of standard errors or p-values.
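In R, the corresponding diagnostic checks are made on the fitted model's residuals, e.g. (a sketch with a hypothetical model and data frame):
```
fit <- lm(y ~ x1 + x2, data = dat)       # some fitted regression
qqnorm(resid(fit)); qqline(resid(fit))   # normality of the residuals, not of X or Y
plot(fitted(fit), resid(fit))            # rough check of constant variance
```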
| null | CC BY-SA 2.5 | null | 2011-02-09T20:01:38.423 | 2011-02-09T20:01:38.423 | null | null | 2669 | null |
7032 | 1 | 7043 | null | 6 | 2040 | Why is it that "missing data" and "outliers" can affect the performance of least square estimation?
| Effect of missing data and outliers on least square estimation | CC BY-SA 2.5 | null | 2011-02-09T20:20:17.480 | 2012-07-24T10:41:12.550 | 2011-02-12T05:03:00.997 | 183 | 3125 | [
"regression",
"estimation",
"least-squares"
]
|
7033 | 2 | null | 7029 | 5 | null | Perhaps not.
I could not find it in this fairly extensive list of distributions:
[Leemis and McQuestion 2008 Univariate Distribution Relationships. American Statistician 62(1) 45:53](http://www.math.wm.edu/~leemis/2008amstat.pdf)
| null | CC BY-SA 2.5 | null | 2011-02-09T23:08:02.450 | 2011-02-10T20:59:20.080 | 2011-02-10T20:59:20.080 | 449 | 2750 | null |
7034 | 2 | null | 7009 | 0 | null | The fact that the sample mean is an unbiased estimator is obtained by combining these two facts:
1. The sample mean is an unbiased estimator of the population mean
2. The population mean is equal to theta
| null | CC BY-SA 2.5 | null | 2011-02-09T23:10:24.540 | 2011-02-09T23:10:24.540 | null | null | null | null |
7036 | 1 | 7041 | null | 6 | 4618 | This question may have been asked before, but I couldn't find it. So, here goes.
From about 3000 data points that can be characterized as "wins" or "losses" (binomial), it turns out that there are 52.8% dumb luck wins. This is my dependent variable.
I also have some additional data that may help in predicting the above wins, which could be considered an independent variable.
The question I'm trying to answer is:
If my independent variable can be used to predict 55% wins, how many trials are required (giving me 55% wins) for me to be 99% sure that this wasn't dumb luck?
The following R code is purposely hacky so I can see everything that is happening.
```
#Run through a set of trial sizes
numtri <- seq(2720, 2840, 1)
#For a 52.8% probability of dumb luck wins, and the current trial size,
#calculate the number of wins at the 99% limit
numwin <- qbinom(p=0.99, size=numtri, prob=0.528)
#Divide the number of wins at the 99% limit by the trial size to
#get the percent wins.
perwin <- numwin/numtri
#Plot the percent wins versus the trial size, looking for the
#"predicted" 55% value. See the plot below.
plot(numtri, perwin, type="b", main="Looking for 0.55")
grid()
#Draw a red line at the 55% level to show which trial sizes are correct
abline(h=0.55, lwd=2, col="red")
#More than one answer? Why?........Integer issues
head(numtri)
head(numwin)
head(perwin)
```
From the graph, the answer is: 2740 <= numtri <= 2820
As you can guess, I'm also looking for the required number of trials for 56% wins, 57% wins, 58% wins, etc. So, I'll be automating the process.
Back to my question. Does anyone see a problem with the code, and if not, does anyone have a way to cleanly sneak up on the right and left edges of the "answer"?
[Figure: percent wins vs. trial size ("Looking for 0.55"), with the 55% level marked by a red horizontal line]
Edit (02/13/2011) ==================================================
Per whuber's answer below, I now realize that to compare my alternate 55% wins (which came from the data) with my 52.8% "dumb luck" null (which also came from the data), I have to deal with the fuzz from both measured values. In other words, to be "99% sure" (whatever that means), the 1% tail of BOTH proportions needs to be compared.
For me to get comfortable with whuber's formula for N, I had to reproduce his algebra. In addition, because I know that I'll use this framework for more complicated problems, I bootstrapped the calculations.
Below are some of the results. The upper left graph was produced during the bootstrap process where N (the number of trials) was at 9000. As you can see, the 1% tail for the null extends further into the alternate than the 1% for the alternate. The conclusion? 9000 trials is not enough. The lower left graph is also during the bootstrap process where N was at 13,000. In this case, the 1% tail for the null falls into an area of the alternate that is less than the required 1% value. The conclusion? 13,000 trials is more than enough (too many) for the 1% level.
The upper right graph is where N=11109 (whuber's calculated value) and the 1% tail of the null extends the right amount into the alternate, aligning the 1% tails. The conclusion? 11109 trials are required for the 1% significance level (I am now comfortable with whuber's formula).
The lower right graph is where I used whuber's formula for N and varied the alternate q (at both the 1% and 5% significance levels) so I could have a reference of what might be an "acceptable zone" for alternates. In addition, I overlaid some actual alternates versus their associated number of data points (the blue dots). One point falls well below the 5% level, and even though it might provide 72% "wins", there simply aren't enough data points to differentiate it from 52.8% "dumb luck". So, I'll probably drop that point and the two others below 5% (and the independent variables that were used to select those points).
[Figure: four panels comparing the null (52.8%) and alternate distributions for N = 9000, N = 13000, N = 11109, and for varying alternates at the 1% and 5% significance levels]
| Number of trials required from a binomial distribution to get the desired odds | CC BY-SA 2.5 | null | 2011-02-10T01:24:08.580 | 2011-02-13T20:36:13.740 | 2011-02-13T20:36:13.740 | 2775 | 2775 | [
"r",
"binomial-distribution",
"p-value"
]
|
7037 | 2 | null | 7032 | 4 | null | If you're using R, try the following example.
```
library(tcltk)
demo(tkcanvas)
```
Move the dots around to create all of the outliers you want. The regression will keep up with you.
| null | CC BY-SA 2.5 | null | 2011-02-10T02:16:59.203 | 2011-02-10T02:16:59.203 | null | null | 2775 | null |
7038 | 2 | null | 7036 | 4 | null | ```
numtri[c(min(which(perwin <= 0.55)),max(which(perwin >= 0.55)))]
```
| null | CC BY-SA 2.5 | null | 2011-02-10T02:20:51.027 | 2011-02-10T02:20:51.027 | null | null | 159 | null |
7039 | 2 | null | 7029 | 15 | null | Indeed, even the first moment does not exist. The CDF of this distribution is given by
$$F(x) = 1/2 + \left(\arctan(x) - x \log(\sin(\arctan(x)))\right)/\pi$$
for $x \ge 0$ and, by symmetry, $F(x) = 1 - F(|x|)$ for $x \lt 0$. Neither this nor any of the obvious transforms look familiar to me. (The fact that we can obtain a closed form for the CDF in terms of elementary functions already severely limits the possibilities, but the somewhat obscure and complicated nature of this closed form quickly rules out standard distributions or power/log/exponential/trig transformations of them. The arctangent is, of course, the CDF of a Cauchy (Student $t_1$) distribution, exhibiting this CDF as a (substantially) perturbed version of the Cauchy distribution, shown as red dashes.)
[Figure: the CDF of the distribution, with the Cauchy CDF shown as red dashes]
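(One can verify this closed form numerically against the density — a quick sketch:)
```
f   <- function(x) log(1 + x^-2) / (2 * pi)                       # the density
cdf <- function(x) 1/2 + (atan(x) - x * log(sin(atan(x)))) / pi   # closed form, x >= 0
1/2 + integrate(f, 0, 3)$value   # numerical CDF at x = 3, using symmetry about 0
cdf(3)                           # agrees with the closed form
```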
| null | CC BY-SA 2.5 | null | 2011-02-10T02:41:06.410 | 2011-02-10T02:41:06.410 | null | null | 919 | null |
7040 | 1 | 7042 | null | 4 | 14543 | In meta-analysis, how do you interpret Egger's linear regression method intercept (B0) of 10.34631, with 95% confidence interval (1.05905, 19.63357), t=3.54535, df=3? The 1-tailed p-value (recommended) is 0.01911, and the 2-tailed p-value is 0.03822. I am a medical doctor.
*Updated*
The data are comparison of Regions 1 and 2
```
        Region 1        Region 2        Region 3
        Cases   Dead    Cases   Dead    Cases   Dead    Total cases   Total dead cases
2006     2320    528     1484    108       73      3           3877                639
2007     3024    645     1592     75       32      1           4648                721
2008     3012    537     1920     53        3      0           4935                590
2009     3073    556     1477     40      246      8           4796                604
2010     3540    494     1460     26      138      1           5138                521
Total   14969   2760     7933    302      492     13          23394               3075
```
| Egger’s linear regression method intercept in meta analysis | CC BY-SA 2.5 | null | 2011-02-10T05:59:08.420 | 2011-02-10T15:35:02.927 | 2011-02-10T15:35:02.927 | 2956 | 2956 | [
"meta-analysis",
"funnel-plot",
"publication-bias"
]
|
7041 | 2 | null | 7036 | 6 | null | I wonder what it means to be "99% sure."
The code seems to equate "dumb luck" with $p$ = 52.8% probability of wins. Let's imagine conducting $N$ trials, during which we observe $k$ wins. Suppose, for instance, $N$ = 1000 and you observe $k$ = 530 wins. That's greater than the expected number $p N$ = 528, but it's so close that a sceptic would argue it could be due to chance: "dumb luck." So, you need a test. The best test uses a threshold $t$ (which depends, obviously, on $N$ and $p$ and will be a bit greater than $N p$). The chance--still assuming $p$ = 52.8%--of having more than $t$ wins needs to be so low that if you do observe more than $t$ wins, both you and a reasonable sceptic will allow that $p$ is probably greater than 52.8%. This chance is usually called $\alpha$, the "false positive rate." Your first task is to set this threshold. How will you do it? That is, what value of $\alpha$ do you want to use? Does "99% sure" mean that $1 - \alpha$ should equal 99%?
The second task is to consider your chances of beating the threshold if in fact $p$ equals 55% (or whatever you choose). Does "99% sure" mean you want to have a 99% chance of having more than $t$ wins in $N$ trials when $p$ = 55%?
So, it seems to me there are two chances involved here: one in determining the test to distinguish "dumb luck" and another one that will subsequently determine how many trials you need. If this analysis correctly reflects the situation, the problem is easily solved. Note a few things:
- There are plenty of uncertainties about this procedure and some of them are not captured by the statistical model. Therefore we shouldn't aim for too much precision. There's really no difference between, say, 2742 and 2749 trials.
- In case it turns out hundreds or thousands of trials are needed, the (usual) Normal approximation to the Binomial distribution will be highly accurate. Point #1 justifies using an approximation.
- Consequently, there is a simple formula to compute the number of trials directly: no need to explore hundreds of possibilities!
The formula alluded to in #3 is this: in $N$ trials with probability $p$ of winning, the standard error of the proportion of wins is
$$s = \sqrt{p(1-p)/N}.$$
The threshold therefore should be
$$t(N) = N(p + z_\alpha s)$$
where $z_\alpha$ is the upper $100 - 100\alpha$ percentile of the standard normal distribution. If the true probability of a win actually is $q$ (such as $q$ = 55%), the chance of exceeding $t(N)$ in $N$ trials equals
$$\Phi\left(\frac{(q-t(N)/N)\sqrt{N}}{\sqrt{q(1-q)}}\right)
= \Phi\left(\frac{(q-p)\sqrt{N} - z_\alpha \sqrt{p(1-p)}}{\sqrt{q(1-q)}}\right)
$$
where $\Phi$ denotes the standard normal cumulative distribution function. Equating this with your desired degree of certainty (such as $\beta$ = 99%) and solving this equation for $N$ (rounding up to the nearest integer) gives the solution
$$N = \left(\frac{z_\alpha \sqrt{p(1-p)} + z_\beta \sqrt{q(1-q)}}{q-p}\right)^2$$
(assuming $q \gt p$).
---
### Example
For $p$ = 52.8%, $q$ = 55%, $\alpha$ = 1%, and $\beta$ = 99%, which give $z_\alpha$ = $z_\beta$ = 2.32635, I obtain $N$ = 11109 with this formula. In 11109 trials you would expect to win about 5866 times by "dumb luck" and maybe as many as $t$ = 5988 times due to chance variation. If the true winning percentage is 55%, you expect to win about 6110 times, but there's a 1% chance you will win 5988 or fewer times, and thereby (due to "bad luck") not succeed in showing (with 99% confidence) that "dumb luck" isn't operating.
### R code for the example
```
#
# Specify inputs.
#
p <- 52.8/100 # Null hypothesis
q <- 55/100 # Alternative
alpha <- 1/100 # Test size
beta <- 99/100 # Chance of confirming the alternative
#
# Compute number of trials.
#
s.p <- sqrt(p*(1-p))
s.q <- sqrt(q*(1-q))
z.alpha <- qnorm(1 - alpha)
z.beta <- qnorm(beta)
delta <- q - p
N <- ceiling(((z.alpha * s.p + z.beta * s.q)/delta)^2)
#
# Explain.
#
t <- floor(N*(p + z.alpha*s.p)) # Threshold
m <- round(N*p) # Expectation (null)
k <- round(N*q) # Expectation (alternate)
size <- pnorm((t + 1/2 - N*p)/(s.p * sqrt(N))) # Actual test size
sureness <- 1 - pnorm((N*q - (t + 1/2))/(s.q * sqrt(N)))
print(list(N=N, threshold=t, expectation=m, alternate=k,
size=size, sureness=sureness))
```
| null | CC BY-SA 2.5 | null | 2011-02-10T07:24:36.527 | 2011-02-10T14:25:46.007 | 2011-02-10T14:25:46.007 | 919 | 919 | null |
7042 | 2 | null | 7040 | 3 | null | I suppose you are not interested in a "hardcore" statistical explanation. So: the more the intercept deviates from zero, the more pronounced the asymmetry. If the p-value of the intercept is 0.1 or smaller, the asymmetry is considered statistically significant. More [here](http://goo.gl/PlNhR).
| null | CC BY-SA 2.5 | null | 2011-02-10T08:09:11.320 | 2011-02-10T08:09:11.320 | null | null | 609 | null |
7043 | 2 | null | 7032 | 6 | null | I'm not sure about the "missing data", but I can give an answer on "outliers"
This is basically due to the "unbounded" influence that a single observation can have in least squares (or at least in conventional least squares). A very, very simple example of least squares should show this. Suppose you only estimate an intercept $\mu$ using data $Y_i \ (i=1,\dots,n)$. The least square equation is
$$\sum_{i=1}^{n} (Y_i-\mu)^2$$
Which is minimised by choosing $\hat{\mu}=n^{-1}\sum_{i=1}^{n} Y_i=\overline{Y}$. Now suppose I add one extra observation to the sample, equal to $X$, how will the estimate change? The new best estimate using $n+1$ observations is just
$$\hat{\mu}_{+1}=\frac{X+\sum_{i=1}^{n} Y_i}{n+1}$$
Rearranging terms gives
$$\hat{\mu}_{+1}=\hat{\mu}+\frac{1}{n+1}(X-\hat{\mu})$$
Now for a given sample $\hat{\mu}$ and $n$ are fixed. So I can essentially "choose" $X$ to get any new average that I want!
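A two-line illustration in R (the numbers are arbitrary):
```
set.seed(42)
y <- rnorm(20, mean = 5)
mean(y)                                        # estimate from the original sample
mean(c(y, 1000))                               # one extreme point drags it far away
mean(y) + (1000 - mean(y)) / (length(y) + 1)   # matches the formula above
```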
Using the same argument, you can show that deleting the $j$th observation has a similar effect:
$$\hat{\mu}_{-j}=\hat{\mu}+\frac{-1}{n-1}(Y_{j}-\hat{\mu})$$
And similarly (a bit tediously), you can show that removing $M$ observations gives:
$$\hat{\mu}_{-M}=\hat{\mu}+\frac{-M}{n-M}(\overline{Y}_{M}-\hat{\mu})$$
Where $\overline{Y}_{M}$ is the average of the observations that you removed.
The same kind of thing happens in general least squares, the estimate "chases" the outliers. If you are worried about this, then "least absolute deviations" may be a better way to go (but this can be less efficient if you don't have any outliers).
Influence functions are a good way to study this stuff (outliers and robustness). For example, you can get an approximate change in the variance $s^2=n^{-1}\sum_{i=1}^{n}(Y_i-\overline{Y})^2$ as:
$$s^2_{-j} = s^2 +\frac{-1}{n-1}((Y_j-\overline{Y})^2-s^2) + O(n^{-2})$$
| null | CC BY-SA 2.5 | null | 2011-02-10T08:13:06.480 | 2011-02-10T08:13:06.480 | null | null | 2392 | null |
7045 | 1 | 10475 | null | 4 | 4318 | Is there a package or library that can suggest a formula, given the independent variables, that will work well in a glm? For example, the formula could be something like x^2+log(y)+Z; it does not necessarily need to be the standard linear model x+y+z in order to explain a variable.
| How to obtain in R a good formula for glm (generalized linear models) to predict a binomial variable? | CC BY-SA 2.5 | null | 2011-02-10T11:40:35.520 | 2012-07-31T19:27:54.183 | 2011-02-10T13:59:01.247 | 919 | 1808 | [
"r",
"logistic",
"generalized-linear-model"
]
|
7046 | 2 | null | 7040 | 4 | null | @Andrej has already given an answer, i.e there is evidence of funnel plot asymmetry.
@DrWho, I would be interested in the reference that suggests using a one-tailed test.
The following can give you an idea of the underlying logic of applying this regression model to test for publication bias:
Most of these regression approaches are using the so-called standard normal deviate (SND) which is defined as effect size divided by its standard error ($ES_i / SE_i$). The inverse standard error (“precision”) serves as predictor variable. Then, an unweighted OLS regression is estimated. When there is no evidence of funnel plot asymmetry, the intercept should not significantly differ from zero, i.e. $H_0: b_0 = 0$. In other words, the intercepts provide a measure of funnel plot asymmetry (Sterne/Egger 2005: 101). This is due to two reasons:
(1) Since the standard error depends on sample size, the inverse standard error for small studies will be close to zero.
(2) Even though small studies may produce large effect sizes, the SND will be small since the standard error will be large. Again, for small studies, the SND will be close to zero. For large studies, however, we will observe large SNDs and the inverse standard errors will also be large (Egger et al 1997: 629f.).
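A bare-bones version of this regression in R could look as follows (a sketch assuming vectors `es` of effect sizes and `se` of their standard errors, one entry per study; the `metafor` package also offers a ready-made `regtest()` function):
```
snd  <- es / se         # standard normal deviate
prec <- 1 / se          # precision
fit  <- lm(snd ~ prec)  # unweighted OLS, as described above
summary(fit)$coefficients["(Intercept)", ]   # intercept far from 0 suggests asymmetry
```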
Egger, M., G. Davey Smith, M. Schneider, C. Minder (1997), Bias in meta-analysis detected by a simple, graphical test British Medical Journal 315: 629-634.
Sterne, J.A.C., B.J. Becker, M. Egger (2005), The funnel plot. S. 75-98 in: H.R. Rothstein, A.J. Sutton, M. Borenstein (Hrsg.), Publication Bias in Meta-Analysis. Prevention, Assessment and Adjustments, The Atrium, Southern Gate, Chichester: John Wiley & Sons, Ltd.
| null | CC BY-SA 2.5 | null | 2011-02-10T12:11:08.863 | 2011-02-10T12:11:08.863 | null | null | 307 | null |
7047 | 1 | 7067 | null | 2 | 1157 | Hey guys. I got two images from video frames. They have a certain portion of overlap. After warping one of them, I'm currently trying to blend them together. In other words, I would like to stitch them together. But I don't know how to accomplish that. Can anybody please give me some help? Thank you!
Let's say the image data is stored in 'image1_warped' and 'image2'. I'd appreciate your help!
| Matlab image blending | CC BY-SA 2.5 | null | 2011-02-10T12:18:06.050 | 2011-02-10T21:15:49.847 | null | null | 3133 | [
"matlab",
"image-processing"
]
|
7048 | 1 | 7416 | null | 17 | 1385 | While preparing for a talk I will give soon, I recently started digging into two major (Free) tools for interactive data visualization: [GGobi](http://www.ggobi.org/) and [mondrian](http://rosuda.org/mondrian/) - both offer a great range of capabilities (even if they're a bit buggy).
I wish to ask for your help in articulating (both to myself and for my future audience) when it is helpful to use interactive plots, either for data exploration (for ourselves) or for data presentation (to a "client").
When explaining the data to a client, I can see the value of animation for:
- Using "identify/linking/brushing" for seeing which data point in the graph is what.
- Presenting a sensitivity analysis of the data (e.g: "if we remove this point, here is what we will get)
- Showing the effect of different groups in the data (e.g: "let's look at our graphs for males and now for the females")
- Showing the effect of time (or age, or in general, offering another dimension to the presentation)
When exploring the data ourselves, I can see the value of identify/linking/brushing when investigating an outlier in a dataset we are working on.
But other than these two examples, I am not sure what other practical use these techniques offer, especially for our own data exploration!
It could be argued that the interactive part is good for exploring (for example) a different behavior of different groups/clusters in the data. But when (in practice) I have approached such a situation, what I tended to do was to run the relevant statistical procedures (and post-hoc tests), and whatever I found to be significant I would then plot with colors clearly dividing the data into the relevant groups. From what I've seen, this is a safer approach than "wandering around" the data (which can easily lead to data dredging, where the scope of the multiple comparisons needing correction is not even clear).
I'd be very happy to read your experience/thoughts on this matter.
(this question can be a wiki - although it is not subjective and a well thought-out answer will gladly win my "answer" mark :) )
| When is interactive data visualization useful to use? | CC BY-SA 2.5 | null | 2011-02-10T14:49:20.220 | 2013-02-22T00:04:37.230 | 2011-02-20T14:30:18.040 | null | 253 | [
"data-visualization",
"data-mining",
"interactive-visualization"
]
|
7049 | 1 | 7076 | null | 13 | 15513 | Can the poisson distribution be used to analyze continuous data as well as discrete data?
I have a few data sets where response variables are continuous, but resemble a poisson distribution rather than a normal distribution. However, the poisson distribution is a discrete distribution and is usually concerned with numbers or counts.
| Using poisson regression for continuous data? | CC BY-SA 3.0 | null | 2011-02-10T14:59:26.717 | 2015-07-14T19:46:48.197 | 2015-07-14T19:46:48.197 | 34826 | 3136 | [
"distributions",
"regression",
"poisson-distribution",
"continuous-data"
]
|
7050 | 2 | null | 7048 | 8 | null | Dynamic linking of graphics is natural and effective for exploratory spatial data analysis, or [ESDA](http://www.ncgia.ucsb.edu/giscc/units/u128/u128_f.html). ESDA systems typically link one or more quantitative maps (such as [choropleth maps](http://en.wikipedia.org/wiki/Choropleth_map)) with tabular views and statistical graphics of the underlying data. Some such capabilities have been a part of a few desktop GIS systems for about 15 years, particularly [ArcView 3](http://en.wikipedia.org/wiki/ArcView_3.x) (a discontinued commercial product). The free [GeoDa](http://geodacenter.asu.edu/software/downloads) software provides some of these capabilities within an environment designed for spatial data exploration and statistical analysis. It's clunky, with an idiosyncratic interface and unpolished graphics, but fairly bug free.
This use of EDA circumvents the objection that statistical testing may be better than interactive exploration because in many (most?) situations there is no clear statistical model, there is no obvious (or even appropriate) statistical test, and hypothesis testing is often irrelevant: people need to see what occurs, where it occurs, and to observe the statistical relationships among variables in a spatial context. Not all data analysis is, or should even consist of, formal procedures!
| null | CC BY-SA 2.5 | null | 2011-02-10T15:13:17.287 | 2011-02-10T15:13:17.287 | null | null | 919 | null |
7051 | 2 | null | 7049 | 9 | null | If you're talking about using a Poisson response in a generalized linear model, then yes, if you are willing to make the assumption that the variance of each observation is equal to its mean.
If you don't want to do that, another alternative may be to transform the response (e.g. take logs).
| null | CC BY-SA 2.5 | null | 2011-02-10T15:15:28.673 | 2011-02-10T15:15:28.673 | null | null | 495 | null |
7052 | 2 | null | 6225 | 2 | null | Technically, no, a null hypothesis cannot be proven. For any fixed, finite sample size, there will always be some small but nonzero effect size for which your statistical test has virtually no power. More practically, though, you can prove that you're within some small epsilon of the null hypothesis, such that deviations less than this epsilon are not practically significant.
| null | CC BY-SA 2.5 | null | 2011-02-10T15:29:44.943 | 2011-02-10T15:29:44.943 | null | null | 1347 | null |
7053 | 1 | 7064 | null | 6 | 540 | I’m reviewing an article, and can’t give details but here is the situation, and it’s got me puzzled
Patients were divided into 4 categories (call them A B C and D), which were exhaustive and exclusive. Adjusted hazard ratios were computed for these four groups for all patients and for two subgroups of patients (call them Lung and Heart). The two subgroups were also exhaustive and exclusive. For the full sample, the hazard ratios for B C and D (compared to A) were 5.08, 4.39, and 1.81. For the lung group, they were 1.15, 1.16 and 1.07. For the heart group they were 2.55, 1.69 and 1.28. That is, they were lower for both subgroups than for the whole population. The lung group and heart group are not the same size; from the info provided, it’s not clear whether A B C and D are equally divided across heart and lung groups.
So …. This seems like Simpson’s paradox. But I’ve not seen that term used for survival analysis. I don’t see why it could NOT be so applied.
My feeling here is to suggest that heart and lung should ONLY be presented separately, but I’m not completely sure.
| Question about combining hazard ratios - Maybe Simpson's paradox? | CC BY-SA 3.0 | null | 2011-02-10T15:35:57.853 | 2017-03-06T18:43:56.290 | 2017-03-06T18:43:56.290 | -1 | 686 | [
"survival",
"simpsons-paradox"
]
|
7054 | 1 | 7131 | null | 5 | 146 | Can we do a meta-analysis of data from 3 regions? A particular disease was treated with the same treatment, but the treatment was implemented thoroughly in 2 regions and not so thoroughly in 1 region. How should we proceed with the analysis? What is the best way to analyse these data? The data on the disease (cases and deaths) are as follows:
```
        Region 1        Region 2        Region 3
        Cases   Dead    Cases   Dead    Cases   Dead    Total cases   Total dead cases
2006     2320    528     1484    108       73      3           3877                639
2007     3024    645     1592     75       32      1           4648                721
2008     3012    537     1920     53        3      0           4935                590
2009     3073    556     1477     40      246      8           4796                604
2010     3540    494     1460     26      138      1           5138                521
Total   14969   2760     7933    302      492     13          23394               3075
```
Updated These are unpublished data.
| Meta-analysis of 3 regions' data for 5 years | CC BY-SA 2.5 | null | 2011-02-10T15:38:40.207 | 2011-02-12T16:39:30.410 | 2011-02-12T05:52:05.690 | 183 | 2956 | [
"meta-analysis",
"panel-data"
]
|
7056 | 1 | 7075 | null | 3 | 524 | readHTMLTable seems pretty robust, but when I try to use it on this page, I get an error.
Any ideas what it means or how I could get around it? The page's biggest table is nested inside another table... is that a problem?
```
> tables<-readHTMLTable(myURL, header=NA,a.data.frame=TRUE)
Error in htmlParse(doc) :
error in creating parser for http://www.ncbi.nlm.nih.gov/genomes/lproks.cgi
```
| Error creating parser in readHTMLTable | CC BY-SA 2.5 | null | 2011-02-10T16:11:00.267 | 2011-02-10T23:28:32.947 | 2011-02-10T17:53:56.070 | null | 1463 | [
"r",
"dataset"
]
|
7057 | 1 | null | null | 14 | 9313 | I would like to get the coefficients for the LASSO problem
$$||Y-X\beta||+\lambda ||\beta||_1.$$
The problem is that glmnet and lars functions give different answers. For the glmnet function I ask for the coefficients of $\lambda/||Y||$ instead of just $\lambda$, but I still get different answers.
Is this expected? What is the relationship between the lars $\lambda$ and glmnet $\lambda$? I understand that glmnet is faster for LASSO problems but I would like to know which method is more powerful?
---
@deps_stats: I am afraid that the size of my dataset is so large that LARS cannot handle it, whereas glmnet can handle my large dataset.
@mpiktas: I want to find the solution of $\|Y-X\beta\|^2+\lambda\sum_j|\beta_j|$,
but when I ask the two algorithms (lars & glmnet) for their calculated coefficients at that particular $\lambda$, I get different answers... and I am wondering, is that correct/expected? Or am I just using the wrong lambda for the two functions?
| GLMNET or LARS for computing LASSO solutions? | CC BY-SA 3.0 | null | 2011-02-10T16:23:48.187 | 2021-09-24T05:02:58.787 | 2016-08-08T02:43:39.467 | 805 | null | [
"r",
"machine-learning",
"regression",
"lasso",
"regularization"
]
|
7058 | 1 | 7169 | null | 11 | 428 | I have three features that I use to solve a classification problem. Originally, these features produced boolean values, so I could evaluate their redundancy by looking at how much the sets of positive and negative classifications overlap. Now I have extended the features to produce real values (scores) instead, and I would like to analyze their redundancy again, but I am at a complete loss on how to do that. Can anyone provide me with a pointer or idea on how to go about that?
I know this question is very vague, that is because I do not have a very strong grasp of statistics. So, if you do not have an answer for me, maybe you have some questions that can help me understand better myself.
Edit: I am currently browsing Wikipedia on the subject, I have the feeling that what I want is a correlation coefficient, but I am still unsure if this is the right approach, and which of the many available coefficients is appropriate.
Edit 2: In the boolean case, I first created for each feature the set of samples for which it was true. Then, the correlation between two features was the size of the intersection of these sets over the size of their union. If this value is 1, they are completely redundant, because they are always the same. If it is 0, they are never the same.
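In case it helps to make the question concrete, here is a minimal sketch of a correlation-based analogue for real-valued scores (the vectors `s1` and `s2` are made up; rank correlation is one reasonable, scale-free choice):
```
set.seed(7)
s1 <- rnorm(100)
s2 <- 0.8 * s1 + rnorm(100, sd = 0.5)   # two hypothetical feature score vectors
cor(s1, s2, method = "spearman")        # near 1: largely redundant; near 0: not
```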
| How to quantify redundancy of features? | CC BY-SA 2.5 | null | 2011-02-10T16:35:26.570 | 2011-02-13T22:04:12.390 | 2011-02-10T17:33:23.853 | 977 | 977 | [
"correlation",
"feature-selection"
]
|
7059 | 1 | null | null | 5 | 1320 | I am managing many people entering data into a database. I have a log of user, date, time, table, and action that each person makes:
```
records <- data.frame(user = c('bob', 'bob', 'jane', 'jane', 'bob', 'bob', 'bob', 'jane', 'jane', 'bob'),
date = c("2010-06-24", "2010-06-28", "2010-06-29", "2010-06-30", "2010-07-01", "2010-07-02", "2010-07-05", "2010-07-06", "2010-07-07", "2010-07-09"),
time = c("01:40:08", "01:40:18", "01:40:28", "01:40:37", "01:40:44", "01:40:52", "01:40:59", "01:56:26", "02:16:37", "03:55:06"),
table = c(rep('table1',5), rep('table2',5)),
action = c('create', 'create', 'create', 'update', 'create', 'update', 'update', 'create', 'create', 'create'))
```
For a non-trivial example, the actual `records` dataframe with 10,000 entries can be downloaded as an .Rdata file [here](http://dl.dropbox.com/u/18092793/records.Rdata), and then:
```
load('records.Rdata')
library(ggplot2)
qplot(date, table, data = records, color = user, geom='jitter')
```
How can I visualize, overall and for each table:
- the amount of time each person works per week?
- the type and number or frequency of actions that they made?
| How can I effectively summarize and visualize time series of employee activities? | CC BY-SA 2.5 | null | 2011-02-10T18:02:16.093 | 2011-02-12T21:34:48.187 | 2011-02-11T08:08:51.310 | 2116 | 1381 | [
"r",
"time-series",
"data-visualization"
]
|
7061 | 1 | 7078 | null | 3 | 500 | Given two basketball players.
John made 38/50 free throws.
Mike made 80/100 free throws.
What is probability that Mike is better at free throws than John?
| Binomial Probability Question | CC BY-SA 2.5 | null | 2011-02-10T19:10:16.860 | 2011-02-11T04:02:45.067 | null | null | 3143 | [
"probability",
"binomial-distribution"
]
|
7062 | 2 | null | 7057 | 1 | null | LASSO is non-unique in the case where multiple features have perfect collinearity. Here's a simple thought experiment to prove it.
Let's say you have three random vectors $y$, $x_1$, $x_2$. You're trying to predict $y$ from $x_1$ and $x_2$. Now assume $y = x_1 = x_2$. An optimal LASSO solution would be $\beta_1 = 1 - P$, $\beta_2 = 0$, where $P$ is the effect of the LASSO penalty. However, $\beta_1 = 0$, $\beta_2 = 1 - P$ would also be optimal.
| null | CC BY-SA 2.5 | null | 2011-02-10T19:19:24.763 | 2011-02-10T20:30:18.040 | 2011-02-10T20:30:18.040 | 1347 | 1347 | null |
7063 | 2 | null | 7053 | 2 | null | Yes. It is certainly possible that this is due to something like Simpson's paradox. If the data looked like
$$\begin{array}{rrrrrr}
\textit{Organ}&\textit{Outcome}&A&B&C&D\\
\textrm{Lung}&\textrm{Bad}&371&2727&2374&418\\
\textrm{Lung}&\textrm{Good}&556&3199&2740&558\\
\textrm{Heart}&\textrm{Bad}&214&245&195&273\\
\textrm{Heart}&\textrm{Good}&8859&3828&4691&8752\\
\end{array}$$
then I think you would get something like your hazard ratios (if that means ratios of bad outcomes/totals fractions). Many other patterns of numbers would too.
If you are reviewing the article, it seems reasonable to ask for the underlying numbers to be presented. If they look anything like mine, then it does seem a little strange to add Lung and Heart numbers without a good reason.
| null | CC BY-SA 2.5 | null | 2011-02-10T20:13:28.700 | 2011-02-10T20:13:28.700 | null | null | 2958 | null |
7064 | 2 | null | 7053 | 5 | null | Strictly, [Simpson's paradox](http://en.wikipedia.org/wiki/Simpson%27s_paradox) refers to a reversal in the direction of effect, which hasn't happened here as all the hazard ratios are above 1, so I'd refer to this by the more general term [confounding](http://en.wikipedia.org/wiki/Confounding). You can certainly have confounding in survival analysis. I agree it appears sensible to only present the heart and lung results separately.
| null | CC BY-SA 2.5 | null | 2011-02-10T20:17:34.793 | 2011-02-10T20:17:34.793 | null | null | 449 | null |
7066 | 2 | null | 7061 | 1 | null | I think what you want to do is compare the predictive distributions of both players. The predictive distribution describes the probability that Mike/John will make his next shot given the data (integrating out the parameters).
Here is some Matlab code you can play with:
```
clear;
clc;
rng = linspace(0, 1, 100);
% data
j = [38 50];
m = [80 100];
% priors (I assume uniform, but you can integrate external knowledge here)
a0 = 1;
b0 = 1;
% posterior distribution for each player
post_j = [a0 + j(1), b0 + j(2) - j(1)];
post_m = [a0 + m(1), b0 + m(2) - m(1)];
% visualize
postj = betapdf(rng, a0 + j(1), b0 + j(2) - j(1));
postm = betapdf(rng, a0 + m(1), b0 + m(2) - m(1));
figure(1); plot(rng, postj, 'r',...
rng, postm, 'k');
title('Posterior Distributions');
legend('Jon', 'Mike');
xlabel('Theta');
%% SAMPLING FROM PREDICTIVE
SAMPLES = 5000;
predj = zeros(SAMPLES, 1);
predm = zeros(SAMPLES, 1);
for i = 1:SAMPLES
pj = betarnd(post_j(1), post_j(2));
lj = binornd(1, pj);
predj(i) = lj;
pm = betarnd(post_m(1), post_m(2));
lm = binornd(1, pm);
predm(i) = lm;
end
% Comparison of (sampled) predictive distributions:
fprintf('P(d_john | Dj) > P(d_mike | Dm) = %.3f\n', sum(predj > predm) / SAMPLES);
fprintf('P(d_john | Dj) < P(d_mike | Dm) = %.3f\n', sum(predj < predm) / SAMPLES);
fprintf('P(d_john | Dj) = P(d_mike | Dm) = %.3f\n', sum(predj == predm) / SAMPLES);
pred_j = sum(predj) / SAMPLES;
pred_m = sum(predm) / SAMPLES;
fprintf('Probability that John will make his next shot: %.4f\n', pred_j);
fprintf('Probability that Mike will make his next shot: %.4f\n', pred_m);
```
| null | CC BY-SA 2.5 | null | 2011-02-10T21:11:36.403 | 2011-02-10T21:11:36.403 | null | null | 1913 | null |
7067 | 2 | null | 7047 | 3 | null | I don't know algorithms off the top of my head, but I would start by having a look at [Survey of image registration techniques](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.86.8364&rep=rep1&type=pdf)
| null | CC BY-SA 2.5 | null | 2011-02-10T21:15:49.847 | 2011-02-10T21:15:49.847 | null | null | 1913 | null |
7068 | 2 | null | 7059 | 2 | null | Below the code to plot the numbers of actions per week/per user:
```
load("records.Rdata")
library(ggplot2)
records$posdate <- as.POSIXlt(records$date,format="%Y-%m-%d")
records$week <- as.numeric(format(records$posdate,"%W")) #changed from previous hack!
numberActions <- by(records$action,records[,c("user","week")],function(x) length(x[x!="Login"]))
numberActions <- melt(t(numberActions[1:7,]),"week")
colnames(numberActions)[2] <- "user"
ggplot(numberActions,aes(week,value,group=user,col=user))+geom_line()+geom_point(size=3)
```
I hope this helps.
EDIT:
For the second part you can use `plyr`:
```
numberActions <- ddply(records,c("user","week"),function(x) table(x$action))
numberActions <- melt(numberActions,c("user","week"))
numberActions <- numberActions[numberActions$value!=0 & numberActions$variable!="Login",]
```
and then the usual `ggplot`...
| null | CC BY-SA 2.5 | null | 2011-02-10T21:27:30.477 | 2011-02-11T09:08:07.167 | 2011-02-11T09:08:07.167 | 1443 | 1443 | null |
7069 | 1 | null | null | 3 | 327 | Is it possible to correct for violating the assumption of independence for nonparametric tests?
I have a categorical independent variable and a categorical and binary dependent variable, and each subject was exposed to multiple levels of treatment. I have done some preliminary analysis with binary logistic regression and post-hoc chi-square tests, but I think these are inappropriate unless I can correct for my non-independent data. Info on this has been surprisingly difficult to find, and help is much appreciated.
| Is it possible to correct for violating the assumption of independence for nonparametric tests? | CC BY-SA 2.5 | null | 2011-02-10T21:33:27.913 | 2011-02-10T22:48:44.833 | null | null | null | [
"nonparametric",
"non-independent"
]
|
7070 | 1 | 7071 | null | 27 | 115599 | What is the best way of defining white noise process so it is intuitive and easy to understand?
| What is a white noise process? | CC BY-SA 3.0 | null | 2011-02-10T22:13:25.977 | 2016-06-03T08:02:12.637 | 2012-08-20T06:18:15.407 | 2116 | 333 | [
"time-series"
]
|
7071 | 2 | null | 7070 | 18 | null | A white noise process is one with a mean zero and no correlation between its values at different times. See the ['white random process' section of Wikipedia's article on white noise](http://en.wikipedia.org/wiki/White_noise#White_random_process_.28white_noise.29).
| null | CC BY-SA 2.5 | null | 2011-02-10T22:23:25.730 | 2011-02-10T22:23:25.730 | null | null | 449 | null |
7072 | 2 | null | 7069 | 2 | null | You may be having trouble finding info on this because you're looking for it under 'nonparametric tests'. This wouldn't usually be included under the (rather confusing) 'nonparametric' heading.
Logistic regression is sensible, but you'll need one of its extensions to deal with correlated outcomes, such as conditional (fixed-effects) logistic regression, random-effects logistic regression, or mixed-effects logistic regression. It's hard to say which is appropriate without knowing more about your data and the question(s) you wish to address.
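For instance, a random-intercept logistic model can be sketched like this (assuming a hypothetical data frame `dat` with a binary outcome `y`, a treatment factor, and a subject identifier):
```
library(lme4)
# a random intercept per subject accounts for repeated measurements on the same person
fit <- glmer(y ~ treatment + (1 | subject), family = binomial, data = dat)
summary(fit)
```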
| null | CC BY-SA 2.5 | null | 2011-02-10T22:30:01.883 | 2011-02-10T22:48:44.833 | 2011-02-10T22:48:44.833 | 449 | 449 | null |
7074 | 1 | 7083 | null | 10 | 14623 | Is there a formal statistical test to test if process is a white noise?
| Formal statistical test for whether a process is a white noise | CC BY-SA 2.5 | null | 2011-02-10T23:25:06.553 | 2018-12-02T08:21:33.307 | 2011-02-12T15:39:03.473 | 1036 | 333 | [
"time-series",
"white-noise"
]
|
7075 | 2 | null | 7056 | 2 | null | The table in question seems to need use of a Javascript button to download. If you just want this particular data then save it as the offered untitle.txt or something else (it is basically tab-delimited) and then
```
tables <- read.delim("untitle.txt", skip=1)
colnames(tables) <- c(colnames(tables)[-1],"junk")
tables$junk <- NULL
```
seems to give you what I believe you want. The second and third lines of the R code are to deal with "## Columns:" which starts the second line of the file and upsets read.delim() when trying to get the column names.
| null | CC BY-SA 2.5 | null | 2011-02-10T23:28:32.947 | 2011-02-10T23:28:32.947 | null | null | 2958 | null |
7076 | 2 | null | 7049 | 13 | null | The key assumption of a generalized linear model that's relevant here is the relationship between the variance and mean of the response, given the values of the predictors. When you specify a Poisson distribution, what this implies is that you are assuming the conditional variance is equal to the conditional mean.* The actual shape of the distribution doesn't matter as much: it could be Poisson, or gamma, or normal, or anything else as long as that mean-variance relationship holds.
* You can relax the assumption that the variance equals the mean to one of proportionality, and still usually get good results.
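In R this can be sketched as follows (assuming a hypothetical data frame `dat` with a positive continuous response `y` and a predictor `x`):
```
# log link plus a Poisson-type mean-variance relationship; the quasi family only
# assumes variance proportional to the mean, so a non-integer response is fine
fit <- glm(y ~ x, family = quasipoisson(link = "log"), data = dat)
summary(fit)   # the dispersion parameter is estimated rather than fixed at 1
```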
| null | CC BY-SA 2.5 | null | 2011-02-11T01:29:36.383 | 2011-02-11T01:29:36.383 | null | null | 1569 | null |
7077 | 2 | null | 7061 | 2 | null | This is very much a problem in Bayesian inference. You appear to have taken the first step in realizing that the sample is not the same as the underlying probability distribution. Even though Mike had a higher mean in his sample, John might have a higher mean for his true talent distribution. When applying Bayesian inference, we start with a prior distribution, use the evidence to update our knowledge, and come out with a posterior distribution.
The point I alluded to in my comment is that the prior knowledge can matter a great deal. If we happen to know that "John" is basketball Hall of Famer John Stockton and "Mike" is me, then 100 shots are not going to be nearly enough to convince you that I'm better. Your prior distribution for John Stockton's true talent would probably be somewhat tight with a mean near .8 or so (he was .8261 for his career), whereas who knows for me... it might be .3 for all I know.
If you are estimating the true talent probabilities of making a free throw, you at least know that the values must lie in (0, 1), so a minimal prior distribution for each would be the uniform probability distribution on (0, 1). This is equivalent to using the Beta distribution with parameters $\alpha = \beta = 1$. Through some calculations I won't get into (along with Bayes theorem), it turns out that the posterior distribution is also a Beta distribution with parameters $\alpha + i$ and $\beta + j$ (assuming the player made $i$ shots in $i + j$ attempts).
So our posterior distribution for John is $Beta(39, 13)$ and for Mike it is $Beta(81, 21)$. So with these priors our posterior estimate of John's mean talent is $\frac{39}{52} = .75$ and for Mike it is $\frac{81}{102} \approx .794.$ I believe finding the answer to how likely Mike is better in a nice closed form is messy, and I don't have the tools handy to calculate the exact answer, but you would integrate/sum the portions of those curves where Mike is better to find the answer. Nick's code looks like it will probably get you a nice approximation. I can tell you that with the prior I chose Mike will be higher (since they had the same prior and Mike had the higher sample mean, it does pass the smell test that Mike will have a higher posterior mean).
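A quick Monte Carlo check of that comparison (a sketch; the one million draws are arbitrary):
```
set.seed(1)
mike <- rbeta(1e6, 81, 21)   # posterior for Mike, Beta(81, 21)
john <- rbeta(1e6, 39, 13)   # posterior for John, Beta(39, 13)
mean(mike > john)            # roughly 0.73
```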
| null | CC BY-SA 2.5 | null | 2011-02-11T03:47:17.390 | 2011-02-11T03:47:17.390 | null | null | 2485 | null |
7078 | 2 | null | 7061 | 2 | null | Okay. I think I figured out the general answer:
Given a sample in which $n$ of $N$ attempts are made, the probability that the population success rate is greater than $x$ (under a uniform prior) is given by the posterior probability distribution:
>
$\frac{\int_{x}^1 r^n(1-r)^{N-n}\ dr}{\int_{0}^1 r^n(1-r)^{N-n}\ dr}$
So for my example the chance that Mike's free throw percentage is greater than x is:
>
$\frac{\int_{x}^1 r^{80}(1-r)^{20}\ dr}{\int_{0}^1 r^{80}(1-r)^{20}\ dr}$
We then need to integrate that over the probability distribution for John's free throw percentages:
>
$\frac{\int_{0}^1 \int_{x}^1 x^{38}(1-x)^{12} r^{80}(1-r)^{20}\ dr\ dx}{\int_{0}^1 x^{38}(1-x)^{12}\ dx \int_{0}^1 r^{80}(1-r)^{20}\ dr}$
Which, as whuber said, is ~72.66%.
For the WolframAlpha solution:
>
Integrate[y^38 (1 - y)^12 x^80 (1 - x)^20, {x, 0, 1}, {y, 0, x}]/(Integrate[y^38 (1 - y)^12, {y, 0, 1}] Integrate[x^80 (1 - x)^20, {x, 0, 1}])
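The same number can be checked with one-dimensional numerical integration in R (my own verification sketch, using the Beta-posterior form of the integrals above):
```
# P(Mike > John): integrate P(Mike > x) over John's posterior density
integrate(function(x) dbeta(x, 39, 13) * pbeta(x, 81, 21, lower.tail = FALSE), 0, 1)
# should agree with the ~72.66% figure
```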
| null | CC BY-SA 2.5 | null | 2011-02-11T03:55:13.450 | 2011-02-11T04:02:45.067 | 2011-02-11T04:02:45.067 | 3143 | 3143 | null |
7079 | 1 | null | null | 7 | 23409 |
- How do you carry out analysis of covariance using R?
- How do you interpret the results?
- A practical example will be highly appreciated.
| Analysis of covariance in R | CC BY-SA 2.5 | null | 2011-02-11T06:27:08.743 | 2011-11-05T02:58:25.970 | 2011-02-12T05:49:02.013 | 183 | 3107 | [
"r",
"ancova"
]
|
7081 | 2 | null | 7070 | 12 | null | I myself usually think of white noise as an iid sequence with zero mean. The values of the process at different times are then independent of each other, which is a much stronger requirement than zero correlation. The best thing about this definition is that it works in any context.
Side note: I only explained my intuition; the correct definition of white noise is given by @onestop. The definition I gave is what is mathematically called white noise in the strict sense.
| null | CC BY-SA 2.5 | null | 2011-02-11T07:35:13.443 | 2011-02-11T09:13:59.350 | 2011-02-11T09:13:59.350 | 2116 | 2116 | null |
7082 | 2 | null | 7079 | 7 | null | Here is a [detailed presentation](http://www3.imperial.ac.uk/pls/portallive/docs/1/1171922.PDF)
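For a quick illustration (my own minimal sketch, with a hypothetical data frame `dat` containing a numeric response `y`, a factor `group`, and a numeric covariate `x`), a classical ANCOVA in R is just a linear model with both terms:
```
fit_slopes <- lm(y ~ group * x, data = dat)  # allows group-specific slopes
fit_ancova <- lm(y ~ group + x, data = dat)  # classical ANCOVA: parallel slopes
anova(fit_ancova, fit_slopes)  # checks the homogeneity-of-slopes assumption
summary(fit_ancova)            # group effects adjusted for the covariate
```
If the interaction is not significant, the adjusted group effects from the additive model are usually what you interpret and report.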
| null | CC BY-SA 2.5 | null | 2011-02-11T07:44:39.227 | 2011-02-11T07:44:39.227 | null | null | 339 | null |
7083 | 2 | null | 7074 | 14 | null | In time-series analysis the [Ljung-Box test](http://en.wikipedia.org/wiki/Ljung-Box_test) is usually used. Note, though, that it tests the correlations. If the correlations are zero but the variance varies, then the process is not white noise, yet the Ljung-Box test will fail to reject the null hypothesis. Here is an example in R:
```
> Box.test(c(rnorm(100,0,1),rnorm(100,0,10)),type="Ljung-Box")
Box-Ljung test
data: c(rnorm(100, 0, 1), rnorm(100, 0, 10))
X-squared = 0.4771, df = 1, p-value = 0.4898
```
Here is the plot of the process (figure not shown: a time-series plot of the simulated data, whose spread increases roughly ten-fold after the first 100 observations).
Here is more discussion [about this test](https://stats.stackexchange.com/questions/6455/how-many-lags-to-use-in-the-ljung-box-test-of-a-time-series).
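One common complement for the situation above (my suggestion, not part of the original answer) is a McLeod-Li-type check, i.e. applying the Ljung-Box test to the squared series, which is sensitive to the changing variance:
```
x <- c(rnorm(100, 0, 1), rnorm(100, 0, 10))
Box.test(x^2, lag = 10, type = "Ljung-Box")  # typically rejects, flagging the variance shift
```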
| null | CC BY-SA 2.5 | null | 2011-02-11T07:47:54.650 | 2011-02-11T07:47:54.650 | 2017-04-13T12:44:56.303 | -1 | 2116 | null |
7084 | 1 | 7543 | null | 3 | 2933 |
### Background:
In connection with the question [here](https://stats.stackexchange.com/questions/7022/parameter-estimation-for-normal-distribution-in-java) I came upon a more interesting question. I believe the question is large and distinct enough to have its own thread. Of course I might be mistaken; in that case I apologize for the clutter...
I have a dataset of experimental values, coming from an unknown distribution. The data is then used in an analysis, and as a measure of significance of the results of the analysis, I would like to sample random data from the same distribution and run my analysis on these datasets (say 1000 times the size of the original dataset) to see if the results from experimental values show significant deviation from randomized data.
I was thinking about drawing samples from a normal distribution, as it feels most natural that the samples come from some normal distribution. However, I need to back up my assumption of normality somehow. I was originally thinking of using some sort of normality test, but after some reading on the matter, such as the [What is 'normality?'](https://stats.stackexchange.com/questions/2/what-is-normality) and [Normality testing: 'Essentially useless?'](https://stats.stackexchange.com/questions/2492/normality-testing-essentially-useless) threads, and of course the Wikipedia article on [normality tests](http://en.wikipedia.org/wiki/Normality_test), I feel like these tests are not an accepted way of validating normality assumptions.
### Question:
- How can normality be validated without using visual cues such as QQ plots? (the validation will be a part of larger software)
- Can a "goodness of fit" score be calculated?
---
EDIT: As requested, I'll try to add some details about the data. The data at hand are from a biological experiment, but the instrumentation has high variation between runs. The analysis I've mentioned takes the measured values and, using a mathematical model, evaluates the functional meaning of the measured data. To do so, I need to see how unrelated/uncorrelated, made-up data score in the same analysis, hence the intent to model with randomized values. I hope this clarifies my point of view.
---
EDIT2: There has been a series of questions asking for clarification. Both here and in the comments below I have tried to explain my situation to the best of my ability. It seems like we are suffering from a communication mismatch... I don't know how I can give an example without either writing up a long table and complicating things further or brutally simplifying the bigger picture.
I have no doubt that everyone who took the time to supply a reply has the best intentions, but I would really appreciate it if you could focus on the question at hand instead of inquiring further and further into the motivations behind why I need to do things this way and not another way.
| Justifying normal approximation of experimental data | CC BY-SA 2.5 | null | 2011-02-11T09:09:36.410 | 2011-02-23T18:11:56.453 | 2017-04-13T12:44:20.840 | -1 | 3014 | [
"normality-assumption",
"hypothesis-testing",
"goodness-of-fit",
"approximation"
]
|
7085 | 2 | null | 6225 | 2 | null | There is a case where a proof is possible. Suppose you have a school and your null hypothesis is that the numbers of boys and girls are equal. As the sample size increases, the uncertainty in the ratio of boys to girls tends to decrease, eventually reaching certainty (which is what I assume you mean by proof) when the whole pupil population is sampled.
But if you do not have a finite population, or if you are sampling with replacement and cannot spot resampled individuals, then you cannot reduce the uncertainty to zero with a finite sample.
| null | CC BY-SA 2.5 | null | 2011-02-11T09:27:37.847 | 2011-02-11T09:27:37.847 | null | null | 2958 | null |
7086 | 2 | null | 7084 | 5 | null | If your ultimate aim is, as you say, "to see if the results from experimental values show significant deviation from randomized data", then you'd do better to use the observed distribution directly by performing a [permutation (re-randomization) test](http://en.wikipedia.org/wiki/Resampling_%28statistics%29#Permutation_tests). This avoids making any assumptions about the 'true' distribution from which the data are drawn.
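In case it helps, here is a bare-bones sketch of such a test in R (my own illustration; `observed` and `randomized` are hypothetical numeric vectors holding the experimental results and the reference results, and the difference in means is just one possible test statistic):
```
obs_stat <- mean(observed) - mean(randomized)
pooled   <- c(observed, randomized)
n1       <- length(observed)
perm_stats <- replicate(10000, {
  idx <- sample(length(pooled), n1)          # re-randomize the group labels
  mean(pooled[idx]) - mean(pooled[-idx])
})
p_value <- mean(abs(perm_stats) >= abs(obs_stat))  # two-sided permutation p-value
```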
| null | CC BY-SA 2.5 | null | 2011-02-11T09:31:33.927 | 2011-02-11T09:31:33.927 | null | null | 449 | null |
7088 | 1 | 7119 | null | 3 | 1261 | I am doing some class work and was wondering whether there is a universally accepted format for analyzing the data gathered from questionnaires and presenting it in a Word document.
| Is there a standard format for presenting a data analysis report based on a questionnaire? | CC BY-SA 3.0 | null | 2011-02-11T11:41:17.083 | 2011-10-04T21:54:54.830 | 2011-10-04T21:54:54.830 | 183 | 3150 | [
"survey",
"reporting"
]
|
7089 | 1 | null | null | 22 | 18742 | I need a formula for the probability of an event in an n-variate Bernoulli distribution $X\in\{0,1\}^n$ with given probabilities $P(X_i=1)=p_i$ for a single element and $P(X_i=1 \wedge X_j=1)=p_{ij}$ for pairs of elements. Equivalently, I could give the mean and covariance of $X$.
I already learned that there exist many $\{0,1\}^n$ distributions having these properties, just as there are many distributions having a given mean and covariance. I am looking for a canonical one on $\{0,1\}^n$, just as the Gaussian is the canonical distribution for $R^n$ with a given mean and covariance.
| Probability formula for a multivariate-bernoulli distribution | CC BY-SA 2.5 | null | 2011-02-11T12:30:24.763 | 2022-01-18T05:14:29.933 | 2011-02-11T12:51:48.473 | 2116 | null | [
"multivariate-analysis",
"discrete-data"
]
|
7090 | 1 | 14308 | null | 4 | 400 | When generating survival times to simulate Cox proportional hazards models, does it matter to generate them in days or in years?
In theory, I guess it does not matter. But in practice? Is there a preference regarding computational issues?
Thank you!
Marco
| Generating survival times in days or in years | CC BY-SA 2.5 | null | 2011-02-11T13:11:47.690 | 2011-08-17T16:51:46.080 | 2011-02-13T14:35:26.377 | null | 3019 | [
"survival",
"hazard",
"units"
]
|
7091 | 2 | null | 7089 | 14 | null | The random variable taking values in $\{0,1\}^n$ is a discrete random variable. Its distribution is fully described by probabilities
$p_{\mathbf{i}}=P(X=\mathbf{i})$ with $\mathbf{i}\in\{0,1\}^n$. The probabilities $p_{i}$ and $p_{ij}$ you give are sums of $p_{\mathbf{i}}$ for certain indexes $\mathbf{i}$.
Now it seems that you want to describe $p_{\mathbf{i}}$ using only $p_i$ and $p_{ij}$. This is not possible without assuming certain properties of $p_{\mathbf{i}}$. To see that, try to derive the [characteristic function](http://en.wikipedia.org/wiki/Characteristic_function_(probability_theory)) of $X$. If we take $n=3$ we get
\begin{align}
Ee^{i(t_1X_1+t_2X_2+t_3X_3)}&=p_{000}+p_{100}e^{it_1}+p_{010}e^{it_2}+p_{001}e^{it_3}\\\\
&+p_{110}e^{i(t_1+t_2)}+p_{101}e^{i(t_1+t_3)}+p_{011}e^{i(t_2+t_3)}+p_{111}e^{i(t_1+t_2+t_3)}
\end{align}
It is not possible to rearrange this expression so that the $p_{\mathbf{i}}$ disappear. For a Gaussian random variable the characteristic function depends only on the mean and covariance parameters. Characteristic functions uniquely define distributions, which is why the Gaussian can be described uniquely using only its mean and covariance. As we see, for the random variable $X$ this is not the case.
| null | CC BY-SA 3.0 | null | 2011-02-11T13:19:26.237 | 2017-12-02T19:35:33.767 | 2017-12-02T19:35:33.767 | 69834 | 2116 | null |
7093 | 2 | null | 5465 | 4 | null | How would you count the number of sandalwood trees in Bangalore?
| null | CC BY-SA 2.5 | null | 2011-02-11T13:31:52.880 | 2011-02-11T13:31:52.880 | null | null | null | null |
7094 | 2 | null | 7089 | 1 | null | I don't know what the resulting distribution is called, or if it even has a name, but it strikes me that the obvious way to set this up is to think of the model you'd use to model a 2×2×2×…×2 table with a log-linear (Poisson regression) model. Since you know only the first-order interactions, it's then natural to assume that all higher-order interactions are zero.
Using the questioner's notation, this gives the model:
$$P(X_1=x_1, X_2=x_2,\ldots,X_n=x_n) =
\prod_i \left[ p_i^{x_i}(1-p_i)^{1-x_i}
\prod_{j<i} \left(\frac{p_{ij}}{p_i p_j}\right)^{x_i x_j} \right] $$
| null | CC BY-SA 2.5 | null | 2011-02-11T13:39:43.833 | 2011-02-14T12:48:07.520 | 2011-02-14T12:48:07.520 | 449 | 449 | null |
7095 | 2 | null | 5465 | 3 | null | We are running a customer service centre. We are getting 1 million calls per month. How do we reduce that to ten thousand?
| null | CC BY-SA 2.5 | null | 2011-02-11T13:40:00.073 | 2011-02-11T13:40:00.073 | null | null | null | null |
7096 | 1 | null | null | 1 | 529 | Does anybody know whether R has a package/routine for estimating a spline function with an unknown number of knots?
| Spline function with unknown knots | CC BY-SA 4.0 | null | 2011-02-11T13:48:51.977 | 2019-10-31T13:56:58.320 | 2019-10-31T13:56:58.320 | 92235 | null | [
"r",
"splines"
]
|
7097 | 2 | null | 7096 | 3 | null | Have a look at the facilities for fitting smoothing splines in the `gam` package.
| null | CC BY-SA 2.5 | null | 2011-02-11T13:56:45.177 | 2011-02-11T13:56:45.177 | null | null | 449 | null |
7098 | 2 | null | 7048 | 7 | null | To me interactive visualization is useful only for my own exploration, or when working with a very hands-on client. When dealing with a final presentation, I prefer to choose the static graph that best makes my point. Otherwise clients can get totally distracted by the gee-whiz factor.
The biggest benefit I get from it is a level of speed that frees me to examine far more than I would have if I stopped to program a solution. JMP is one of my favorite tools for this since it integrates so much of what I want into a single interface. I think most people who are good statistical programmers try something like JMP (or GGobi, etc.) for far too short a period to really get good at it. JMP in particular will give you the impression you know it by just looking over the menus. However, working through the manual is really required to uncover all its power.
You mentioned my main concern about this level of speed though: you end up with absolutely no idea of what your p-values mean. In just a few minutes you can examine hundreds of relationships visually. Doing hypothesis testing after all that is totally misleading, but I see people do that all the time.
A feature I love in GGobi is its projection pursuit, in which you specify what type of pattern you seek in a high dimensional space and then you sit back and watch it "pursue" that goal. Great stuff!
| null | CC BY-SA 3.0 | null | 2011-02-11T14:21:54.300 | 2013-02-22T00:04:37.230 | 2013-02-22T00:04:37.230 | -1 | 3155 | null |
7099 | 2 | null | 5465 | 16 | null | Two questions I've been asked:
1) You fit a multiple regression to examine the effect of a particular variable a worker in another department is interested in. The variable comes back insignificant, but your co-worker says that this is impossible as it is known to have an effect. What would you say/do?
2) You have 1000 variables and 100 observations. You would like to find the significant variables for a particular response. What would you do?
| null | CC BY-SA 2.5 | null | 2011-02-11T15:01:37.667 | 2011-02-11T15:01:37.667 | null | null | 2310 | null |
7100 | 1 | null | null | 7 | 3245 | I am doing research and am trying to use, in another language, a questionnaire that is available in English.
- Is there a systematic approach for validating a questionnaire in another language?
- What statistical tests should I undertake, and why?
| Validating an existing questionnaire into another language | CC BY-SA 2.5 | null | 2011-02-11T15:52:57.197 | 2015-11-09T12:13:56.343 | 2012-06-20T17:28:51.590 | 930 | null | [
"correlation",
"factor-analysis",
"reliability",
"psychometrics"
]
|
7101 | 1 | null | null | 4 | 2838 | I am interested in applying Bayesian additive regression trees (BART) for classification analysis of gene expression data. I am relatively new to R (and Bioconductor packages) and have been unable to find any example code or vignette that I can learn from. I will be thankful if someone can point me in a good direction.
| Bayesian additive regression trees (BART) for classification analysis of gene expression data | CC BY-SA 2.5 | null | 2011-02-11T08:01:40.737 | 2022-12-17T03:34:55.783 | 2022-12-17T03:34:55.783 | 11852 | 4045 | [
"r",
"regression",
"bayesian",
"classification",
"bart"
]
|
7102 | 1 | 7104 | null | 4 | 1748 | I'm fitting a 4-parameter nonlinear regression model to multiple datasets, some of which fail to converge; however, the parameter values output after a failure provide a fit that looks good, if not exceptional, to my (and others') eyes.
I've explored the convergence criteria, and the fits do converge eventually, but the visual fit is then terrible.
Is there any precedent for accepting the visual fit and ignoring convergence, or are there other things I can try?
I'm fitting a model of the form $\sqrt{c_1 x+c_2\exp(x)^y}/\sqrt{\exp(x)^y}+c_3$ where $y$ is a known constant, using nlminb in R.
| Non-linear regression fails to converge, but fit appears good | CC BY-SA 3.0 | null | 2011-02-11T16:05:49.630 | 2011-12-27T15:21:40.740 | 2011-12-27T15:21:40.740 | 919 | 3158 | [
"r",
"curve-fitting",
"nonlinear-regression"
]
|
7103 | 1 | null | null | 7 | 273 | Example of problem: Part of our research team is working on providing operational wind power forecasts. Usually, since different time scales interest forecast users, a forecast is issued every 15 min (it has even happened that 5 seconds was required) for every 15 minutes ahead, up to several days. Obviously, when you contract with a wind farm owner he has many wind farms; it can be 10 or even 1000 wind farms depending on who you are working with. It is likely that this number is going to get bigger in the near future. Now if the forecast user wants not only the point forecast but also forecasts of 100 interesting quantiles (i.e. the whole marginal distribution), and if he wants you to store everything in a database year after year, then you start thinking... OK, that makes a lot of data. I'm not an international meteorological institute and cannot afford that.
Edit (according to @whuber's comment): to make it critical, think of a 5-second time frame. But anyway this is not meant to be the question, just an example (people could provide other examples).
My question is: is there a compression theory/best practice for time series of distributions (i.e. functions of time with values in a space of distributions)? I agree that you could work quantile by quantile and apply a simple compression algorithm dedicated to signals (it could be wavelet-based), but I am searching for something more dedicated. For example, if you know everything is well approximated by Gaussians, the quantile-by-quantile approach would be stupid!
Edit (according to @whuber's comment): The question is really how to integrate the temporal dimension into a compression scheme; hence I don't want to choose a subset of quantiles that would be the same for ALL times.
The remark about Gaussians is not specific to the example; it illustrates a case where very good compression is required. Another trivial remark (from the theoretical side) would be: imagine you have a very complicated distribution that cannot be summarized with a few quantiles, and that you observe it at 1000 successive times. It would be sad to store it in full for each of these times.
| Compression theory, practice, for time series with values in a space of distributions (say of a real random variable) | CC BY-SA 2.5 | null | 2011-02-11T17:29:15.263 | 2011-03-14T12:55:11.880 | 2011-02-12T07:23:26.243 | 223 | 223 | [
"distributions",
"time-series",
"signal-processing",
"quantiles",
"compression"
]
|
7104 | 2 | null | 7102 | 7 | null | I will assume the values of all the variables and constants are such that there won't be problems with obtaining square roots of negative numbers. Then
$$\frac{\sqrt{c_1 x + c_2 \exp(x)^y}}{\sqrt{\exp(x)^y}} + c_3 =\sqrt{c_2 + c_1 x \exp(-y x)} + c_3.$$
When $y \gt 0$ then eventually, for sufficiently large $x$, $\frac{c_1}{c_2} x \exp(-y x)$ gets small and the fit can be closely approximated as
$$(\sqrt{c_2} + c_3) + \frac{c_1}{2 \sqrt{c_2}} x \exp(-y x).$$
Otherwise, when $y \lt 0$ and $c_1 \ne 0$, eventually (for largish positive $x$) the value of $c_2$ is inconsequential compared to the exponential and the fit simplifies to
$$\sqrt{c_1 x} \exp(-y x / 2) + c_3.$$
In either case there are redundant parameters: $\sqrt{c_2} + c_3$ can be kept constant while varying $c_2$, because $c_1$ can also be varied to keep $\frac{c_1}{2 \sqrt{c_2}}$ constant. In the second case $c_2$ can be freely varied throughout a wide range.
Therefore, in many circumstances you should expect not to be able to identify at least one of these parameters. It sounds like your data are in one of these cases. If so, you might consider eliminating one of the $c_i$ from the model altogether.
Alternatively, consider a reparameterization of the function in the form
$$\alpha \sqrt{1 + \beta y x \exp(-y x)} + \gamma - \alpha$$
($\alpha = \sqrt{c_2}$, $\beta = c_1/(c_2 y)$, $\gamma = c_3+\sqrt{c_2}$).
This might provide enough clues to the minimizer to help it converge.
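A hedged sketch of how the reparameterized fit might be set up with `nlminb` (the data vectors `x` and `obs` and the starting values are hypothetical; `y` is the known constant from the question):
```
f <- function(par, x, y) {
  alpha <- par[1]; beta <- par[2]; gamma <- par[3]
  alpha * sqrt(1 + beta * y * x * exp(-y * x)) + gamma - alpha
}
rss <- function(par, x, obs, y) sum((obs - f(par, x, y))^2)   # least-squares objective
fit <- nlminb(start = c(alpha = 1, beta = 1, gamma = 0), objective = rss,
              x = x, obs = obs, y = y)       # bounds may be needed to keep the sqrt argument positive
fit$par
fit$convergence   # 0 indicates successful convergence
```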
| null | CC BY-SA 2.5 | null | 2011-02-11T17:44:46.580 | 2011-02-11T21:06:12.897 | 2011-02-11T21:06:12.897 | 919 | 919 | null |
7105 | 2 | null | 3 | 4 | null | This falls on the outer limits of 'statistical analysis', but [Eureqa](https://web.archive.org/web/20110201181704/http://ccsl.mae.cornell.edu/eureqa) is a very user friendly program for data-mining nonlinear relationships in data via genetic programming. Eureqa is not as general purpose, but it does what it does fairly well, and the GUI is quite intuitive. It can also take advantage of the available computing power via the eureqa server.
| null | CC BY-SA 4.0 | null | 2011-02-11T17:52:56.833 | 2022-11-27T23:12:34.187 | 2022-11-27T23:12:34.187 | 362671 | 795 | null |
7106 | 2 | null | 3 | 8 | null | Colin Gillespie mentioned BUGS, but a better option for Gibbs Sampling, etc, is [JAGS](http://mcmc-jags.sourceforge.net/).
If all you want to do is ARIMA, you can't beat [X12-ARIMA](https://web.archive.org/web/20120120205049/http://www.census.gov/srd/www/x12a/), which is a gold-standard in the field and open source. It doesn't do real graphs (I use R to do that), but the diagnostics are a lesson on their own.
Venturing a bit farther afield to something I recently discovered and have just begun to learn...
ADMB (AD Model Builder), which does non-linear modeling based on the AUTODIF library, with MCMC and a few other features thrown in. It preprocesses the model, translates it to C++, and compiles it as a standalone executable, which is supposed to be way faster than equivalent models implemented in R, MATLAB, etc. [ADMB Project](http://admb-project.org/).
It started and is still most popular in the fisheries world, but looks quite interesting for other purposes. It does not have graphing or other features of R, and would most likely be used in conjunction with R.
If you want to work with Bayesian Networks in a GUI: [SamIam](http://reasoning.cs.ucla.edu/samiam/) is a nice tool. R has a couple of packages that also do this, but SamIam is very nice.
| null | CC BY-SA 4.0 | null | 2011-02-11T18:44:38.463 | 2022-11-27T23:22:26.533 | 2022-11-27T23:22:26.533 | 362671 | 1764 | null |
7107 | 2 | null | 7100 | 2 | null | I found some good ideas in the short Sage book, Translating Questionnaires and Other Research Instruments, at
[http://www.uk.sagepub.com/books/Book5861](http://www.uk.sagepub.com/books/Book5861) . I wouldn't call it the most systematic or the most entertaining read, but it was helpful and it's fairly up-to-date and inexpensive.
| null | CC BY-SA 2.5 | null | 2011-02-11T18:46:41.513 | 2011-02-11T18:46:41.513 | null | null | 2669 | null |
7109 | 2 | null | 7101 | 2 | null | I would suggest looking at the [BayesTree](http://cran.r-project.org/web/packages/BayesTree/index.html) package from CRAN. I have no experience with it, so I cannot say whether there are better options out there. Try looking at the [Machine Learning](http://cran.r-project.org/web/views/MachineLearning.html) Task View, or searching directly through [www.rseek.org](http://www.rseek.org/).
I don't know of anything comparable in Bioconductor, but if the above package suits your needs I guess you won't have any problem with gene expression data. I know the [CMA](http://www.bioconductor.org/help/bioc-views/release/bioc/html/CMA.html) package offers a full-featured pipeline for supervised classification (see e.g., [CMA - A comprehensive Bioconductor package for supervised classification with high dimensional data](http://www.ibe.med.uni-muenchen.de/organisation/mitarbeiter/020_professuren/boulesteix/pdf/cma.pdf), by Slawski et al.). Maybe you can plug the BART method in alongside the available methods?
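For what it's worth, a heavily hedged sketch of how a call might look (I have not run this either; the object names are hypothetical and the exact argument names and return components should be checked against `?bart`):
```
library(BayesTree)
# X_train / X_test: samples in rows, genes (features) in columns; y_train: class labels
fit <- bart(x.train = X_train, y.train = y_train, x.test = X_test)
# for a binary outcome bart() fits a probit model; if I recall correctly the posterior
# draws of f(x) for the test samples can be turned into class probabilities with pnorm()
p_test <- pnorm(colMeans(fit$yhat.test))
```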
| null | CC BY-SA 2.5 | null | 2011-02-11T20:04:18.110 | 2011-02-11T20:04:18.110 | null | null | 930 | null |
7110 | 1 | 7147 | null | 39 | 30084 | What is/are the difference(s) between a longitudinal design and a time series?
| Difference between longitudinal design and time series | CC BY-SA 2.5 | null | 2011-02-11T22:51:58.117 | 2013-08-18T22:42:37.990 | null | null | 2956 | [
"time-series",
"panel-data"
]
|
7111 | 1 | 7114 | null | 18 | 14667 | To perform principal component analysis (PCA), you have to subtract the means of each column from the data, compute the correlation coefficient matrix and then find the eigenvectors and eigenvalues. Well, rather, this is what I did to implement it in Python, except it only works with small matrices because the method to find the correlation coefficient matrix (corrcoef) doesn't let me use an array with high dimensionality. Since I have to use it for images, my current implementation doesn't really help me.
I've read that it's possible to just take your data matrix $D$ and compute $DD^\top/n$ instead of $D^\top D/n$, but that doesn't work for me. Well, I'm not exactly sure I understand what it means, besides the fact that it's supposed to be a $n \times n$ matrix instead of $p\times p$ (in my case $p\gg n$). I read up about those in the eigenfaces tutorials but none of them seemed to explain it in such a way I could really get it.
In short, is there a simple algorithmic description of this method so that I can follow it?
| How to perform PCA for data of very high dimensionality? | CC BY-SA 3.0 | null | 2011-02-11T22:56:39.283 | 2021-08-09T13:53:56.283 | 2015-02-07T22:09:00.433 | 28666 | null | [
"pca",
"python"
]
|
7112 | 1 | 7118 | null | 45 | 47014 | I have 2 simple questions about linear regression:
- When is it advised to standardize the explanatory variables?
- Once estimation is carried out with standardized values, how can one predict with new values (how one should standardize the new values)?
Some references would be helpful.
| When and how to use standardized explanatory variables in linear regression | CC BY-SA 4.0 | null | 2011-02-11T23:09:54.510 | 2018-06-26T16:38:23.270 | 2018-06-26T16:38:23.270 | 11887 | 1443 | [
"regression",
"predictive-models",
"references",
"standardization",
"predictor"
]
|
7113 | 2 | null | 7110 | 13 | null | A [time series](http://en.wikipedia.org/wiki/Time_series) is simply a sequence of data points spaced out over time, usually with regular time intervals. A [longitudinal design](http://en.wikipedia.org/wiki/Longitudinal_study) is rather more specific, keeping the same sample for each observation over time.
An example of a time series might be unemployment measured every month using a labour force survey with a new sample each time; this would be a sequence of cross-sectional designs. But it could be anything such as your personal savings each year, which would also be longitudinal. Or it might simply follow a particular cohort of people growing older, such as the television documentary [Seven Up!](http://en.wikipedia.org/wiki/Seven_Up!) and the sequels every seven years after that - the latest was 49 Up in 2005, so there should be another edition next year. Longitudinal designs tend to tell you more about ways in which typical individuals change over time, but might (depending on the details of the design and whether the sample is refreshed) say less about how the population as a whole changes.
| null | CC BY-SA 2.5 | null | 2011-02-12T00:20:17.573 | 2011-02-12T00:20:17.573 | null | null | 2958 | null |
7114 | 2 | null | 7111 | 11 | null | The easiest way to do standard PCA is to center the columns of your data matrix (assuming the columns correspond to different variables) by subtracting the column means, and then perform an SVD. The left singular vectors, multiplied by the corresponding singular value, correspond to the (estimated) principal components. The right singular vectors correspond to the (estimated) principal component directions — these are the same as the eigenvectors given by PCA. The singular values correspond to the standard deviations of the principal components (multiplied by a factor of root n, where n is the number of rows in your data matrix) — the same as the square root of the eigenvalues given by PCA.
If you want to do PCA on the correlation matrix, you will need to standardize the columns of your data matrix before applying the SVD. This amounts to subtracting the means (centering) and then dividing by the standard deviations (scaling).
This will be the most efficient approach if you want the full PCA. You can verify with some algebra that this gives you the same answer as doing the spectral decomposition of the sample covariance matrix.
There are also efficient methods for computing a partial SVD, when you only need a few of the PCs. Some of these are variants of the power iteration. The [Lanczos algorithm](http://en.wikipedia.org/wiki/Lanczos_algorithm) is one example that is also related to partial least squares. If your matrix is huge, you may be better off with an approximate method. There are also statistical reasons for regularizing PCA when this is the case.
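A compact sketch of the centre-then-SVD recipe (shown in R for concreteness; the questioner's Python/numpy case maps directly onto `numpy.linalg.svd`, and `X` is a hypothetical n-by-p data matrix):
```
Xc <- scale(X, center = TRUE, scale = FALSE)  # subtract column means
sv <- svd(Xc)
scores     <- sv$u %*% diag(sv$d)   # principal components (scores)
directions <- sv$v                  # principal component directions (loadings)
sdev       <- sv$d / sqrt(nrow(X))  # component standard deviations
                                    # (use nrow(X) - 1 for the unbiased convention)
```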
| null | CC BY-SA 3.0 | null | 2011-02-12T01:19:41.250 | 2015-02-07T22:09:44.477 | 2015-02-07T22:09:44.477 | 28666 | 1670 | null |
7115 | 1 | 7188 | null | 6 | 567 | I'm wondering how far along the natural language processing is in determining the semantic distance between two excerpts of text.
For instance, consider the following phrases
- Early today, I got up and washed my car.
- I cleaned my truck up this morning.
- Bananas are an excellent source of potassium.
Clearly (to the human reader) the first two statements are much more similar to one another than they are to the third statement. Does there exist any methodology that would allow a computer to draw the same conclusion?
(Ultimately I'm trying to group Twitter tweets according to semantic content.)
| Semantic distance between excerpts of text | CC BY-SA 2.5 | null | 2011-02-12T03:28:54.630 | 2022-06-20T14:19:06.153 | 2011-02-12T12:46:47.723 | null | 1026 | [
"clustering",
"classification",
"text-mining"
]
|
7116 | 2 | null | 7111 | 11 | null | It sounds like what you want is the NIPALS algorithm for performing PCA. It's a very popular algorithm among statisticians. It has many advantages:
- Computationally less expensive than SVD or eigenvalue decomposition methods if only the first few components are required.
- Has more modest storage requirements in general because the covariance matrix is never formed. This is a very important property for very large datasets.
- Can handle missing data in the dataset (though that's not an issue in your problem, since you're dealing with images).
Description
[http://en.wikipedia.org/wiki/Non-linear_iterative_partial_least_squares](http://en.wikipedia.org/wiki/Non-linear_iterative_partial_least_squares)
Algorithm
Here's a simple and excellent description of the algorithm (in section 1.2)
[http://stats4.eng.mcmaster.ca/w/mediafiles/mediawiki/f/f7/Section-Extra-Class-1.pdf](http://stats4.eng.mcmaster.ca/w/mediafiles/mediawiki/f/f7/Section-Extra-Class-1.pdf)
Remember to mean-center and scale the data first before doing PCA, as it is scale-sensitive.
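Here is a rough R transcription of the algorithm in those notes for the first few components (my own sketch written from the description above, so treat it as illustrative rather than production code; `X` is assumed to be already centred and, if desired, scaled):
```
nipals_pca <- function(X, k = 2, tol = 1e-8, maxit = 500) {
  scores   <- matrix(0, nrow(X), k)
  loadings <- matrix(0, ncol(X), k)
  for (h in seq_len(k)) {
    t_h <- X[, which.max(apply(X, 2, var))]   # start from the highest-variance column
    for (i in seq_len(maxit)) {
      p_h <- crossprod(X, t_h) / sum(t_h^2)   # p = X't / t't
      p_h <- p_h / sqrt(sum(p_h^2))           # normalise the loading vector
      t_new <- X %*% p_h                      # t = Xp
      if (sum((t_new - t_h)^2) < tol) { t_h <- t_new; break }
      t_h <- t_new
    }
    scores[, h]   <- t_h
    loadings[, h] <- p_h
    X <- X - tcrossprod(t_h, p_h)             # deflate X before the next component
  }
  list(scores = scores, loadings = loadings)
}
```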
| null | CC BY-SA 2.5 | null | 2011-02-12T03:43:19.633 | 2011-02-12T03:43:19.633 | null | null | 2833 | null |
7117 | 2 | null | 7115 | 2 | null | Check out the work by [Jones & Mewhort (2007)](https://doi.org/10.1037/0033-295X.114.1.1). This [more recent work](http://www.springerlink.com/content/qr50708142958x51/) may also be of interest, particularly their [online tool](http://www.casstools.org).
| null | CC BY-SA 4.0 | null | 2011-02-12T03:50:03.520 | 2022-06-20T14:19:06.153 | 2022-06-20T14:19:06.153 | 361019 | 364 | null |
7118 | 2 | null | 7112 | 31 | null | Although terminology is a contentious topic, I prefer to call "explanatory" variables, "predictor" variables.
### When to standardise the predictors:
- A lot of software for performing multiple linear regression will provide standardised coefficients which are equivalent to unstandardised coefficients where you manually standardise predictors and the response variable (of course, it sounds like you are talking about only standardising predictors).
- My opinion is that standardisation is a useful tool for making regression equations more meaningful.
This is particularly true in cases where the metric of the variable lacks meaning to the person interpreting the regression equation (e.g., a psychological scale on an arbitrary metric).
It can also be used to facilitate comparability of the relative importance of predictor variables (although other more sophisticated approaches exist for assessing relative importance; see my post for a discussion).
In cases where the metric does have meaning to the person interpreting the regression equation, unstandardised coefficients are often more informative.
- I also think that relying on standardised variables may take attention away from the fact that we have not thought about how to make the metric of a variable more meaningful to the reader.
- Andrew Gelman has a fair bit to say on the topic.
See his page on standardisation for example and Gelman (2008, Stats Med, FREE PDF) in particular.
### Prediction based on standardisation:
- I would not use standardised regression coefficients for prediction.
- You can always convert standardised coefficients to unstandardised coefficients if you know the mean and standard deviation of the predictor variable in the original sample.
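A tiny sketch of that back-conversion in R (my own illustration; `dat` is a hypothetical data frame with predictor `x` and response `y`, both standardised in the fit):
```
fit_std <- lm(scale(y) ~ scale(x), data = dat)
b_std   <- coef(fit_std)[2]
b_unstd <- b_std * sd(dat$y) / sd(dat$x)          # unstandardised slope
a_unstd <- mean(dat$y) - b_unstd * mean(dat$x)    # unstandardised intercept
```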
| null | CC BY-SA 2.5 | null | 2011-02-12T04:27:47.997 | 2011-02-12T11:13:03.750 | 2011-02-12T11:13:03.750 | 183 | 183 | null |
7119 | 2 | null | 7088 | 4 | null | I'll do my best to interpret your question.
### Style Rules
- Many journals and disciplines adopt a style guide (e.g., APA, Chicago, MLA, etc.).
If one applies to you, then you are likely to have many relevant rules to guide you in the presentation of tables, figures, and much more.
### Table and Figure Design
- There are many common tables and figures that I've seen for summarising information about responses of a sample to a questionnaire.
Much of it depends on the nature of the questionnaire items.
- If you have a bunch of Likert items, it's common to present a table where, for each item, the percentage endorsing each response option is given along with the mean.
- If you have categorical predictors (e.g., gender, race, income category, etc.) you might want to present a table comparing means for the groups along with tests of statistical significance.
- If you have scales (i.e., items are combined to form a composite), you may want to present a table of these means.
- If you have benchmarks or norms, you may want to incorporate these into the table to aid in interpretation.
- Much more could be said, but it depends a lot on your design, your research question, and other considerations.
### File Formats and Document Creation Tools
- Your question uses the term "word" document.
Arguably, this confuses two issues.
First, there is the issue of file formats. You could create your document using many file formats. Word is only one of a class of word processor file formats. I prefer to use LaTeX, Markdown, or HTML depending on my purpose and then, often, generate a PDF for sharing. Sweave (or some equivalent weaving format) in particular is worth considering if you want to make your report more reproducible.
Second, there is the issue of the document creation tool.
MS Word is only one tool. I prefer to use a good text editor to write one of the above mentioned plain text formats.
- The main point is that regardless of your creation tool or file format, you should be able to produce a report that looks a certain way (although some tools and file formats facilitate given aims better than others; LaTeX in particular produces more attractive word spacing than MS Word, for example).
| null | CC BY-SA 2.5 | null | 2011-02-12T05:00:37.070 | 2011-02-12T05:00:37.070 | null | null | 183 | null |
7120 | 2 | null | 7088 | 2 | null | Jeremy has offered many constructive suggestions. I'll add the point that
>
"If there were one approach that was
clearly superior, then the Law of the
Statistical Jungle would dictate that
it would survive, and all of the other
techniques would exist only as
historical footnotes. The continued
survival of all of them, and the
constant introduction of new ones,
indicates that no clearly superior
solution exists; hence, the
unfortunate need to think about what
we're doing." --- Geoffrey Norman and David Streiner
| null | CC BY-SA 2.5 | null | 2011-02-12T05:00:38.817 | 2011-02-12T05:18:18.587 | 2011-02-12T05:18:18.587 | 183 | 2669 | null |
7121 | 2 | null | 7032 | 1 | null | A graphical example of outliers that requires no software and can be read in 2 minutes is [Wikipedia on Anscombe's quartet](http://en.wikipedia.org/wiki/Anscombe%27s_quartet)
| null | CC BY-SA 3.0 | null | 2011-02-12T05:12:06.707 | 2012-07-24T10:41:12.550 | 2012-07-24T10:41:12.550 | 2795 | 2669 | null |
7122 | 2 | null | 6913 | 4 | null | Just a quick thought:
If you assume that each rating is drawn from a latent continuous variable, then you could take the median of this underlying continuous variable as your value of interest, rather than its mean. Where the distribution is symmetric, the mean and the median would ultimately be estimating the same quantity. Where the distribution is skewed, the median would differ from the mean.
In this case, to my mind, the median would correspond more with what we think of as the typical value. This goes some way to understanding why median income and median house prices are typically reported rather than the mean.
However, when you have a small number of discrete values, the median performs poorly.
Perhaps, you could use some density estimation procedure and then take the median of that, or use some interpolated median.
| null | CC BY-SA 2.5 | null | 2011-02-12T05:31:00.017 | 2011-02-12T05:31:00.017 | null | null | 183 | null |
7123 | 2 | null | 7110 | 25 | null | If we think of designs made up of $n$ cases measured on $k$ occasions, then the following loose definition seems to me to be descriptive of the distinction:
- longitudinal designs: high $n$, low $k$
- time series: low $n$, high $k$
Of course, this raises the question of what is high and what is low.
Summarising my own rough sense of these fuzzy definitions, prototypical examples of:
- time series might have $n$ = 1, 2, or 5 and $k$ = 20, 50, 100, or 1000, and
- longitudinal designs might have $n$ = 10, 50, 100, 1000 and $k$ = 2, 3, 5, 10, 20
Update:
Following up on Dr Who's question about what is the purpose of the distinction, I don't have an authoritative answer, but here are a few thoughts:
- terminology evolves in disciplines concerned with particular substantive problems
- time series
  - often concerned with forecasting future time points
  - often concerned with modelling various cyclical and trend processes
  - often concerned with describing temporal dynamics in great detail
  - often studies phenomena where the particular thing measured is of specific interest (e.g., unemployment rate, stock market indices, etc.)
  - temporal indices are often pre-existing
- longitudinal designs:
  - often use samples of cases as exemplars of a population in order to make inferences about the population (e.g., sample of children to study how children change in general)
  - often concerned with fairly general temporal processes like growth, variability and relatively simple functional change models
  - study is often specifically designed to have a given number of time points
  - often interested in variation in change processes
Given the differences in the actual temporal dynamics, and the particular combination of $k$ and $n$ this creates different statistical modelling challenges.
For example, with high $n$ and low $k$ multilevel models are often used that borrow strength from the typical change process to describe the individual change process.
These different disciplines, modelling challenges, and literatures encourage the creation of distinct terminology.
Anyway, that's my impression. Perhaps others have greater insight.
| null | CC BY-SA 2.5 | null | 2011-02-12T05:41:19.063 | 2011-02-13T11:51:25.673 | 2011-02-13T11:51:25.673 | 183 | 183 | null |
7124 | 2 | null | 7100 | 0 | null | The procedure I have normally seen followed is to translate the questionnaire from English into the target language, and then have it back-translated into English by someone else. If the back-translated version matches the original English, then you are good to go; otherwise, revise and repeat until it does.
| null | CC BY-SA 2.5 | null | 2011-02-12T08:45:04.460 | 2011-02-12T08:45:04.460 | null | null | 656 | null |
7125 | 2 | null | 7100 | 7 | null | I don't know what your questionnaire aims to assess. In Health-related Quality-of-Life studies, for example, there are a certain number of recommendations for translation issues that were discussed in the following papers (among others):
- Marquis et al., Translating and evaluating questionnaires: Cultural issues for international research, in Fayers & Hays (eds.), Assessing Quality of Life in Clinical Trials (2nd ed.), Oxford, pp. 77-93 (2005).
- Hui and Triandis, Measurement in cross-cultural psychology: A review and comparison of strategies. Journal of Cross-Cultural Psychology, 16(2), 131-152 (1985).
- Mathias et al., Rapid translation of quality of life measures for international clinical trials: Avoiding errors in the minimalist approach. Quality of Life Research, 3, 403-412 (1994).
Translation can be done simultaneously in several languages, as was the case for the WHOQOL questionnaire, or in a primary language (e.g., English) for the SF-36 followed by translation in other target languages. There were many papers related to translation issues in either case that you will probably find on [Pubmed](http://www.ncbi.nlm.nih.gov/pubmed/). Various procedures have been developed to ensure consistent translation, but forward/backward translation is most commonly found in the above studies.
In any case, most common issues when translating items (as single entities, and as a whole, that is at the level of the questionnaire) have to do with the equivalence of the hypothetical concept(s) that is/are supposed to be covered.
Otherwise, the very first things I would look at would be:
- Am I measuring the same concepts? -- this is a question related to validity;
- Are the scores delivered in the foreign language reliable enough? -- this has to do with score reliability;
- Are the items behaving as expected for everybody, i.e. without differential effects depending on country or native language? -- this is merely related to differential item functioning (DIF), which is said to occur when the probability of endorsing a particular item differs according to a subject-specific covariate (e.g., age, gender, country), when holding subject trait constant.
Some of the common techniques used to assess those properties were discussed in an earlier related question, [Validating questionnaires](https://stats.stackexchange.com/questions/4239/validating-questionnaires).
About DIF specifically, here are some examples of subtle variation across subject-specific characteristics:
- In psychiatric studies, "I feel sad" / "Able to enjoy life" have been shown to highlight gender-related DIF (Crane et al., 2007).
- In personality assessment, there are well-known age and gender-effect on the NEO-PI questionnaire (reviewed in Kulas et al., 2008).
- In Health-related Quality-of-Life, items like "Did you worry?" / "Did you feel depressed?" have been shown to exhibit country-related DIF effect (Petersen
et al. 2003).
A good starting point is this review paper by Jeanne Teresi in 2004: [Differential Item Functioning and Health Assessment](http://outcomes.cancer.gov/conference/irt/teresi.pdf).
| null | CC BY-SA 2.5 | null | 2011-02-12T09:00:13.563 | 2011-02-12T09:00:13.563 | 2017-04-13T12:44:37.583 | -1 | 930 | null |