Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
6243 | 1 | null | null | 2 | 322 | When computing the difference between means I can resolve a new mean, variance, standard error of the mean, and margin of error, but is there a way to compute a new/composite sample size value?
I ask because I use the difference between means as a way to calibrate the mean, and later, when comparing other calibrated means, sample size comes into the formulas.
([Source](https://stattrek.com/sampling/difference-in-means))
### Update
In the code example below I subtract the calibration's mean from the benchmark's mean time to execute a test. The calibration is an empty test to clock the cost of the benchmark's overhead.
The only part I am missing is how to create a composite sample size (if that is possible).
```
mean -= cs.mean;
variance += cs.variance;
sd = sqrt(variance);
moe = sd * getCriticalValue(getDegreesOfFreedom(me, cal));
rme = (moe / mean) * 100;
extend(me.stats, {
'ME': moe,
'RME': rme,
'deviation': sd,
'mean': mean,
'size': ???,
'variance': variance
});
```
| Computing the difference between means and resolving a new/composite sample size | CC BY-SA 4.0 | null | 2011-01-14T08:09:04.757 | 2023-03-03T10:40:58.100 | 2023-03-03T10:39:58.870 | 362671 | 2634 | [
"sample-size",
"mean"
]
|
6244 | 2 | null | 6239 | 8 | null | Also useful, if you are combining multiple time series and don't want to have to `window` every one to get them to match: `ts.union` and `ts.intersect`.
| null | CC BY-SA 2.5 | null | 2011-01-14T09:35:20.220 | 2011-01-14T09:35:20.220 | null | null | 1195 | null |
6245 | 1 | null | null | 4 | 273 | In a field experiment involving crops, what is the difference between treating block as a random factor and treating it as a fixed factor?
As far as I understand, random means that conclusions can be extended to other levels not included in the study; a fixed factor, on the contrary, restricts the analysis to the specific experiment.
| How can I treat blocks in a split plot design? | CC BY-SA 2.5 | null | 2011-01-14T10:56:04.463 | 2011-01-14T19:32:48.250 | 2011-01-14T19:32:48.250 | 449 | 2779 | [
"anova",
"mixed-model",
"random-variable",
"experiment-design",
"split-plot"
]
|
6246 | 2 | null | 1455 | 3 | null | Generally, with a large sample size it is assumed, as a reasonable approximation, that all estimators (or suitable functions of them) have a normal distribution. So, if you only need the p-value corresponding to the given confidence interval, you can simply proceed as follows:
- transform $OR$ and the corresponding $(c1,c2)$ CI to $\ln(OR)$ and
$(\ln(c1),\ln(c2))$
[The $OR$ domain is $(0,+\infty)$ while $\ln(OR)$ domain is $(-\infty,+\infty)$]
- since the length of every CI depends on its level $\alpha$ and on the estimator's standard deviation, calculate
$$
sd(\ln OR)=\frac{\ln(c2)-\ln(c1)}{2z_{\alpha/2}}
$$
$[\text{Pr}(Z>z_{\alpha/2})=\alpha/2; \; z_{0.05/2}=1.96]$
- calculate the p-value corresponding to the (standardized normal) test statistic $z=\frac{\ln(OR)}{sd(\ln OR)}$
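A minimal R sketch of these three steps, using made-up numbers purely for illustration (an OR of 2.5 with a 95% CI of (1.2, 5.2)):
```
# Hypothetical example values: OR = 2.5 with 95% CI (1.2, 5.2)
or <- 2.5
c1 <- 1.2
c2 <- 5.2

# Step 1: move to the log scale
log_or <- log(or)

# Step 2: recover the standard deviation of log(OR) from the CI width
z_alpha <- qnorm(0.975)                        # 1.96 for a 95% CI
sd_log_or <- (log(c2) - log(c1)) / (2 * z_alpha)

# Step 3: standardised test statistic and two-sided p-value
z <- log_or / sd_log_or
2 * pnorm(-abs(z))
```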
| null | CC BY-SA 3.0 | null | 2011-01-14T12:10:06.210 | 2012-10-18T17:42:31.303 | 2012-10-18T17:42:31.303 | 7290 | 1219 | null |
6247 | 1 | 6251 | null | 3 | 2469 | I have a problem. I am creating a wide barplot (50 groups of 4 columns) and printing it to PostScript. However, the figure does not fit on the printed page - the x axis runs off the page. It would fit if only I could remove the large left margin on the y axis, but I have no idea how to move the picture left and reduce that margin. I have spent some time asking uncle Google but have not found anything. Could somebody help me?
Here is my R script:
```
mx <- matrix( c(5,3,8,9,5,3,8,9),nr=4)
postscript(file="<fileName>.ps");
barplot(mx, beside=T, col=gray.colors(4), cex.axis = 1.4, cex.names=1.2, xlim=c(1,60), width=0.264, xaxs = "r", yaxs = "r")
legend("topleft", c( "A1", "A2","B1","B2"), pch=15,
cex=1.1,
col=gray.colors(4),
bty="n")
```
| Postscript in R: How to remove a large left margin (on y axis) | CC BY-SA 2.5 | null | 2011-01-14T13:47:37.963 | 2011-01-14T15:39:25.757 | 2011-01-14T15:39:25.757 | 1389 | 1389 | [
"r",
"data-visualization",
"barplot"
]
|
6248 | 2 | null | 6247 | 2 | null | Excerpt from the help page of the `par` function:
> `mar`: A numerical vector of the form `c(bottom, left, top, right)` which gives the number of lines of margin to be specified on the four sides of the plot. The default is `c(5, 4, 4, 2) + 0.1`.
So try something like
```
par(mar=c(5,1,4,2))
```
before calling barplot.
| null | CC BY-SA 2.5 | null | 2011-01-14T13:52:32.007 | 2011-01-14T13:52:32.007 | null | null | 2116 | null |
6249 | 2 | null | 6247 | 2 | null | You could look after the par() function's mar(gin) paramater. A nice brief can be found in [efg's Research Notes](http://research.stowers-institute.org/efg/R/Graphics/Basics/mar-oma/index.htm).
| null | CC BY-SA 2.5 | null | 2011-01-14T13:53:55.810 | 2011-01-14T13:53:55.810 | null | null | 2714 | null |
6251 | 2 | null | 6247 | 4 | null | I quite like the suggestion made by [Andrew Gelman](http://www.stat.columbia.edu/~cook/movabletype/archives/2010/10/could_someone_p.html) for the default setting of `par`. Namely,
```
par(mar=c(3,3,2,1), mgp=c(2,.7,0), tck=-.01)
```
| null | CC BY-SA 2.5 | null | 2011-01-14T13:59:06.693 | 2011-01-14T13:59:06.693 | null | null | 8 | null |
6252 | 1 | null | null | 26 | 33963 | I have a clustering algorithm (not k-means) with input parameter $k$ (number of clusters). After performing clustering I'd like to get some quantitative measure of quality of this clustering.
The clustering algorithm has one important property. For $k=2$, if I feed $N$ data points without any significant distinction among them to this algorithm, then as a result I will get one cluster containing $N-1$ data points and one cluster with $1$ data point. Obviously this is not what I want. So I want to calculate a quality measure to estimate the reasonableness of this clustering. Ideally I will be able to compare this measure for different $k$. So I will run the clustering for a range of $k$ and choose the one with the best quality.
How do I calculate such quality measure?
UPDATE:
Here's an example when $(N-1, 1)$ is a bad clustering. Let's say there are 3 points on a plane forming an equilateral triangle. Splitting these points into 2 clusters is obviously worse than splitting them into 1 or 3 clusters.
| Clustering quality measure | CC BY-SA 2.5 | null | 2011-01-14T14:06:06.030 | 2022-05-12T13:08:10.357 | 2011-01-14T19:33:53.787 | 255 | 255 | [
"clustering"
]
|
6253 | 1 | null | null | 5 | 360 | You are in an exam, and are presented with the following question:
> Write down what mark you expect to get in this exam... If you get it right within +/- 10% then you will get a 10% bonus... if wrong (or not answered) you will lose 5%.
Assume that you have no idea of how you are going to perform in this exam. How would you choose a mark that maximizes your expected return mark?
In other words, I need help deriving an equation for the expected mark given a chosen mark, if that is possible...
| Increasing Exam Expected Mark | CC BY-SA 2.5 | null | 2011-01-14T15:02:59.787 | 2011-01-15T21:41:48.193 | 2011-01-14T15:24:06.853 | 8 | 2599 | [
"probability",
"expected-value"
]
|
6254 | 1 | null | null | 7 | 1158 | I have a very unbalanced sample set, e.g. 99% true and 1% false.
Is it reasonable to select a balanced subset with a 50/50 distribution for neural network training? The reason for this is that I suspect training on the original data set may induce a bias towards the true samples.
Can you suggest some literature that covers this topic, especially for neural networks?
| Balanced sampling for network training? | CC BY-SA 3.0 | null | 2011-01-14T15:13:26.037 | 2018-01-09T08:52:39.037 | 2018-01-09T08:52:39.037 | 128677 | null | [
"neural-networks",
"sampling",
"references"
]
|
6255 | 2 | null | 6253 | 7 | null | First a couple of assumptions:
1. All marks are equally likely.
1. If you guess your mark to be 95 and you get 95, your return mark is 100 not
105.
1. Similarly, if your exam mark is 1 and you guess 50 (say), then your return
mark is 0 not -4.
1. I'm only considering discrete marks, that is, values 0, ..., 100.
Suppose your guessed mark is $g=50$. Then your expected return mark is:
$$
\frac{\sum_{i=0}^{34} i + \sum_{i=50}^{70} i + \sum_{i=56}^{95} i}{101} \approx 48.27
$$
(the three sums collect the return marks for true marks of 0-39, 40-60 and 61-100 respectively).
This is for a particular $g$. We need to repeat this for all $g$. Using the R
code at the end, we get the following plot:

Since all marks are equally likely you get a plateau with edge effects. If you really have absolutely no idea of what mark you will get, then a sensible strategy would be to maximise the chance of passing the exam. If the pass mark is 40%, then set your guess mark at 35%. This now means that to pass the exam, you only need to get above 35% but more importantly your strategy for sitting the exam is to answer every question to the best of your ability.
If your guess mark was 30%, and towards the end of the exam you thought that you would score 42%, you are now in the strange position of deciding whether to intentionally make an error (as 42% results in a return mark of 37%).
Note: I think in most real life situations you would have some idea of how you would get on. For example, do you really think that you have equal probability of getting between 0-10%, 11-20%, ..., 90-100% in your exam?
R code
```
# Expected return mark when the guessed mark is s, assuming all true
# marks 0..100 are equally likely
f = function(s) {
  mark = 0
  for(i in 0:100){
    if(i < (s - 10) | i > (s + 10))
      mark = mark + max(0, i - 5)      # outside +/-10: lose 5%, floored at 0
    else
      mark = mark + min(i + 10, 100)   # within +/-10: gain 10%, capped at 100
  }
  return(mark/101)
}
s = 0:100
y = sapply(s, f)
plot(s, y)
```
| null | CC BY-SA 2.5 | null | 2011-01-14T15:17:31.510 | 2011-01-15T21:41:48.193 | 2011-01-15T21:41:48.193 | 8 | 8 | null |
6256 | 2 | null | 6254 | 7 | null | Yes, it is reasonable to select a balanced dataset; however, if you do, your model will probably over-predict the minority class in operation (or on the test set). This is easily overcome by using a threshold probability that is not 0.5. The best way to choose the new threshold is to optimise it on a validation sample that has the same class frequencies as encountered in operation (or in the test set).
Rather than re-sample the data, a better thing to do would be to give different weights to the positive and negative examples in the training criterion. This has the advantage that you use all of the available training data. The reason that a class imbalance leads to difficulties is not the imbalance per se. It is more that you just don't have enough examples from the minority class to adequately represent its underlying distribution. Therefore if you resample rather than re-weight, you are solving the problem by making the distribution of the majority class badly represented as well.
Some may advise simply using a different threshold rather than reweighting or resampling. The problem with that approach is that with an ANN the hidden layer units are optimised to minimise the training criterion, but the training criterion (e.g. sum-of-squares or cross-entropy) depends on the behaviour of the model away from the decision boundary, not only near it. As a result, hidden layer units may be assigned to tasks that reduce the value of the training criterion but do not help in accurate classification. Using re-weighted training patterns helps here as it tends to focus attention more on the decision boundary, and so the allocation of hidden layer resources may be better.
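As a rough illustration of the re-weighting idea (not a recommendation of specific settings), here is a sketch using case weights in the `nnet` package; the simulated data, network size and decay value are arbitrary placeholders:
```
library(nnet)

# Simulated, heavily imbalanced toy data (illustrative only)
set.seed(1)
n <- 5000
x <- matrix(rnorm(2 * n), ncol = 2)
y <- rbinom(n, 1, plogis(-5 + 2 * x[, 1]))   # minority class is rare

# Case weights inversely proportional to class frequency
w <- ifelse(y == 1, sum(y == 0) / sum(y == 1), 1)

fit <- nnet(x, y, weights = w, size = 5, decay = 0.01,
            entropy = TRUE, maxit = 200, trace = FALSE)

# Predicted probabilities; the classification threshold would then be
# tuned on a validation set with realistic class frequencies
p <- predict(fit, x)
```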
For references, a Google Scholar search for "Nitesh Chawla" would be a good start; he has done a fair amount of very solid work on this.
| null | CC BY-SA 2.5 | null | 2011-01-14T15:41:24.823 | 2011-01-14T15:41:24.823 | null | null | 887 | null |
6257 | 2 | null | 6252 | 5 | null | Since clustering is unsupervised, it's hard to know a priori what the best clustering is. This is an active research topic. Gary King, a well-known quantitative social scientist, has a [forthcoming article](http://gking.harvard.edu/publications/general-purpose-computer-assisted-clustering-methodology) on this topic.
| null | CC BY-SA 2.5 | null | 2011-01-14T16:47:59.270 | 2011-01-14T16:47:59.270 | null | null | null | null |
6258 | 2 | null | 6245 | 3 | null | The way you are thinking is one of the ways most people interpret blocks. But the bigger picture which sometimes people don't notice is: Blocks are a way to model a correlation structure. They let us "eliminate" or control for factors which we know influence the outcomes but are not really of interest. However, your conclusion about fixed factors may not be true in general. If the sampling is random and the effects can be considered fixed (or the individual variances are not very large), fixed factor analysis can be generalized to the population.
On a different note, if you read Casella's Statistical Design, he points out that blocks need not be treated as random. It does make sense to treat them as random, but not all the time. In most cases, thinking about blocks as a tool to impose a correlation structure or to control for "unknown" factors helps.
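For concreteness, the two choices might look like the sketch below (the data frame `d`, with columns `yield`, `treatment` and `block`, is hypothetical, and `lme4` is just one of several packages that fit such models):
```
library(lme4)

# Block as a fixed factor: a separate mean shift for each of these blocks;
# inference is conditional on the blocks actually used
fit_fixed <- lm(yield ~ treatment + block, data = d)

# Block as a random factor: blocks are treated as a sample from a population
# of blocks, which induces a within-block correlation structure
fit_random <- lmer(yield ~ treatment + (1 | block), data = d)

# For a full split plot the whole-plot factor usually gets its own random
# term as well, e.g. (1 | block/wholeplot)
```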
| null | CC BY-SA 2.5 | null | 2011-01-14T17:22:49.567 | 2011-01-14T17:30:57.563 | 2011-01-14T17:30:57.563 | 1307 | 1307 | null |
6260 | 2 | null | 6253 | 6 | null | I'm not sure if this would be a funny game or your professor is mildly sadistic. It would be torturous for students who are right on the edge of passing (which we may expect them to be the worst guessers!) Sorry not an answer but I couldn't help myself.

| null | CC BY-SA 2.5 | null | 2011-01-14T18:37:22.030 | 2011-01-14T18:37:22.030 | null | null | 1036 | null |
6261 | 2 | null | 6232 | 2 | null | Sounds like a mixed effects ANOVA. If you have a continuous treatment variable (i.e., harvesting intensity), then an ANCOVA is warranted (or, really, just a mixed effects/hierarchical general linear model) (or generalized linear model if your response variable better fits that framework).
| null | CC BY-SA 2.5 | null | 2011-01-14T19:57:45.567 | 2011-01-14T19:57:45.567 | null | null | 101 | null |
6262 | 2 | null | 6074 | 10 | null | This game looks similar to 20 questions at [http://20q.net](http://20q.net), which the creator reports is based on a neural network.
Here's one way to structure such a network, similar to the neural network described in [Concept description vectors and the 20 question game](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.60.3182).
You'd have
- A fixed number of questions, with some questions marked as "final" questions.
- One input unit per question, where 0/1 represents no/yes answer. Initially set to 0.5
- One output unit per question, sigmoid squished into 0..1 range
- Hidden layer connecting all input units to all output units.
Input units for questions that have been answered are set to 0 or 1, and the assumption is that the neural network has been trained to make output units produce values close to 1 for questions that have a "Yes" answer, given the set of existing answers.
At each stage you would pick the question the `NN` is least sure about, i.e. the one whose output unit is closest to `0.5`, ask that question, and set the corresponding input unit to the answer. At the last stage you pick the output unit from the "final question" list with the value closest to `1`.
Each game of 20 questions gives 20 data points which you can use to update the `NN`'s weights with back-propagation, i.e. you update the weights to make the outputs of the current network match the true answers given all the previous questions asked.
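The question-selection rule at each stage is easy to write down; a tiny sketch in R, where `outputs` and `answered` are hypothetical:
```
# Hypothetical output activations for six questions (values in 0..1)
outputs  <- c(0.92, 0.48, 0.10, 0.73, 0.55, 0.31)
answered <- c(TRUE, FALSE, FALSE, TRUE, FALSE, FALSE)

# Ask the as-yet-unanswered question the network is least sure about,
# i.e. the one whose output is closest to 0.5
candidates    <- which(!answered)
next_question <- candidates[which.min(abs(outputs[candidates] - 0.5))]
next_question   # question 2 in this made-up example
```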
| null | CC BY-SA 2.5 | null | 2011-01-14T21:57:51.143 | 2011-01-15T19:41:21.880 | 2011-01-15T19:41:21.880 | 511 | 511 | null |
6265 | 1 | null | null | 3 | 2861 | I would like some help with the following problem.
I have 40 subjects. On each subject I take a measurement at 25 body
sites. The measurement is a continuous variable that varies between 0 and 1 and appears to be normally distributed. I want to do a test to see if body site statistically significantly affects the measurement. I also have demographic data on the subjects but haven't tried to analyze that yet.
I think the correct thing to do would be to use repeated measures ANOVA or MANOVA to do the analysis because the data is paired. The problem is that I have missing data and my basic understanding is that I would have to exclude any subject that has data missing for even one body site. If I did this, I would only have 6 subjects left. If I excluded any body site with data missing then I would only have 8 body sites left.
So my thought was that I could use regular ANOVA but just lose some power to see a difference. However, this seems like it violates the assumption of independence for ANOVA. So my main questions are as follows:
- Does using Regular ANOVA on matched data violate the independence assumption, and if so is the only consequence a decreased likelihood of rejecting the null hypothesis or is there
more to it than that? Also, does anyone know of a reference I could cite regarding this?
- Does anyone have any suggestions for a better way to analyze my data? By the way I usually use JMP or SPSS for analysis.
Thanks!
Billy
| Consequence of violating independence assumption of ANOVA | CC BY-SA 2.5 | null | 2011-01-15T00:02:52.923 | 2011-01-16T10:24:07.540 | 2011-01-15T07:57:33.163 | 449 | 2788 | [
"anova",
"repeated-measures",
"missing-data",
"manova"
]
|
6266 | 2 | null | 6253 | 2 | null | Use the bootstrap! Take lots of practice exams and estimate what your score will be on the real exam. If it does not improve your estimate, it will probably be good preparation!
| null | CC BY-SA 2.5 | null | 2011-01-15T03:50:50.530 | 2011-01-15T03:50:50.530 | null | null | 795 | null |
6267 | 2 | null | 6252 | 5 | null | Here you have a couple of measures, but there are many more:
- SSE: sum of the squared errors from the items of each cluster to its centroid.
- Inter-cluster distance: sum of the squared distances between each pair of cluster centroids.
- Intra-cluster distance for each cluster: sum of the squared distances from the items of the cluster to its centroid.
- Maximum radius: largest distance from an instance to its cluster centroid.
- Average radius: sum over clusters of the largest distance from an instance to its cluster centroid, divided by the number of clusters.
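A sketch of how a few of these could be computed in R, given a numeric data matrix `X` and an integer vector of cluster labels `cl` taking values `1..k` (both hypothetical):
```
# Cluster centroids (k x p matrix, row i = centroid of cluster i)
centroids <- apply(X, 2, function(col) tapply(col, cl, mean))

# Squared distance from each point to its own centroid
d2_own <- rowSums((X - centroids[cl, , drop = FALSE])^2)

sse        <- sum(d2_own)                    # SSE
intra      <- tapply(d2_own, cl, sum)        # intra-cluster distance per cluster
max_radius <- max(sqrt(d2_own))              # maximum radius
avg_radius <- sum(tapply(sqrt(d2_own), cl, max)) / nrow(centroids)  # average radius
inter      <- sum(dist(centroids)^2)         # inter-cluster distance
```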
| null | CC BY-SA 2.5 | null | 2011-01-15T04:15:14.430 | 2011-01-15T04:15:14.430 | null | null | 1808 | null |
6268 | 1 | 6398 | null | 11 | 810 | I'm searching for a way to (visually) explain simple linear correlation to first year students.
The classical way to visualize it would be to give a Y~X scatter plot with a straight regression line.
Recently, I came upon the idea of extending this type of graphic by adding 3 more images to the plot, leaving me with: the scatter plot of y~1, then of y~x, then resid(y~x)~x, and lastly residuals(y~x)~1 (centered on the mean).
Here is an example of such a visualization:

And the R code to produce it:
```
set.seed(345)
x <- runif(50) * 10
y <- x +rnorm(50)
layout(matrix(c(1,2,2,2,2,3 ,3,3,3,4), 1,10))
plot(y~rep(1, length(y)), axes = F, xlab = "", ylim = range(y))
points(1,mean(y), col = 2, pch = 19, cex = 2)
plot(y~x, ylab = "")
abline(lm(y~x), col = 2, lwd = 2)
plot(c(residuals(lm(y~x)) + mean(y))~x, ylab = "", ylim = range(y))
abline(h =mean(y), col = 2, lwd = 2)
plot(c(residuals(lm(y~x)) + mean(y))~rep(1, length(y)), axes = F, xlab = "", ylab = "", ylim = range(y))
points(1,mean(y), col = 2, pch = 19, cex = 2)
```
Which leads me to my question: I would appreciate any suggestions on how this graph can be enhanced (either with text, marks, or any other type of relevant visualizations). Adding relevant R code will also be nice.
One direction is to add some information about the $R^2$ (either as text, or by somehow adding lines presenting the magnitude of the variance before and after the introduction of x).
Another option is to highlight one point and show how it is "better explained" thanks to the regression line. Any input will be appreciated.
| How to present the gain in explained variance thanks to the correlation of Y and X? | CC BY-SA 3.0 | null | 2011-01-15T06:36:34.710 | 2016-04-07T10:50:34.987 | 2016-04-07T10:50:34.987 | 2910 | 253 | [
"r",
"data-visualization",
"regression",
"correlation"
]
|
6269 | 2 | null | 6268 | 1 | null | Not an answer to your exact question, but the following could be interesting: it visualizes one possible pitfall of linear correlation, based on an [answer](https://stackoverflow.com/questions/4666590/remove-outliers-from-correlation-coefficient-calculation/4668720#4668720) from [stackoverflow](https://stackoverflow.com/q/4666590/564164):
```
par(mfrow=c(2,1))
set.seed(1)
x <- rnorm(1000)
y <- rnorm(1000)
plot(y~x, ylab = "", main=paste('1000 random values (r=', round(cor(x,y), 4), ')', sep=''))
abline(lm(y~x), col = 2, lwd = 2)
x <- c(x, 500)
y <- c(y, 500)
cor(x,y)
plot(y~x, ylab = "", main=paste('1000 random values and (500, 500) (r=', round(cor(x,y), 4), ')', sep=''))
abline(lm(y~x), col = 2, lwd = 2)
```

[@Gavin Simpson's](https://stackoverflow.com/questions/4666590/remove-outliers-from-correlation-coefficient-calculation/4667300#4667300) and [@bill_080's](https://stackoverflow.com/questions/4666590/remove-outliers-from-correlation-coefficient-calculation/4675675#4675675) answers also include nice plots of correlation on the same topic.
| null | CC BY-SA 2.5 | null | 2011-01-15T09:00:39.090 | 2011-01-15T09:00:39.090 | 2017-05-23T12:39:26.203 | -1 | 2714 | null |
6270 | 2 | null | 6252 | 17 | null | The choice of metric rather depends on what you consider the purpose of clustering to be. Personally I think clustering ought to be about identifying different groups of observations that were each generated by a different data generating process. So I would test the quality of a clustering by generating data from known data generating processes and then calculating how often patterns are misclassified by the clustering. Of course this involves making assumptions about the distribution of patterns from each generating process, but you can use datasets designed for supervised classification.
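A minimal sketch of that kind of check, using two made-up Gaussian generating processes and k-means purely as a stand-in for the algorithm under study:
```
set.seed(42)

# Two known data generating processes; true labels are known by construction
n <- 200
X <- rbind(matrix(rnorm(2 * n, mean = 0), ncol = 2),
           matrix(rnorm(2 * n, mean = 3), ncol = 2))
truth <- rep(1:2, each = n)

# Cluster the pooled data (k-means is just an example algorithm)
cl <- kmeans(X, centers = 2, nstart = 20)$cluster

# Misclassification rate, allowing for the arbitrary labelling of clusters
misclass <- min(mean(cl != truth), mean(cl != (3 - truth)))
misclass
```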
Others view clustering as attempting to group together points with similar attribute values, in which case measures such as SSE etc are applicable. However I find this definition of clustering rather unsatisfactory, as it only tells you something about the particular sample of data, rather than something generalisable about the underlying distributions. How methods deal with overlapping clusters is a particular problem with this view (for the "data generating process" view it causes no real problem, you just get probabilities of cluster membership).
| null | CC BY-SA 2.5 | null | 2011-01-15T10:16:53.073 | 2011-01-15T10:16:53.073 | null | null | 887 | null |
6271 | 2 | null | 2691 | 9 | null | Why so eigenvalues/eigenvectors ?
When doing PCA, you want to compute some orthogonal basis by maximizing the projected variance on each basis vector.
Having computed previous basis vectors, you want the next one to be:
- orthogonal to the previous
- norm 1
- maximizing projected variance, i.e with maximal covariance norm
This is a constrained optimization problem, and the Lagrange multipliers (see the wikipedia page for the geometric intuition) tell you that the gradients of the objective (projected variance) and the constraint (unit norm) should be "parallel" at the optimum.
This is the same as saying that the next basis vector should be an eigenvector of the covariance matrix. The best choice at each step is to pick the one with the largest eigenvalue among the remaining ones.
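A quick numerical check of this in R: the principal directions returned by `prcomp` coincide (up to sign) with the eigenvectors of the covariance matrix, ordered by decreasing eigenvalue. The data below are arbitrary simulated values:
```
set.seed(1)
X <- matrix(rnorm(200 * 3), ncol = 3) %*%
     matrix(c(2, 1, 0,  0, 1, 0,  0, 0, 0.2), nrow = 3)

# Eigen-decomposition of the covariance matrix
e <- eigen(cov(X))
e$vectors[, 1]            # direction with the largest projected variance

# Built-in PCA gives the same direction (possibly with a flipped sign)
prcomp(X)$rotation[, 1]
```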
| null | CC BY-SA 2.5 | null | 2011-01-15T12:25:12.660 | 2011-01-15T12:31:57.020 | 2011-01-15T12:31:57.020 | null | null | null |
6273 | 2 | null | 6243 | 1 | null | I just think you are making it too complex -- it is a JS benchmark for other programmers, not a clinical trial or Higgs boson search that will be peer reviewed by bloodthirsty referees and later have a great impact.
Just make a non-small number of repetitions (say 30) of both the test and the empty test, subtract the means, calculate the error of this mean difference as I wrote before, and publish just this -- adding dozens of other measures will only confuse the readers.
Then, for each comparison, you will have two means $X$ and $Y$ and two errors of those means, respectively $\Delta X$ and $\Delta Y$, so calculate $T$ as
$$T=\frac{X-Y}{\sqrt{(\Delta X)^2+(\Delta Y)^2}},$$
assume $\infty$ degrees of freedom for the t-test and you're done. (If you don't believe that $30=\infty$, see [this](https://en.wikipedia.org/wiki/Student%27s_t-distribution).)
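In R this whole comparison is a couple of lines (the means and errors below are placeholder numbers, not real measurements):
```
# Placeholder summary statistics for two benchmarks
x_mean <- 12.3; x_err <- 0.20   # mean and error of the mean, benchmark X
y_mean <- 11.8; y_err <- 0.25   # mean and error of the mean, benchmark Y

# Test statistic, treating the t distribution as normal (df = infinity)
T <- (x_mean - y_mean) / sqrt(x_err^2 + y_err^2)

# Two-sided p-value
2 * pnorm(-abs(T))
```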
| null | CC BY-SA 4.0 | null | 2011-01-15T13:46:04.417 | 2023-03-03T10:40:58.100 | 2023-03-03T10:40:58.100 | 362671 | null | null |
6274 | 2 | null | 6243 | 1 | null | How are you comparing the calibrated means? If you're looking at differences between them, and testing if that's zero using a $t$-test, then surely the mean of the empty loop ($\bar{x}_0$, say) will simply cancel out:
$$(\bar{x}_1 - \bar{x}_0) - (\bar{x}_2 - \bar{x}_0) = \bar{x}_1 - \bar{x}_2$$
| null | CC BY-SA 2.5 | null | 2011-01-15T13:58:40.187 | 2011-01-15T13:58:40.187 | null | null | 449 | null |
6275 | 1 | 6278 | null | 17 | 4962 | I am currently collecting data for an experiment into psychosocial characteristics associated with the experience of pain. As part of this, I am collecting GSR and BP measurements electronically from my participants, along with various self-report and implicit measures. I have a psychological background and am comfortable with factor analysis, linear models and experimental analysis.
My question is what are good (preferably free) resources available for learning about time series analysis. I am a total newb when it comes to this area, so any help would be greatly appreciated. I have some pilot data to practice on, but would like to have my analysis plan worked out in detail before I finish collected data.
If the provided references were also R related, that would be wonderful.
Edited: to change grammar and to add 'self report and implicit measures'
| Good introductions to time series (with R) | CC BY-SA 3.0 | null | 2011-01-15T14:01:55.437 | 2018-05-04T12:11:19.117 | 2018-05-04T12:11:19.117 | 53690 | 656 | [
"r",
"time-series",
"references"
]
|
6276 | 2 | null | 6275 | 6 | null | [Time Series Analysis and Its Applications: With R Examples](http://www.stat.pitt.edu/stoffer/tsa3/) by Robert H. Shumway and David S. Stoffer would be a great resource for the subject, but you can also find a lot of useful blog entries (e.g. my favorite one: [learnr](http://learnr.wordpress.com/)) and tutorials (e.g. [from the linked homepage](http://www.stat.pitt.edu/stoffer/tsa2/R_time_series_quick_fix.htm)) freely available on the Internet.
On David Stoffer's homepage (linked above) you can find the example datasets used in the book's chapters, plus material from the first and second editions, including sample chapters.
| null | CC BY-SA 2.5 | null | 2011-01-15T14:14:37.693 | 2011-01-15T14:20:01.030 | 2011-01-15T14:20:01.030 | 2714 | 2714 | null |
6277 | 2 | null | 6268 | 1 | null | I'd have two two-panel plots, each with the xy plot on the left and a histogram on the right. In the first plot, a horizontal line is placed at the mean of y and lines extend from this to each point, representing the residuals of the y values from the mean. The histogram to the right simply plots these residuals. Then in the next pair, the xy plot contains a line representing the linear fit, again with vertical lines representing the residuals, which are shown in a histogram to the right. Keep the x axis of the histograms constant to highlight the shift to lower values in the linear fit relative to the mean "fit".
| null | CC BY-SA 2.5 | null | 2011-01-15T14:27:25.763 | 2011-01-15T14:27:25.763 | null | null | 364 | null |
6278 | 2 | null | 6275 | 25 | null | This is a very large subject and there are many good books that cover it. These are both good, but Cryer is my favorite of the two:
- Cryer. "Time Series Analysis: With Applications in R" is a classic on the subject, updated to include R code.
- Shumway and Stoffer. "Time Series Analysis and Its Applications: With R Examples".
A good free resource is [Zoonekynd's ebook, especially the time series section](http://zoonek2.free.fr/UNIX/48_R/15.html).
My first suggestion for seeing the R packages would be the free ebook ["A Discussion of Time Series Objects for R in Finance"](https://www.rmetrics.org/ebooks-tseries) from Rmetrics. It gives lots of examples comparing the different time series packages and discusses some of the considerations, but it doesn't provide any theory.
Eric Zivot's ["Modeling financial time series with S-PLUS"](http://books.google.com/books?id=sxODP2l1mX8C) and Ruey Tsay's "[Analysis of Financial Time Series](http://rads.stackoverflow.com/amzn/click/0471690740)" (available in the TSA package on CRAN) are directed at financial time series, but both provide good general references. I strongly recommend looking at [Ruey Tsay's homepage](http://faculty.chicagobooth.edu/ruey.tsay/teaching/) because it covers all these topics, and provides the necessary R code. In particular, look at the ["Analysis of Financial Time Series"](http://faculty.chicagobooth.edu/ruey.tsay/teaching/bs41202/sp2009/), and ["Multivariate Time Series Analysis"](http://faculty.chicagobooth.edu/ruey.tsay/teaching/mts/sp2009/) courses.
| null | CC BY-SA 2.5 | null | 2011-01-15T14:46:59.737 | 2011-01-15T14:46:59.737 | null | null | 5 | null |
6279 | 1 | null | null | 2 | 4859 | I am a complete beginner in statistics. Recently a project required me to analyse data using logistic regression and SPSS within a specific time frame. Although I have read a few books, I am still very unclear on how to start. Can someone guide me through? What is the first step and what comes next?
Anyway, I have started some. Once I entered the data into SPSS, I did crosstabs (categorical IVs), descriptives (continuous IVs) and Spearman correlations.
Then I proceeded to test for nonlinearity by applying an ln transformation, which gave me some problems. I re-coded all zero cells to a small value (0.0001) to enable the ln transformation. Then I re-tested the nonlinearity.
Question:
1) Is the only solution for a violation to transform the variable from continuous to categorical? I got one violation.
2) One Exp(B) is extremely large (15203.835). What does this mean? Why?
3) One interaction has Exp(B) = 0.00. Why?
Many thanks.
| Steps of data analysis using logistic regression | CC BY-SA 2.5 | null | 2011-01-15T14:48:52.947 | 2011-01-16T04:48:53.307 | null | null | 2793 | [
"logistic"
]
|
6280 | 2 | null | 6180 | 4 | null | You ask how to 'formally and usefully' present your conclusions
Formally: Your answer is an accurate summary of some of the results from Brown et al. as I understand them. (I note you do not offer their preferred small n method).
Usefully: I wonder who you audience is. For professional statisticians, you could state your two intervals directly with only citations to the original papers - no further exposition needed. For an applied audience, you would surely rather pick an interval on whatever substantive grounds you (or they) have, e.g. a preference for conservative coverage or good behavior for small proportions, etc., and just present that interval alone, noting its nominal and perhaps also its actual coverage much as you do above, perhaps with a footnote to the effect that other intervals are possible.
As it stands you offer a choice of intervals but not much explicit guidance for an applied audience to make use of that information. In short, for that sort of audience I would suggest either more information about the implications of the choice of interval, or less!
| null | CC BY-SA 2.5 | null | 2011-01-15T15:02:55.903 | 2011-01-15T15:02:55.903 | null | null | 1739 | null |
6281 | 1 | 6284 | null | 6 | 19662 | I'm working through a practice problem for my Stats homework. We're using Confidence Intervals to find a range that the true mean lies within. I'm having trouble understanding how to find the required sample size to estimate the true mean within something like +- 0.5%.
I understand how to work the problem when the range is given as a number, such as +- 0.5 mm. How do I handle percentages?
| How to use Confidence Intervals to find the true mean within a percentage | CC BY-SA 2.5 | null | 2011-01-15T15:03:39.233 | 2015-09-24T12:12:15.880 | 2011-01-15T18:10:18.550 | 2129 | 2794 | [
"confidence-interval",
"self-study"
]
|
6284 | 2 | null | 6281 | 6 | null | I am not sure what kind of variable is being audited, so I give 2 alternatives:
- To be able to compute the required sample size that gives an acceptable estimate of a continuous variable (i.e. a given confidence interval) you have to know a few parameters: the mean, the standard deviation (and, to be precise, the population size). If you do not know these, you have to be able to give an accurate estimate of them (based on e.g. past research).
$$n=\left(\frac{Z_{c}\sigma}{E}\right)^2,$$
where $n$ is the sample size, $Z_{c}$ is chosen from the standard normal distribution table based on $\alpha$, $\sigma$ is the standard deviation, and $E$ is the margin of error.
- I could imagine that the variable being examined is a discrete one, and the confidence interval shows what proportion of the population chooses one category, based on the sample (a proportion). In that case the required sample size can be computed easily with $$n=p(1-p)\left(\frac{Z_{c}}{E}\right)^2$$ where $n$ is the sample size, $p$ is the proportion in the population, $Z_{c}$ is chosen from the standard normal distribution table based on $\alpha$, and $E$ is the margin of error.
Note: you can find a lot of online calculators also ([e.g.](http://www.rad.jhmi.edu/jeng/javarad/samplesize/)). Worth reading [this article](http://www.ltcconline.net/greenl/courses/201/estimation/ciprop.htm) also.
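If you prefer to compute these directly, both formulas are one-liners in R; the standard deviation, proportion and margin of error below are purely illustrative guesses:
```
conf <- 0.95
z <- qnorm(1 - (1 - conf) / 2)                 # 1.96 for 95% confidence

# Continuous variable: assumed sigma = 15, margin of error E = 2
sigma <- 15; E <- 2
n_cont <- ceiling((z * sigma / E)^2)

# Proportion: assumed p = 0.5 (most conservative), margin of error E = 0.03
p <- 0.5; E <- 0.03
n_prop <- ceiling(p * (1 - p) * (z / E)^2)

c(n_cont = n_cont, n_prop = n_prop)
```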
| null | CC BY-SA 3.0 | null | 2011-01-15T18:37:14.230 | 2015-09-24T12:12:15.880 | 2015-09-24T12:12:15.880 | 22228 | 2714 | null |
6285 | 2 | null | 6243 | 0 | null | Thanks @mbq and @onestop. After running some tests the calibration was a wash. Subtracting the means raised the margin or error to the point that the calibrated and none calibrated test results where indeterminately different.
@mbq I will take your advice and reduce the critical value lookup to 30 (as infinity).
When comparing against other benchmarks I think I can even avoid the walsh t-test because the variances are so low (all below 1). Some examples are the variances of benchmarks are:
`1.1733873442408789e-16`, `0.000012223589489868368`, `3.772214786601029e-19`, and `6.607725046958527e-16`.
| null | CC BY-SA 2.5 | null | 2011-01-15T20:28:57.713 | 2011-01-15T20:28:57.713 | null | null | 2634 | null |
6286 | 2 | null | 6265 | 1 | null | You should consider using mixed-effect / multi-level models. The techniques used to fit these models work fine with unbalanced designs, which is how they will treat your missing data. As long as the data are missing at random, this is a reasonable way to proceed. SPSS is able to fit linear mixed-effect models.
Mixed-effect models also allow continuous covariates, so you could add, e.g., age as a person-level predictor.
| null | CC BY-SA 2.5 | null | 2011-01-15T21:30:57.260 | 2011-01-15T21:30:57.260 | null | null | 2739 | null |
6287 | 1 | 6289 | null | 6 | 838 | I'm fooling around with threshold time series models. While I was digging through what others have done, I ran across the CDC's site for flu data.
[http://www.cdc.gov/flu/weekly/](http://www.cdc.gov/flu/weekly/)
About 1/3 of the way down the page is a graph titled "Pneumonia and Influenza Mortality....". It shows the actuals in red, and two black seasonal series. The top seasonal series is labeled "Epidemic Threshold" and appears to be some constant percent/amount above the "Seasonal Baseline" series.
My first question is: Is that really how they determine when to publicly say we're in an epidemic (some percent above baseline)? It looks to me like they're in the noise range, not to mention the "other factors" influence that is obviously not accounted for in that baseline series. To me, there are way too many false positives.
My second question is: Can you point me to any real world examples/publications of threshold models (hopefully in R)?
| Threshold models and flu epidemic recognition | CC BY-SA 2.5 | null | 2011-01-15T23:09:06.293 | 2022-05-30T12:30:56.767 | 2011-01-16T12:23:18.243 | null | 2775 | [
"r",
"time-series",
"epidemiology",
"threshold"
]
|
6288 | 2 | null | 6279 | 1 | null | Generally large beta coefficients signal multi-collinearity. You should look for marginals that are zero in your cross-tabulations. You should also pay attention to mpiktas's comment. Testing for linearity (and transforming to categorical) is not generally needed if you have been setting up your data correctly.
| null | CC BY-SA 2.5 | null | 2011-01-15T23:48:40.987 | 2011-01-15T23:48:40.987 | null | null | 2129 | null |
6289 | 2 | null | 6287 | 6 | null | The CDC uses the epidemic threshold of
>
1.645 standard deviations above the baseline for that time of year.
The definition may have multiple sorts of detection or mortality endpoints. (The one you are pointing to is pneumonia and influenza mortality. The lower black curve is not really a series, but rather a modeled seasonal mean, and the upper black curve is 1.645 sd's above that mean).
[http://www.cdc.gov/mmwr/PDF/ss/ss5107.pdf](http://www.cdc.gov/mmwr/PDF/ss/ss5107.pdf)
[http://www.cdc.gov/flu/weekly/pdf/overview.pdf](http://www.cdc.gov/flu/weekly/pdf/overview.pdf)
```
> pnorm(1.645)
[1] 0.950015
```
So it's a 95% threshold. (And it does look as though about 1 out of 20 weeks are over the threshold. You pick your thresholds, not to be perfect, but to have the sensitivity you deem necessary.) The seasonal adjustment model appears to be sinusoidal. There is an [R "flubase" package](http://search.r-project.org/cgi-bin/namazu.cgi?query=seasonal+model+epidemic&max=100&result=normal&sort=score&idxname=functions&idxname=vignettes&idxname=views) that should be consulted.
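To see the idea, here is a rough sketch of a sinusoidal baseline plus a 1.645-SD threshold fitted to simulated weekly mortality percentages; it illustrates the construction only and is not the CDC's actual procedure:
```
set.seed(1)
week <- 1:(52 * 5)

# Simulated weekly P&I mortality percentage with a seasonal cycle
p_i <- 7 + 1.5 * cos(2 * pi * week / 52) + rnorm(length(week), sd = 0.5)

# Seasonal baseline: harmonic (sinusoidal) regression
fit <- lm(p_i ~ cos(2 * pi * week / 52) + sin(2 * pi * week / 52))
baseline  <- fitted(fit)
threshold <- baseline + 1.645 * summary(fit)$sigma   # epidemic threshold

plot(week, p_i, type = "l", col = "red",
     xlab = "Week", ylab = "% of deaths due to P&I")
lines(week, baseline)
lines(week, threshold, lty = 2)
```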
| null | CC BY-SA 4.0 | null | 2011-01-16T01:18:19.567 | 2019-02-02T02:23:34.853 | 2019-02-02T02:23:34.853 | 11887 | 2129 | null |
6290 | 2 | null | 6268 | 1 | null | I think what you propose is good, but I would do it in three different examples
1) X and Y are completely unrelated. Simply remove "x" from the R code that generates y (y1<-rnorm(50))
2) The example you posted (y2 <- x+rnorm(50))
3) X and Y are the same variable. Simply remove "rnorm(50)" from the R code that generates y (y3<-x)
This would more explicitly show how increasing the correlation decreases the variability in the residuals. You would just need to make sure that the vertical axis doesn't change with each plot, which may happen if you're using default scaling.
So you could compare three plots r1 vs x, r2 vs x and r3 vs x. I am using "r" to indicate the residuals from the fit using y1, y2, and y3 respectively.
My R skills in plotting are quite hopeless, so I can't offer much help here.
| null | CC BY-SA 2.5 | null | 2011-01-16T04:00:33.937 | 2011-01-16T04:00:33.937 | null | null | 2392 | null |
6291 | 2 | null | 6279 | 1 | null | The large value of "B" would be a coefficient of a variable usually called "X" in your model. Usually, "X" has a real world meaning (could be income, could be a measured volume of something, etc.). So the job is to interpret this "B" in terms of "X". The usual definition (in ordinary least squares) is that a ONE UNIT increase in "X" corresponds to a "B" increase in "Y" (where "Y" is the dependent variable which you are modeling). It is similar (but not exact) when interpreting for a logistic regression: a ONE UNIT increase in "X" corresponds to a B increase in the log-odds (which is "Y" in this case). Therefore EXP(B) tells you the proportional increase in the odds for a ONE UNIT increase in "X". So the question which may help is "What does a ONE UNIT increase in X mean in the real world?" This may make the apparent extreme value seem more sensible. Another thing to ask is what is the range of X values in your data? for if this is much smaller than 1 then EXP(B) is effectively extrapolating well beyond what you have observed in the "X space" (something which is generally not recommended because the relationship may be different).
Put more succinctly the numerical value of your betas is related to the scale at which you measure your X variables (or independent variables).
The easiest mathematical way to see this is the form of the simple ordinary least squares estimate which is
$$B=\frac{\text{sd}(Y)}{\text{sd}(X)}\times\text{cor}(Y,X)$$
This result is not exact in logistic regression, but it does approximately happen.
I would also recommend that replacing the observed proportions with (r_i+1)/(n_i+2) is not a bad way to go before plugging them into the logistic function, because it can help guard against creating extreme logit values (which are only an artifact of your choice of "small number"). Extreme logit values can create outliers and influential points in the regression, and this can make your regression coefficients highly sensitive to these observations, or to be more precise, to your particular choice of "small number".
This has been called "Laplace's rule of succession" (add 1 success and 1 failure to what was observed) and it effectively "pulls" the proportion back towards 1/2, with less "pulling" the greater n_i is. In Bayesian language it corresponds to the prior information that it is possible for both of the binary outcomes to occur.
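For example, with some made-up counts:
```
# Made-up successes r and trials n for three groups
r <- c(0, 3, 10)
n <- c(12, 20, 10)

raw_logit      <- qlogis(r / n)              # -Inf and +Inf at the extremes
adjusted_logit <- qlogis((r + 1) / (n + 2))  # pulled gently towards logit(1/2) = 0
cbind(raw_logit, adjusted_logit)
```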
| null | CC BY-SA 2.5 | null | 2011-01-16T04:48:53.307 | 2011-01-16T04:48:53.307 | null | null | 2392 | null |
6292 | 2 | null | 6265 | 3 | null | Billy,
From the comment about the data being rejected if the light is below a certain intensity, does this correspond to why you have missing records? Because if so, then you do not have total "missingness", but rather "censoring", because you know that the response is below a certain threshold. This does have an impact because the likelihood function becomes a product of the density functions for the observed values and a product of the cumulative density functions for those values which were "rejected". I would say that this is a case of the data being "Not Missing At Random" (NMAR). Basically this means the cause of the missingness is related to the actual value that is missing, which in this case is a value below a certain threshold.
One thing I find interesting about NMAR is it often comes across as a "bad" thing. My view is that it is actually a good thing (from a statistical point of view) because you haven't completely lost the record, it is still giving you some information. The only bad thing is that the "standard" mathematics usually don't work as elegantly, and standard software can't be used as easily. You just need to work harder to extract the information.
Another thing you make reference to is the "mean intensity" of the light. It seems odd to talk of "mean" when referring to a single measurement. Or is this the mean of the light over the area of the spot?
Survival analysis with censored data may be a useful place to start, since this is a similar problem mathematically. It may be a bit of a hurdle to "translate" the survival application into a relevant one for your analysis.
My advice would be to start simple, and build complexity as you go. It seems as though you have already done this a bit (by excluding the demographic data initially).
One way to do this is to do a simple one-way ANOVA using only complete cases, with the classification over the 25 sites. One-way unbalanced ANOVA is quite easy to implement (but not as a regression! lack of balance makes ANOVA different from its standard regression representation!). It resembles the two-sample t-test with different sample sizes.
Another option is to analyse the data 2 body sites at a time using complete "paired" cases. This would give you a kind of "ordering" of the body types.
This analysis is all of the "exploratory" type, because it essentially "throws away" part of the information in exchange for a more simple view of the data.
It does seem as though this would be quite an involved analysis to take all of the information into account (particularly the censoring). The multi-level approach suggested by @BR would be a useful approximation to this (by "throwing away" the threshold information). The multiple imputation approach suggested by Christopher Aden is another way to go, where you get the advantage of the easier to understand ANOVA, while properly taking account of the uncertainty due to missing values.
| null | CC BY-SA 2.5 | null | 2011-01-16T10:24:07.540 | 2011-01-16T10:24:07.540 | null | null | 2392 | null |
6293 | 2 | null | 6281 | 2 | null | It does seem a bit odd for this problem, because there does not appear to be a pivotal statistic, or, if there is, it isn't the usual Z or T statistic.
Here's why I think this is the case.
The problem of estimating the population mean, say $\mu$, to within $\pm $ 0.5% obviously depends on the value of $\mu$ (a pivotal statistic would NOT depend on $\mu$). To estimate $\mu$ within an absolute amount, say $\pm $1, is independent of the actual value of $\mu$ (in the normally distributed case). To put it another way, the width of the standard "Z" confidence interval does not depend on $\mu$, it only depends on the population standard deviation, say $\sigma$, the sample size n, and the level of confidence, expressed by the value Z. You can call the length of this interval $ L=L(\sigma,n,Z)=\frac{2 \sigma Z}{\sqrt{n}} $
Now we want an interval which is $0.01 \mu $ wide (equal length either side of $\mu$). So the required equation that we need to solve is:
$ L=0.01 \mu=\frac{2 \sigma Z}{\sqrt{n}} $
Re-arranging for n gives
$ n = (\frac{2 \sigma Z}{0.01 \mu})^2 = 40,000 Z^2 (\frac{\sigma}{\mu})^2 $
Using Z=1.96 to have a 95% CI gives
$ n = 153,664 * (\frac{\sigma}{\mu})^2 $
So that you need some prior information about the ratio $\frac{\sigma}{\mu}$ (by "prior information" I mean you need to know something about the ratio $\frac{\sigma}{\mu}$ in order to solve the problem). If $\frac{\sigma}{\mu}$ is not known with certainty, then the "optimal sample size" also cannot be known with certainty. The best way to go from here is to specify a probability distribution for $\frac{\sigma}{\mu}$ and then take the expected value of $(\frac{\sigma}{\mu})^2$ and put this into the above equation.
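Numerically, with $Z=1.96$ and a guessed ratio $\sigma/\mu = 0.2$ (purely for illustration):
```
Z <- qnorm(0.975)     # 1.96 for a 95% interval
ratio <- 0.2          # assumed value of sigma / mu
n <- ceiling((2 * Z * ratio / 0.01)^2)
n                     # about 6147 observations
```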
What happens if we only require $\pm 0.005 $ (rather than $\pm 0.005 \mu$) is that $\mu$ in the above equations for n disappears.
| null | CC BY-SA 2.5 | null | 2011-01-16T11:31:53.067 | 2011-01-18T10:51:39.927 | 2011-01-18T10:51:39.927 | 2392 | 2392 | null |
6294 | 1 | 6295 | null | 6 | 3316 | What analyses can be used to find an interaction effect in a 2-factor design, with one ordinal and one categorical factor, with binary-valued data?
Specifically, are there any types of analyses that are capable of dealing with a 2 factor design 5(ordinal) x 2(categorical), where the outcomes are either true or false?
One could do a 5x2 chi square analysis, but it loses the power of the ordinality of the one factor.
Alternatively, one could run independent logistic/probit regressions, but then there is the question of testing for the interaction effect.
Any thoughts or suggestions that would put me in the right direction would be helpful.
| Interaction between ordinal and categorical factor | CC BY-SA 2.5 | null | 2011-01-16T11:51:10.763 | 2011-01-16T13:54:38.253 | 2011-01-16T12:26:17.087 | null | 2800 | [
"interaction"
]
|
6295 | 2 | null | 6294 | 3 | null | I'd stick with logistic or probit regression, enter both factors as covariates, but enter the ordinal factor as if it were continuous. To test for interaction, do a [likelihood-ratio test](http://en.wikipedia.org/wiki/Likelihood-ratio_test) comparing models with and without an interaction between the two factors. This test will have a single degree of freedom and therefore retain good power.
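A sketch of that comparison in R (the data frame `d`, with binary outcome `y`, the ordinal factor entered as the numeric score `ord`, and the two-level factor `grp`, is hypothetical):
```
# Ordinal factor entered as a numeric score, so the interaction costs 1 df
m0 <- glm(y ~ ord + grp, family = binomial, data = d)
m1 <- glm(y ~ ord * grp, family = binomial, data = d)

# Likelihood-ratio test for the interaction (1 degree of freedom)
anova(m0, m1, test = "Chisq")
```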
After using this to decide whether or not you want to include an interaction between the two factors, you can then move on to decide how best to code the 5-level factor in your final model. It could make sense to keep treating it as if it were continuous, or you might wish to code it as four dummy (indicator) variables, or you choose to collapse it into fewer levels, or use some other type of contrast. The choice probably depends on the scientific meaning of the model and its intended use, as well as the fit of the various models.
| null | CC BY-SA 2.5 | null | 2011-01-16T12:43:29.283 | 2011-01-16T12:43:29.283 | null | null | 449 | null |
6297 | 2 | null | 2715 | 18 | null | Always ask yourself "what do these results mean and how will they be used?"
Usually the purpose of using statistics is to assist in making decisions under uncertainty. So it is important to have at the front of your mind "What decisions will be made as a result of this analysis and how will this analysis influence these decisions?" (e.g. publish an article, recommend a new method be used, provide $X in funding to Y, get more data, report an estimated quantity as E, etc.)
If you don't feel that there is any decision to be made, then one wonders why you are doing the analysis in the first place (as it is quite expensive to do analysis). I think of statistics as a "nuisance" in that it is a means to an end, rather than an end itself. In my view we only quantify uncertainty so that we can use this to make decisions which account for this uncertainty in a precise way.
I think this is one reason why keeping things simple is a good policy in general, because it is usually much easier to relate a simple solution to the real world (and hence to the environment in which the decision is being made) than the complex solution. It is also usually easier to understand the limitations of the simple answer. You then move to the more complex solutions when you understand the limitations of the simple solution, and how the complex one addresses them.
| null | CC BY-SA 2.5 | null | 2011-01-16T13:48:53.937 | 2011-01-16T13:48:53.937 | null | null | 2392 | null |
6298 | 1 | 10760 | null | 28 | 5942 | [Google Prediction API](https://cloud.google.com/prediction/docs) is a cloud service where user can submit some training data to train some mysterious classifier and later ask it to classify incoming data, for instance to implement spam filters or predict user preferences.
But what is behind the scenes?
| What is behind Google Prediction API? | CC BY-SA 3.0 | null | 2011-01-16T14:01:00.537 | 2016-01-25T14:09:42.297 | 2016-01-25T14:09:42.297 | null | null | [
"machine-learning"
]
|
6299 | 2 | null | 5903 | 3 | null | Given that simple linear regression is analytically identical between classical and Bayesian analysis with Jeffrey's prior, both of which are analytic, it seems a bit odd to resort to a numerical method such as MCMC to do the Bayesian analysis. MCMC is just a numerical integration tool, which allows Bayesian methods to be used in more complicated problems which are analytically intractable, just the same as Newton-Raphson or Fisher Scoring are numerical methods for solving classical problems which are intractable.
The posterior distribution p(b|y) using the Jeffrey's prior p(a,b,s) proportional to 1/s (where s is the standard deviation of the error) is a student t distribution with location b_ols, scale se_b_ols ("ols" for "ordinary least squares" estimate), and n-2 degrees of freedom. But the sampling distribution of b_ols is also a student t with location b, scale se_b_ols, and n-2 degrees of freedom. Thus they are identical except that b and b_ols have been swapped, so when it comes to creating the interval, the "est +- bound" of the confidence interval gets reversed to a "est -+ bound" in the credible interval.
So the confidence interval and credible interval are analytically identical, and it matters not which method is used (provided there is no additional prior information) - so take the method which is computationally cheaper (e.g. the one with fewer matrix inversions). What your result with MCMC shows is that the particular approximation used with MCMC gives a credible interval which is too wide compared to the exact analytic credible interval. This is probably a good thing (although we would want the approximation to be better) that the approximate Bayesian solution appears more conservative than the exact Bayesian solution.
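The numerical equivalence is easy to check; a sketch with simulated data, writing the analytic credible interval under the Jeffreys-type prior out by hand:
```
set.seed(1)
x <- rnorm(30)
y <- 2 + 3 * x + rnorm(30)
fit <- lm(y ~ x)

# Classical 95% confidence interval for the slope
confint(fit, "x")

# Analytic 95% credible interval under the prior proportional to 1/s:
# b_ols +/- t_{0.975, n-2} * se(b_ols), i.e. numerically the same interval
b  <- coef(fit)["x"]
se <- summary(fit)$coefficients["x", "Std. Error"]
b + c(-1, 1) * qt(0.975, df = df.residual(fit)) * se
```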
| null | CC BY-SA 2.5 | null | 2011-01-16T14:47:21.463 | 2011-01-16T14:52:45.937 | 2011-01-16T14:52:45.937 | 2392 | 2392 | null |
6300 | 2 | null | 6225 | 5 | null | For me, the decision theoretical framework presents the easiest way to understand the "null hypothesis". It basically says that there must be at least two alternatives: the Null hypothesis, and at least one alternative. Then the "decision problem" is to accept one of the alternatives, and reject the others (although we need to be precise about what we mean by "accepting" and "rejecting" the hypothesis). I see the question of "can we prove the null hypothesis?" as analogous to "can we always make the correct decision?". From a decision theory perspective the answer is clearly yes if
1) there is no uncertainty in the decision making process, for then it is a mathematical exercise to work out what the correct decision is.
2) we accept all the other premises/assumptions of the problem. The most critical one (I think) is that the hypotheses we are deciding between are exhaustive, and one (and only one) of them must be true, and the others must be false.
From a more philosophical standpoint, it is not possible to "prove" anything, in the sense that the "proof" depends entirely on the assumptions / axioms which lead to that "proof". I see proof as a kind of logical equivalence rather than a "fact" or "truth" in the sense that if the proof is wrong, the assumptions which led to it are also wrong.
Applying this to "proving the null hypothesis", I can "prove" it to be true simply by assuming that it is true, or by assuming that it is true if certain conditions are met (such as the value of a statistic).
| null | CC BY-SA 2.5 | null | 2011-01-16T15:35:37.147 | 2011-01-16T15:35:37.147 | null | null | 2392 | null |
6301 | 2 | null | 4663 | 41 | null | When used in stage-wise mode, the LARS algorithm is a greedy method that does not yield a provably consistent estimator (in other words, it does not converge to a stable result when you increase the number of samples).
Conversely, the LASSO (and thus the LARS algorithm when used in LASSO mode) solves a convex data fitting problem. In particular, this problem (the L1 penalized linear estimator) has plenty of nice proved properties (consistency, sparsistency).
I would thus try to always use the LARS in LASSO mode (or use another solver for LASSO), unless you have very good reasons to prefer stage-wise.
| null | CC BY-SA 3.0 | null | 2011-01-16T17:42:01.520 | 2012-11-06T18:12:52.440 | 2012-11-06T18:12:52.440 | 16049 | 1265 | null |
6302 | 1 | 6303 | null | 8 | 4407 | NOTE: I purposely did not label the axes due to pending publications. The line colors represent the same data in all three plots.
I fitted my data using a negative binomial distribution to generate a PDF. I am happy with the PDF and it meets my research needs. PDF plot:

---
When reporting the CDF, should I use the empirical or the fitted CDF? There are slight differences between the empirical and fitted CDFs; specifically, at x = 40 the yellow and cyan lines intersect in the empirical distribution, but not in the fitted one.
Empirical:

Negative Binomial CDF:

| Use Empirical CDF vs Distribution CDF? | CC BY-SA 2.5 | null | 2011-01-16T20:56:43.303 | 2011-01-18T12:42:18.770 | 2011-01-16T23:09:09.193 | 449 | 559 | [
"distributions",
"data-visualization",
"density-function",
"cumulative-distribution-function"
]
|
6303 | 2 | null | 6302 | 5 | null | Personally, I'd favour instead showing the fit of the theoretical to the empirical distribution using a set of [P-P plots](http://en.wikipedia.org/wiki/P-P_plot) or [Q-Q plots](http://en.wikipedia.org/wiki/Q-Q_plot).
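For instance, a Q-Q plot of the data against the fitted negative binomial could be sketched roughly as below; the data vector and the fitted `size`/`mu` values are placeholders:
```
x <- rnbinom(500, size = 2, mu = 20)   # stand-in for the observed counts
size_hat <- 2; mu_hat <- 20            # stand-ins for the fitted parameters

theoretical <- qnbinom(ppoints(length(x)), size = size_hat, mu = mu_hat)
qqplot(theoretical, x,
       xlab = "Fitted negative binomial quantiles",
       ylab = "Observed quantiles")
abline(0, 1)
```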
| null | CC BY-SA 2.5 | null | 2011-01-16T23:07:41.130 | 2011-01-17T11:19:01.070 | 2011-01-17T11:19:01.070 | 449 | 449 | null |
6304 | 1 | 6307 | null | 22 | 5530 | Let $t_i$ be drawn i.i.d from a Student t distribution with $n$ degrees of freedom, for moderately sized $n$ (say less than 100). Define
$$T = \sum_{1\le i \le k} t_i^2$$
Is $T$ distributed nearly as a chi-square with $k$ degrees of freedom? Is there something like the Central Limit Theorem for the sum of squared random variables?
| What is the sum of squared t variates? | CC BY-SA 2.5 | null | 2011-01-17T03:34:38.283 | 2021-02-07T03:23:59.257 | 2021-02-07T03:23:59.257 | 11887 | 795 | [
"central-limit-theorem",
"t-distribution",
"chi-squared-distribution",
"sums-of-squares"
]
|
6305 | 2 | null | 6304 | 8 | null | I'll answer the second question. The central limit theorem applies to any iid sequence with finite variance, squared or not (here this requires $n>4$, so that the variance of $t_1^2$ exists). So in your case, if $k$ is sufficiently large, we have approximately
$\dfrac{T-kE(t_1^2)}{\sqrt{k\operatorname{Var}(t_1^2)}}\sim N(0,1)$
where $E(t_1^2)$ and $\operatorname{Var}(t_1^2)$ are, respectively, the mean and variance of the squared Student t distribution with $n$ degrees of freedom. Note that $t_1^2$ is distributed as an F distribution with $1$ and $n$ degrees of freedom, so we can grab the formulas for the mean and variance from the [wikipedia page](http://en.wikipedia.org/wiki/F-distribution). The final result then is:
$\dfrac{T-k\frac{n}{n-2}}{\sqrt{k\frac{2n^2(n-1)}{(n-2)^2(n-4)}}}\sim N(0,1)$
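As a quick, hedged sanity check (my own simulation, with illustrative choices $n=10$ and $k=200$), the standardized sum is indeed close to standard normal when $k$ is large:
```
## Simulation check of the normal approximation above.
set.seed(1)
n <- 10; k <- 200; B <- 20000
T <- replicate(B, sum(rt(k, df = n)^2))
m <- k * n / (n - 2)                                # mean of T
v <- k * 2 * n^2 * (n - 1) / ((n - 2)^2 * (n - 4))  # variance of T
z <- (T - m) / sqrt(v)
c(mean = mean(z), sd = sd(z))   # should be close to 0 and 1
mean(z > 1.96)                  # should be close to 0.025
```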
| null | CC BY-SA 2.5 | null | 2011-01-17T04:07:32.863 | 2011-01-17T04:07:32.863 | null | null | 2116 | null |
6306 | 1 | null | null | 41 | 1112 | Although this question is somewhat subjective, I hope it
qualifies
as a good subjective question according to the [faq guidelines](http://blog.stackoverflow.com/2010/09/good-subjective-bad-subjective/).
It is based on a question that Olle Häggström asked me a year ago
and although I have some thoughts about it I do not have a
definite answer and I would appreciate some help from others.
## Background:
A paper entitled "Equidistant letter sequences in the book of Genesis," by
D. Witztum, E. Rips and Y. Rosenberg
made the extraordinary claim that the Hebrew text of the
Book of Genesis encodes events which
did not occur until millennia after the text was written. The paper was
published by "Statistical Science" in 1994 (Vol. 9 429-438), and was offered
as a "challenging puzzle" whose solution may contribute to the field of statistics.
In reply, [another paper](http://cs.anu.edu.au/%7Ebdm/dilugim/StatSci/StatSci.pdf) entitled "Solving the Bible code puzzle"
by B. McKay, D. Bar-Natan, M. Bar-Hillel and G. Kalai appeared in
Statistical science in 1999 (Vol. 14 (1999) 150-173). The new paper
argues that Witztum,
Rips and Rosenberg's case is fatally defective,
indeed that their result merely reflects on
the choices made in designing their experiment and
collecting the data for it. The paper presents
extensive evidence in support of that conclusion.
(My own interests which are summarized in Section 8
of our paper are detailed
in another [technical report](http://www.ma.huji.ac.il/%7Ekalai/ratio.ps) with Bar Hillel and Mckay
entitled "The two famous rabbis experiments: how
similar is too similar?" See also [this site](http://www.ma.huji.ac.il/%7Ekalai/bc.html).)
## The questions:
Olle Häggström's specific question was:
>
"I once suggested that your paper
might be useful in a statistics course
on advanced undergraduate level, for
the purpose of illustrating the
pitfalls of data mining and related
techniques. Would you agree?"
In addition to Olle's question let me ask a more general question.
>
Is there something related to statistics that we have learned (including, perhaps, some interesting questions to ask) from the Bible Code episode?
Just to make it clear, my question is restricted to
insights related to statistics and not to any other aspect of this episode.
| Are there statistical lessons from the "Bible Code" episode | CC BY-SA 2.5 | null | 2011-01-17T09:18:09.130 | 2022-04-16T00:02:12.807 | 2020-06-11T14:32:37.003 | -1 | 1148 | [
"hypothesis-testing",
"data-mining"
]
|
6307 | 2 | null | 6304 | 15 | null | Answering the first question.
We could start from the fact noted by mpiktas, that $t^2 \sim F(1, n)$, and then try a simpler step first: search for the distribution of the sum of two random variables distributed as $F(1,n)$. This could be done either by calculating the convolution of the two random variables, or by calculating the product of their characteristic functions.
The [article](http://cowles.econ.yale.edu/P/cp/p05b/p0560.pdf) by P.C.B. Phillips shows that my first guess about "[confluent] hypergeometric functions involved" was indeed right. It means that the solution will not be trivial: the brute-force calculation is complicated, but it is a necessary step to answer your question. So, since $n$ is fixed and you sum up squared t variates, we can't say for sure what the final result will be, unless someone is skilled at playing with products of confluent hypergeometric functions.
| null | CC BY-SA 2.5 | null | 2011-01-17T10:44:06.310 | 2011-01-17T10:44:06.310 | null | null | 2645 | null |
6308 | 1 | 6310 | null | 20 | 2690 | I am referring to this article: [http://www.nytimes.com/2011/01/11/science/11esp.html](http://www.nytimes.com/2011/01/11/science/11esp.html)
>
Consider the following experiment. Suppose there was reason to believe that a coin was slightly weighted toward heads. In a test, the coin comes up heads 527 times out of 1,000.
Is this significant evidence that the
coin is weighted?
Classical analysis says yes. With a
fair coin, the chances of getting 527
or more heads in 1,000 flips is less
than 1 in 20, or 5 percent, the
conventional cutoff. To put it another
way: the experiment finds evidence of
a weighted coin “with 95 percent
confidence.”
Yet many statisticians do not buy it.
One in 20 is the probability of
getting any number of heads above 526
in 1,000 throws. That is, it is the
sum of the probability of flipping
527, the probability of flipping 528,
529 and so on.
But the experiment did not find all of
the numbers in that range; it found
just one — 527. It is thus more
accurate, these experts say, to
calculate the probability of getting
that one number — 527 — if the coin is
weighted, and compare it with the
probability of getting the same number
if the coin is fair.
Statisticians can show that this ratio
cannot be higher than about 4 to 1,
according to Paul Speckman, a
statistician, who, with Jeff Rouder, a
psychologist, provided the example.
First question: This is new to me. Does anybody have a reference where I can find the exact calculation, can you give me the exact calculation yourself, or can you point me to some material where I can find similar examples?
>
Bayes devised a way to update the
probability for a hypothesis as new
evidence comes in.
So in evaluating the strength of a
given finding, Bayesian (pronounced
BAYZ-ee-un) analysis incorporates
known probabilities, if available,
from outside the study.
It might be called the “Yeah, right”
effect. If a study finds that kumquats
reduce the risk of heart disease by 90
percent, that a treatment cures
alcohol addiction in a week, that
sensitive parents are twice as likely
to give birth to a girl as to a boy,
the Bayesian response matches that of
the native skeptic: Yeah, right. The
study findings are weighed against
what is observable out in the world.
In at least one area of medicine —
diagnostic screening tests —
researchers already use known
probabilities to evaluate new
findings. For instance, a new
lie-detection test may be 90 percent
accurate, correctly flagging 9 out of
10 liars. But if it is given to a
population of 100 people already known
to include 10 liars, the test is a lot
less impressive.
It correctly identifies 9 of the 10
liars and misses one; but it
incorrectly identifies 9 of the other
90 as lying. Dividing the so-called
true positives (9) by the total number
of people the test flagged (18) gives
an accuracy rate of 50 percent. The
“false positives” and “false
negatives” depend on the known rates
in the population.
Second question: How exactly do you judge whether a new finding is "real" or not with this method? And isn't this as arbitrary as the 5% barrier, because of the use of some preset prior probability?
| Article about misuse of statistical method in NYTimes | CC BY-SA 2.5 | null | 2011-01-17T11:11:10.560 | 2016-09-15T00:28:09.350 | 2016-09-15T00:28:09.350 | 28666 | 230 | [
"hypothesis-testing",
"bayesian",
"statistics-in-media"
]
|
6309 | 1 | 6311 | null | 12 | 10028 | I was reading through a paper and I saw a table with a comparison between PPV (Positive Predictive Value) and NPV (Negative Predictive Value). They did some kind of statistical test for them, this is a sketch of the table:
```
PPV NPV p-value
65.9 100 < 0.00001
...
```
Each row refers to a particular contingency table.
What kind of hypothesis test did they do?
Thanks!
| Statistical test for positive and negative predictive value | CC BY-SA 3.0 | null | 2011-01-17T11:25:59.540 | 2012-08-26T22:55:52.843 | 2012-08-26T22:55:52.843 | null | 2719 | [
"epidemiology",
"contingency-tables",
"p-value"
]
|
6310 | 2 | null | 6308 | 31 | null | I will answer the first question in detail.
>
With a fair coin, the chances of
getting 527 or more heads in 1,000
flips is less than 1 in 20, or 5
percent, the conventional cutoff.
For a fair coin the number of heads in 1000 trials follows the [binomial distribution](http://en.wikipedia.org/wiki/Binomial_distribution) with number of trials $n=1000$ and probability $p=1/2$. The probability of getting more than 527 heads is then
$$P(B(1000,1/2)>=527)$$
This can be calculated with any statistical software package. R gives us
```
> pbinom(526,1000,1/2,lower.tail=FALSE)
0.04684365
```
So the probability that with a fair coin we will get more than 526 heads is approximately 0.047, which is close to the 5% cutoff mentioned in the article.
The following statement
>
To put it another way: the experiment
finds evidence of a weighted coin
“with 95 percent confidence.”
is debatable. I would be reluctant to say it, since 95% confidence can be interpreted in several ways.
Next we turn to
>
But the experiment did not find all of
the numbers in that range; it found
just one — 527. It is thus more
accurate, these experts say, to
calculate the probability of getting
that one number — 527 — if the coin is
weighted, and compare it with the
probability of getting the same number
if the coin is fair.
Here we compare two events $B(1000,1/2)=527$ -- fair coin, and $B(1000,p)=527$ -- weighted coin. Substituting the [formulas](http://en.wikipedia.org/wiki/Binomial_distribution) for probabilities of these events and noting that the binomial coefficient cancels out we get
$$\frac{P(B(1000,p)=527)}{P(B(1000,1/2)=527)}=\frac{p^{527}(1-p)^{473}}{(1/2)^{1000}}.$$
This is a function of $p$, so we can find its minima or maxima. From the article we may infer that we need the maximum:
>
Statisticians can show that this ratio
cannot be higher than about 4 to 1,
according to Paul Speckman, a
statistician, who, with Jeff Rouder, a
psychologist, provided the example.
To make the maximisation easier, take the logarithm of the ratio, calculate the derivative with respect to $p$, and equate it to zero. The solution is
$$p=\frac{527}{1000}.$$
We can check that it is really a maximum using [second derivative test](http://en.wikipedia.org/wiki/Second_derivative_test) for example. Substituting it to the formula we get
$$\frac{(527/1000)^{527}(473/1000)^{473}}{(1/2)^{1000}}\approx 4.3$$
So the ratio is 4.3 to 1, which agrees with the article.
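A quick numerical check in R (my own addition) confirms this maximum:
```
## The binomial coefficient cancels in the ratio, so we can use dbinom directly.
ratio <- function(p) dbinom(527, 1000, p) / dbinom(527, 1000, 0.5)
opt <- optimize(ratio, interval = c(0, 1), maximum = TRUE)
opt$maximum    # approximately 0.527
opt$objective  # approximately 4.3
```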
| null | CC BY-SA 2.5 | null | 2011-01-17T13:22:38.390 | 2011-01-17T19:48:13.903 | 2011-01-17T19:48:13.903 | 2116 | 2116 | null |
6311 | 2 | null | 6309 | 17 | null | Assuming a cross-classification like the one shown below (here, for a screening instrument)

we can define four measures of screening accuracy and predictive power:
- Sensitivity (se), a/(a + c), i.e. the probability of the screen providing a positive result given that disease is present;
- Specificity (sp), d/(b + d), i.e. the probability of the screen providing a negative result given that disease is absent;
- Positive predictive value (PPV), a/(a+b), i.e. the probability of patients with positive test results who are correctly diagnosed (as positive);
- Negative predictive value (NPV), d/(c+d), i.e. the probability of patients with negative test results who are correctly diagnosed (as negative).
All four measures are simple proportions computed from the observed data. A suitable statistical test would thus be a [binomial (exact) test](http://en.wikipedia.org/wiki/Binomial_test), which should be available in most statistical packages, or many online calculators. The tested hypothesis is whether the observed proportion significantly differs from 0.5 or not. I find it more interesting, however, to provide confidence intervals rather than a single significance test, since they give information about the precision of measurement. Anyway, to reproduce the results you showed, you need to know the total margins of your two-way table (you only gave the PPV and NPV as %).
As an example, suppose that we observe the following data (the CAGE questionnaire is a screening questionnaire for alcohol):

then in R the PPV would be computed as follows:
```
> binom.test(99, 142)
Exact binomial test
data: 99 and 142
number of successes = 99, number of trials = 142, p-value = 2.958e-06
alternative hypothesis: true probability of success is not equal to 0.5
95 percent confidence interval:
0.6145213 0.7714116
sample estimates:
probability of success
0.6971831
```
If you are using SAS, then you can look at the Usage Note 24170: [How can I estimate sensitivity, specificity, positive and negative predictive values, false positive and negative probabilities, and the likelihood ratios?](http://support.sas.com/kb/24/170.html).
To compute confidence intervals, the gaussian approximation, $p \pm 1.96 \times \sqrt{p(1-p)/n}$ (1.96 being the quantile of the standard normal distribution at $p=0.975$, or $1-\alpha/2$ with $\alpha=5$%), is often used in practice, although it becomes unreliable when the proportions are quite small or large (which is often the case here), in which case exact intervals like the one above are preferable.
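As a hedged illustration (numbers taken from the PPV example above), the approximation can be checked against the exact interval:
```
## Wald (gaussian) approximation vs exact Clopper-Pearson interval for 99/142.
p <- 99 / 142; n <- 142
p + c(-1, 1) * 1.96 * sqrt(p * (1 - p) / n)   # approximate 95% CI
binom.test(99, 142)$conf.int                  # exact 95% CI
```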
For further reference, you can look at
>
Newcombe, RG. Two-Sided Confidence
Intervals for the Single Proportion:
Comparison of Seven Methods.
Statistics in Medicine, 17, 857-872 (1998).
| null | CC BY-SA 2.5 | null | 2011-01-17T13:57:31.230 | 2011-01-17T14:10:13.687 | 2011-01-17T14:10:13.687 | 930 | 930 | null |
6312 | 1 | null | null | 14 | 1335 | A signal detection experiment typically presents the observer (or diagnostic system) with either a signal or a non-signal, and the observer is asked to report whether they think the presented item is a signal or non-signal. Such experiments yield data that fill a 2x2 matrix:

Signal detection theory represents such data as representing a scenario where the "signal/non-signal" decision is based on a continuum of signal-ness on which the signal trials generally have a higher value than the non-signal trials, and the observer simply chooses a criterion value above which they will report "signal":

In the diagram above, the green and red distributions represent the "signal" and "non-signal" distributions, respectively, and the grey line represents a given observer's chosen criterion. To the right of the grey line, the area under the green curve represents the hits and the area under the red curve represents the false alarms; to the left of the grey line, the area under the green curve represents misses and the area under the red curve represents correct rejections.
As may be imagined, according to this model, the proportion of responses that fall into each cell of the 2x2 table above is determined by:
- The relative proportion of trials sampled from the green and red distributions (base rate)
- The criterion chosen by the observer
- The separation between the distributions
- The variance of each distribution
- Any departure from equality of variance between distributions (equality of variance is depicted above)
- The shape of each distribution (both are Gaussian above)
Often the influence of #5 and #6 can only be assessed by getting the observer to make decisions across a number of different criterion levels, so we'll ignore that for now. Additionally, #3 and #4 only make sense relative to one another (eg. how big is the separation relative to the variability of the distributions?), summarized by a measure of "discriminability" (also known as d'). Thus, signal detection theory prescribes estimation of two properties from signal detection data: criterion & discriminability.
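For reference, a hedged R sketch (hypothetical counts, standard equal-variance Gaussian SDT formulas) of how the two properties are estimated from a single 2x2 table:
```
## Hypothetical counts from a 2x2 signal detection table.
hits <- 40; misses <- 10; fas <- 15; crs <- 35
H  <- hits / (hits + misses)          # hit rate
FA <- fas  / (fas + crs)              # false-alarm rate
dprime <- qnorm(H) - qnorm(FA)        # discriminability
crit   <- -0.5 * (qnorm(H) + qnorm(FA))  # criterion (bias)
c(dprime = dprime, criterion = crit)
```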
However, I have often noticed that research reports (particularly from the medical field) fail to apply the signal detection framework and instead attempt to analyze quantities such as "Positive predictive value", "Negative predictive value", "Sensitivity", and "Specificity", all of which represent different marginal values from the 2x2 table above ([see here for elaboration](http://en.wikipedia.org/wiki/Positive_predictive_value)).
What utility do these marginal properties provide? My inclination is to disregard them completely because they confound the theoretically independent influences of criterion and discriminability, but possibly I simply lack the imagination to consider their benefits.
| Is it valid to analyze signal detection data without employing metrics derived from signal detection theory? | CC BY-SA 2.5 | null | 2011-01-17T14:15:04.320 | 2014-06-01T02:11:56.710 | null | null | 364 | [
"diagnostic",
"signal-detection"
]
|
6314 | 1 | 10976 | null | 23 | 2050 | In 1999, Beyer et al. asked,
[When is "Nearest Neighbor" meaningful?](http://www.cis.temple.edu/~vasilis/Courses/CIS750/Papers/beyer99when_17.pdf)
Are there better ways of analyzing and visualizing
the effect of distance flatness on NN search since 1999?
>
Does [a given] data set provide meaningful answers to the 1-NN problem?
The 10-NN problem? The 100-NN problem?
How would you experts approach this question today?
---
Edits Monday 24 Jan:
How about "distance whiteout" as a shorter name for "distance flatness
with increasing dimension" ?
An easy way to look at "distance whiteout"
is to run 2-NN,
and plot distances to the nearest neighbor and second-nearest neighbors.
The plot below shows dist1 and dist2
for a range of nclusters and dimensions, by Monte Carlo.
This example shows pretty good distance contrast
for the scaled absolute difference |dist2 - dist1|.
(The relative differences |dist2 / dist1|
→ 1 as dimension → ∞, so become useless.)
Whether absolute errors or relative errors should be used
in a given context
depends of course on the "real" noise present: difficult.
Suggestion: always run 2-NN;
2 neighbors are useful when they're close, and useful when not.

| When is "Nearest Neighbor" meaningful, today? | CC BY-SA 2.5 | null | 2011-01-17T15:12:50.590 | 2011-05-19T18:10:44.623 | 2011-01-24T17:13:36.567 | 557 | 557 | [
"machine-learning",
"k-nearest-neighbour"
]
|
6315 | 2 | null | 6312 | 3 | null | The Positive Predictive Value (PPV) is not a good measure, not only because it confounds both mechanisms (discriminability and response bias), but also because of item base-rates. It is preferable to use the posterior probabilities, like P(signal|"yes"), which account for item base-rates:
$P(signal|yes) = \frac{P(signal)P(Hit)}{P(signal)P(Hit)+P(noise)P(False Alarm)}$
But... what is it good for? Well, it is useful for adjusting the response criterion in order to maximize/minimize the probability of a specific outcome. So, it is complementary to the sensitivity and response bias measures in the sense that it helps to summarize the outcomes of changes in response bias.
A word of advice: if you are sticking with a 2x2 outcome matrix, which basically only allows you to get a sensitivity measure like d', don't even bother with SDT and just use Hits-False Alarms. Both measures (d' and (H-F)) have a correlation of .96 (no matter what BS detection theorists might come up with).
hope this helps
cheers
| null | CC BY-SA 2.5 | null | 2011-01-17T15:24:33.570 | 2011-01-17T15:38:40.497 | 2011-01-17T15:38:40.497 | 447 | 447 | null |
6316 | 2 | null | 256 | 7 | null | Boosting employs shrinkage through the learning rate parameter, which, coupled with k-fold cross-validation, "out-of-bag" (OOB) predictions, or an independent test set, determines the number of trees one should keep in the ensemble.
We want a model that learns slowly, hence there is a trade-off between the complexity of each individual model and the number of models to include. The guidance I have seen suggests you should set the learning rate as low as is feasibly possible (given compute time and storage space requirements), whilst the complexity of each tree should be selected on the basis of whether interactions are allowed, and to what degree: the more complex the tree, the more complex the interactions that can be represented.
The learning rate is chosen in the range $[0,1]$, with smaller values ($<0.01$) preferred. This is a weighting applied to each tree to down-weight the contribution of each model to the fitted values.
k-fold CV (or OOB predictions or independent test set) is used to decide when the boosted model has started to overfit. Essentially it is this that stops us boosting to the perfect model, but it is better to learn slowly so we have a large ensemble of models contributing to the fitted model.
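As a hedged sketch (using the `gbm` package with simulated data; the parameter values are illustrative only, not a recommendation):
```
## Small shrinkage, many trees, CV to choose how many trees to keep.
library(gbm)
set.seed(1)
d <- data.frame(x1 = runif(500), x2 = runif(500))
d$y <- sin(2 * pi * d$x1) + d$x2 + rnorm(500, sd = 0.3)
fit <- gbm(y ~ x1 + x2, data = d, distribution = "gaussian",
           n.trees = 3000, shrinkage = 0.01,  # slow learning
           interaction.depth = 2,             # allows two-way interactions
           cv.folds = 5)
best <- gbm.perf(fit, method = "cv")          # CV-chosen number of trees
best
```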
| null | CC BY-SA 3.0 | null | 2011-01-17T15:36:54.080 | 2015-08-25T12:17:32.040 | 2015-08-25T12:17:32.040 | 71672 | 1390 | null |
6317 | 1 | 6323 | null | 7 | 1120 | In boosting, each additional tree is fitted to the unexplained variation in the response that is currently un-modelled. If we are using squared-error loss, this amounts to fitting on the residuals from the aggregation of the trees fitted up to this point. I am not clear on whether it is at this point that the shrinkage (learning rate) is applied? E.g. we fit the first tree, and compute fitted values. Do we now shrink (down weight) these fitted values by the learning rate yielding a new set of fitted values, from which we compute the residuals that are used as the responses for the second tree in the ensemble, and iterate through many trees?
| When is the shrinkage applied in Friedman's stochastic gradient boosting machine? | CC BY-SA 2.5 | null | 2011-01-17T15:44:45.267 | 2011-01-18T00:42:38.193 | 2011-01-18T00:42:38.193 | null | 1390 | [
"machine-learning",
"boosting"
]
|
6318 | 1 | null | null | 13 | 972 | In response to a growing body of statisticians and researchers that criticize the utility of null-hypothesis testing (NHT) for science as a cumulative endeavour, the American Psychological Association Task Force on Statistical Inference avoided an outright ban on NHT, but instead suggested that researchers report effect sizes in addition to p-values derived from NHT.
However, effect sizes are not easily accumulated across studies. Meta-analytic approaches can accumulate distributions of effect sizes, but effect sizes are typically computed as a ratio between raw effect magnitude and unexplained "noise" in the data of a given experiment, meaning that the distribution of effect sizes is affected not only by the variability in the raw magnitude of the effect across studies, but also variability in the manifestation of noise across studies.
In contrast, an alternative measure of effect strength, likelihood ratios, permit both intuitive interpretation on a study-by-study basis, and can be easily aggregated across studies for meta-analysis. Within each study, the likelihood represents the weight of evidence for a model containing a given effect relative to a model that does not contain the effect, and could typically be reported as, for example, "Computation of a likelihood ratio for the effect of X revealed 8 times more evidence for the effect than for its respective null".
Furthermore, the likelihood ratio also permits intuitive representation of the strength of null findings insofar as likelihood ratios below 1 represent scenarios where the null is favoured, and taking the reciprocal of this value represents the weight of evidence for the null over the effect. Notably, the likelihood ratio is represented mathematically as the ratio of unexplained variances of the two models, which differ only in the variance explained by the effect, and thus is not a huge conceptual departure from an effect size.
On the other hand, computation of a meta-analytic likelihood ratio, representing the weight of evidence for an effect across studies, is simply a matter of taking the product of likelihood ratios across studies.
Thus, I argue that for science seeking to establish the degree of gross evidence in favour of a effect/model, likelihood ratios are the way to go.
There are more nuanced cases where models are differentiable only in the specific size of an effect, in which case some sort of representation of the interval over which we believe the data are consistent with effect parameter values might be preferred. Indeed, the APA task force also recommends reporting confidence intervals, which can be used to this end, but I suspect that this is also an ill-considered approach.
Confidence intervals are lamentably often misinterpreted ([by students and researchers alike](http://www.ncbi.nlm.nih.gov/pubmed/16392994)). I also fear that their ability for use in NHT (by assessment of inclusion of zero within the CI) will only serve to further delay the extinction of NHT as an inferential practice.
Instead, when theories are differentiable only by the size of effects, I suggest that Bayesian approach would be more appropriate, where the prior distribution of each effect is defined by each model separately, and the resulting posterior distributions are compared.
Does this approach, replacing p-values, effect sizes and confidence intervals with likelihood ratios and, if necessary, Bayesian model comparison, seem sufficient? Does it miss out on some necessary inferential feature that the here-maligned alternatives provide?
| Do likelihood ratios and Bayesian model comparison provide superior & sufficient alternatives to null-hypothesis testing? | CC BY-SA 2.5 | null | 2011-01-17T16:07:47.570 | 2011-01-18T15:31:45.893 | 2011-01-17T17:45:06.630 | 364 | 364 | [
"bayesian",
"confidence-interval",
"effect-size",
"inference"
]
|
6319 | 2 | null | 1964 | 2 | null | On Matlab File Exchange, there is a kde function that provides the optimal bandwidth with the assumption that a Gaussian kernel is used: [Kernel Density Estimator](http://www.mathworks.com/matlabcentral/fileexchange/14034-kernel-density-estimator).
Even if you don't use Matlab, you can parse through this code for its method of calculating the optimal bandwidth. This is a highly rated function on file exchange and I have used it many times.
| null | CC BY-SA 3.0 | null | 2011-01-17T17:18:11.883 | 2016-04-12T16:53:54.970 | 2016-04-12T16:53:54.970 | 22047 | 559 | null |
6320 | 1 | null | null | 5 | 1057 | I have a very large number of observations. Observations arrive sequentially. Each observation is an $n$-dimensional vector (with $n \ge 100$), is independent from the others, and is drawn from the same unknown distribution. Is there an optimal policy to estimate the unknown distribution, given some space bounds on the number of observations that can be stored? I would leave the estimation criterion open-ended (in terms of expected or minimax error, asymptotic consistency, asymptotic efficiency, etc.).
| Estimating probability distribution function of a data stream | CC BY-SA 2.5 | null | 2011-01-17T18:32:55.550 | 2011-02-06T01:23:06.167 | 2011-01-17T18:50:47.337 | 2116 | 30 | [
"estimation",
"multivariable"
]
|
6321 | 1 | 6322 | null | 9 | 1145 | I have two variables, and I can calculate e.g. the Pearson correlation between them, but I would like to know something analogous to what a t-test would give me (i.e. some notion of how significant the correlation is).
Does such a thing exist?
| Assessing significance of correlation | CC BY-SA 2.5 | null | 2011-01-17T19:06:08.327 | 2011-01-21T10:05:07.917 | 2011-01-18T00:39:18.570 | null | 900 | [
"correlation",
"statistical-significance"
]
|
6322 | 2 | null | 6321 | 8 | null | Yes, you can get a $p$-value for testing the null hypothesis that the Pearson correlation is zero. See [http://en.wikipedia.org/wiki/Pearson%27s_correlation#Inference](http://en.wikipedia.org/wiki/Pearson%27s_correlation#Inference).
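In R, for example, `cor.test()` does exactly this (simulated data for illustration):
```
set.seed(1)
x <- rnorm(50)
y <- 0.4 * x + rnorm(50)
cor.test(x, y)   # reports r, a t statistic with df = n - 2, and the p-value
```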
| null | CC BY-SA 2.5 | null | 2011-01-17T19:09:49.817 | 2011-01-17T19:09:49.817 | null | null | 449 | null |
6323 | 2 | null | 6317 | 4 | null | Using trees, the shrinkage takes place at the update stage of the algorithm, when the new function $f_k(x)$ is created as the function from the prior step, $f_{k-1}(x)$, plus the new decision tree output $p_k(x)$. This new tree output $p_k(x)$ is scaled by the learning rate parameter.
See for example the implementation in R [GBM](http://cran.r-project.org/web/packages/gbm/vignettes/gbm.pdf) on page 6.
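A hedged, hand-rolled sketch of that update (squared-error loss, `rpart` stumps, simulated data; `nu` stands for the learning rate) may make the order of operations clearer:
```
## Each new tree is fit to the current residuals; its output is scaled by nu
## before being added to the running fit.
library(rpart)
set.seed(1)
x <- runif(300)
y <- sin(2 * pi * x) + rnorm(300, sd = 0.3)
nu <- 0.1
f <- rep(mean(y), length(y))   # initial fit
for (k in 1:200) {
  r    <- y - f                # residuals under squared-error loss
  tree <- rpart(r ~ x, control = rpart.control(maxdepth = 1))
  f    <- f + nu * predict(tree)   # shrink the tree output, then update
}
mean((y - f)^2)   # training error after 200 shrunken trees
```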
| null | CC BY-SA 2.5 | null | 2011-01-17T19:35:52.177 | 2011-01-18T00:42:11.000 | 2011-01-18T00:42:11.000 | null | 2040 | null |
6324 | 2 | null | 6312 | 2 | null | This might be an over-simplification, but specificity and sensitivity are measures of performance, and they are used when there isn't any objective knowledge of the nature of the signal. Your density vs. signalness plot assumes one variable that quantifies signalness. For very high-dimensional or infinite-dimensional data, and without a rigorous, provable theory of the mechanism generating the signal, the selection of that variable is non-trivial. The question then arises why, after selecting such a variable, its statistical properties, like the mean and variance for signal and non-signal, are not quantified. In many cases the variable may not simply be normally, Poisson, or exponentially distributed. It may not even fit any parametric family, in which case quantifying the separation as mean difference over variance does not make much sense.
Also, a lot of the literature in the biomedical field is focused on applications, and ROC curves, specificity and sensitivity can be used as objective criteria for comparing approaches within the limited scope of the problem, and basically that is all that is required. Sometimes people may not be interested in describing, say, the actual discrete-version log-gamma distribution of the ratio of gene1 vs. gene2 transcript abundance in diseased vs. control subjects; the only thing of importance may be whether this quantity is elevated and how much variance of the phenotype or probability of disease it explains.
| null | CC BY-SA 2.5 | null | 2011-01-17T19:36:23.677 | 2011-01-18T17:29:07.060 | 2011-01-18T17:29:07.060 | 2728 | 2728 | null |
6325 | 2 | null | 6304 | 16 | null | It's not even a close approximation. For small $n$, the expectation of $T$ equals $\frac{k n}{n-2}$ whereas the expectation of $\chi^2(k)$ equals $k$. When $k$ is small (less than 10, say) histograms of $\log(T)$ and of $\log(\chi^2(k))$ don't even have the same shape, indicating that shifting and rescaling $T$ still won't work.
Intuitively, for small degrees of freedom Student's $t$ is heavy tailed. Squaring it emphasizes that heaviness. The sums therefore will be more skewed--usually much more skewed--than sums of squared normals (the $\chi^2$ distribution). Calculations and simulations bear this out.
---
### Illustration (as requested)

Each histogram depicts an independent simulation of 100,000 trials with the specified degrees of freedom ($n$) and summands ($k$), standardized as described by @mpiktas. The value of $n=9999$ on the bottom row approximates the $\chi^2$ case. Thus you can compare $T$ to $\chi^2$ by scanning down each column.
Note that standardization is not possible for $n \lt 5$ because the appropriate moments do not even exist. The lack of stability of shape (as you scan from left to right across any row or from top to bottom down any column) is even more marked for $n \le 4$.
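A hedged sketch of the kind of simulation behind these histograms (my own code, for one $(n,k)$ combination):
```
## Compare log(T) with log(chi^2(k)) for small n and k.
set.seed(1)
n <- 5; k <- 3; B <- 100000
T <- replicate(B, sum(rt(k, df = n)^2))
C <- rchisq(B, df = k)
par(mfrow = c(1, 2))
hist(log(T), breaks = 100, main = "log T, n = 5, k = 3")
hist(log(C), breaks = 100, main = "log chi-square, k = 3")
```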
---
Finally, let's address the question about a central limit theorem. Since the square of a random variable is a random variable, the usual Central Limit Theorem automatically applies to sequences of independent squared random variables like those in the question. For its conclusion (convergence of the standardized sum to Normality) to hold, we need the squared random variable to have finite variance.
Which Student t variables, when squared, have finite variance? When $X$ is any random variable, by one standard definition the variance of its square $Y=X^2$ is
$$\operatorname{Var}(Y) = E[Y^2] - E[Y]^2 = E[X^4] - E[X^2]^2.$$
Finiteness of $E[X^4]$ will assure finiteness of $E[X^2].$ Because the student $t$ density with $\nu$ degrees of freedom is (up to a rescaling of $X$) proportional to $f_{\nu}(x)=(1+x^2)^{-(\nu+1)/2},$ the question comes down to the finiteness of the integral of $x^4$ times this. Because the product is bounded, we are concerned with the behavior as $|x|\to\infty,$ where the integrand is asymptotically
$$x^4 f_{\nu}(x) \sim x^4 (x^2)^{-(\nu+1)/2} = x^{3 - \nu}.$$
Its integral diverges when the exponent exceeds $-1$ and otherwise converges; that is,
>
The standardized version of $T$ converges to a standard Normal distribution if and only if $\nu \gt 4.$
| null | CC BY-SA 4.0 | null | 2011-01-17T20:28:09.393 | 2020-01-16T16:09:19.657 | 2020-06-11T14:32:37.003 | -1 | 919 | null |
6326 | 1 | 6335 | null | 5 | 587 | I have two treatments A & B. Here are my groups, where X represents the appropriate control for that particular treatment:
Group 1: XX
Group 2: AX
Group 3: XB
Group 4: AB
The hypothesis is that the treatment B will have an effect, but that that effect will no longer be apparent when combined with treatment A.
So, if I run my experiment and run an ANOVA on the data, and the results of the analysis show that only Group 3 was significantly different from the others, is it correct to say that "treatment B had an effect and that effect was lost when combined with treatment A"? Or would I also need to show a significant difference between XB and AB?
| Interpreting interactions between two treatments | CC BY-SA 3.0 | null | 2011-01-18T00:45:45.837 | 2012-08-03T16:37:10.230 | 2012-08-03T16:37:10.230 | 2816 | 2816 | [
"anova"
]
|
6327 | 2 | null | 6312 | 2 | null | You're comparing "What is the probability that a positive test outcome is correct given a known prevalence and test criterion?" with "What is the sensitivity and bias of an unknown system to various signals of this type?"
It seems to me that the two both use some similar theory but they really have very different purposes. With the medical tests, the criterion is irrelevant. It can be set to a known value in many cases, so determining the criterion of the test is pointless afterwards. Signal detection theory is best for systems where the criterion is unknown. Furthermore, prevalence, or signal, tends to be a fixed (and often very small) value. With SDT you often work out a mean d' over varying signals, modelling a very complex situation as a few simple descriptors. When both the criterion and the signal are fixed, known quantities, can SDT tell you anything interesting? It seems like a lot of mathematical sophistication to deal with a fundamentally simpler problem.
| null | CC BY-SA 2.5 | null | 2011-01-18T01:23:18.533 | 2011-01-18T01:23:18.533 | null | null | 601 | null |
6328 | 2 | null | 3328 | 4 | null | If you keep the log likelihoods, you can just select the one with the highest value. Also, if your interest is primarily the mode, just doing an optimization to find the point with the highest log likelihood would suffice.
| null | CC BY-SA 2.5 | null | 2011-01-18T01:42:16.450 | 2011-01-18T01:42:16.450 | null | null | 1146 | null |
6329 | 1 | 6332 | null | 8 | 3682 | I've been using the ets() and auto.arima() functions from the [forecast package](http://robjhyndman.com/software/forecast/) to forecast a large number of univariate time series. I've been using the following function to choose between the 2 methods, but I was wondering if CrossValidated had any better (or less naive) ideas for automatic forecasting.
```
auto.ts <- function(x,ic="aic") {
XP=ets(x, ic=ic)
AR=auto.arima(x, ic=ic)
if (get(ic,AR)<get(ic,XP)) {
model<-AR
}
else {
model<-XP
}
model
}
```
/edit: What about this function?
```
auto.ts <- function(x,ic="aic",holdout=0) {
S<-start(x)[1]+(start(x)[2]-1)/frequency(x) #Convert YM vector to decimal year
E<-end(x)[1]+(end(x)[2]-1)/frequency(x)
holdout<-holdout/frequency(x) #Convert holdout in months to decimal year
fitperiod<-window(x,S,E-holdout) #Determine fit window
if (holdout==0) {
testperiod<-fitperiod
}
else {
testperiod<-window(x,E-holdout+1/frequency(x),E) #Determine test window
}
XP=ets(fitperiod, ic=ic)
AR=auto.arima(fitperiod, ic=ic)
if (holdout==0) {
AR_acc<-accuracy(AR)
XP_acc<-accuracy(XP)
}
else {
AR_acc<-accuracy(forecast(AR,holdout*frequency(x)),testperiod)
XP_acc<-accuracy(forecast(XP,holdout*frequency(x)),testperiod)
}
if (AR_acc[3]<XP_acc[3]) { #Use MAE
model<-AR
}
else {
model<-XP
}
model
}
```
The "holdout" is the number of periods you wish to use as an out of sample test. The function then calculates a fit window and a test window based on this parameter. Then it runs the auto.arima and ets functions on the fit window, and chooses the one with the lowest MAE in the test window. If the holdout is equal to 0, it tests the in-sample fit.
Is there a way to automatically update the chosen model with the complete dataset, once it has been selected?
| Combining auto.arima() and ets() from the forecast package | CC BY-SA 2.5 | null | 2011-01-18T02:20:54.247 | 2011-01-19T18:39:32.660 | 2011-01-19T18:39:32.660 | 2817 | 2817 | [
"r",
"time-series",
"forecasting",
"exponential-distribution",
"arima"
]
|
6330 | 1 | 6348 | null | 36 | 69958 | I have previously used [forecast pro](http://www.forecastpro.com/) to forecast univariate time series, but am switching my workflow over to R. The forecast package for R contains a lot of useful functions, but one thing it doesn't do is any kind of data transformation before running auto.arima(). In some cases forecast pro decides to log transform data before doing forecasts, but I haven't yet figured out why.
So my question is: when should I log-transform my time series before trying ARIMA methods on it?
/edit: after reading your answers, I'm going to use something like this, where x is my time series:
```
library(lmtest)
if (gqtest(x ~ 1)$p.value < 0.10) {
x<-log(x)
}
```
Does this make sense?
| When to log transform a time series before fitting an ARIMA model | CC BY-SA 2.5 | null | 2011-01-18T02:50:01.373 | 2018-06-19T11:57:38.263 | 2011-01-19T20:07:15.060 | 2817 | 2817 | [
"r",
"time-series",
"data-transformation",
"forecasting",
"arima"
]
|
6331 | 2 | null | 6326 | 3 | null | If in post hoc testing Group 3's mean was significantly different from all the others' then you've already shown that XB is different from AB. Am I missing something? Your statement about B's effect (and its being lost when combined with A's) would be correct.
| null | CC BY-SA 2.5 | null | 2011-01-18T03:26:40.937 | 2011-01-18T03:26:40.937 | null | null | 2669 | null |
6332 | 2 | null | 6329 | 16 | null | The likelihoods from the two model classes, and hence the AIC values, are not comparable due to different initialization assumptions. So your function is not valid. I suggest you try out the two model classes on your series and see which gives the best out-of-sample forecasts.
| null | CC BY-SA 2.5 | null | 2011-01-18T03:35:29.263 | 2011-01-18T03:35:29.263 | null | null | 159 | null |
6333 | 2 | null | 6330 | 41 | null | Plot a graph of the data against time. If it looks like the variation increases with the level of the series, take logs. Otherwise model the original data.
| null | CC BY-SA 2.5 | null | 2011-01-18T03:41:29.677 | 2011-01-18T03:41:29.677 | null | null | 159 | null |
6334 | 2 | null | 6330 | 1 | null | You might want to log-transform series when they are somehow naturally geometric or where the time value of an investment implies that you will be comparing to a minimal risk bond that has a positive return. This will make them more "linearizable", and therefore suitable for a simple differencing recurrence relationship.
| null | CC BY-SA 2.5 | null | 2011-01-18T03:45:46.210 | 2011-01-18T03:45:46.210 | null | null | 2129 | null |
6335 | 2 | null | 6326 | 6 | null | If I understand you correctly, your design is:
$\begin{array}{rcccl}
~ & B_{X} & B_{B} & M \\\hline
A_{X} & \mu_{11} & \mu_{12} & \mu_{1.} \\
A_{A} & \mu_{21} & \mu_{22} & \mu_{2.} \\\hline
M & \mu_{.1} & \mu_{.2} & \mu
\end{array}$
The first part of your hypothesis (effect of treatment B within control group of A) then means that $H_{1}^{1}: \mu_{12} - \mu_{11} > 0$.
The second part of your hypothesis (no effect of treatment B within treatment A) would then be $H_{1}^{2}: \mu_{22} - \mu_{21} = 0$.
So your composite hypothesis is $H_{1}: H_{1}^{1} \wedge H_{1}^{2}$. The problem is with the second part, because a non-significant post-hoc test for $H_{0}: \mu_{22} - \mu_{21} = 0$ doesn't mean that there is no effect - your test simply might not have enough power to detect the difference.
You could still test the hypothesis $H_{1}': (\mu_{12} - \mu_{11}) > (\mu_{22} - \mu_{21})$, i.e., an interaction contrast. However, this tests the weaker hypothesis that B has a bigger effect within A's control group than within treatment A.
I'm not sure what you mean by "the results of the analysis show that only Group 3 was significantly different than the others". I don't understand how exactly you would test that. You could test $\mu_{12} \neq \frac{1}{3} (\mu_{11} + \mu_{21} + \mu_{22})$, but that is a weaker hypothesis (Group 3 is different from the average of the remaining groups).
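For completeness, a hedged R sketch (hypothetical data, treatment coding) of how the interaction contrast in $H_{1}'$ maps onto a standard two-way ANOVA term:
```
## The A:B interaction coefficient tests whether B's effect differs between
## A's control condition and treatment A (the two-sided version of H1').
set.seed(1)
d <- expand.grid(A = factor(c("ctrl", "trtA")),
                 B = factor(c("ctrl", "trtB")),
                 rep = 1:10)
d$y <- rnorm(nrow(d), mean = ifelse(d$A == "ctrl" & d$B == "trtB", 1, 0))
fit <- lm(y ~ A * B, data = d)
summary(fit)$coefficients   # see the "AtrtA:BtrtB" row
```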
| null | CC BY-SA 2.5 | null | 2011-01-18T11:28:01.773 | 2011-01-18T11:28:01.773 | null | null | 1909 | null |
6336 | 2 | null | 6302 | 3 | null | The empirical CDF needs to be treated with care at the end points of the data, and in other places where there is "sparse" data. This is because they tend to make weak structural assumptions about what goes on "in between" each data point. It would also be a good idea to have "dots" for the empirical CDF plot rather than lines, or have the dots superimposed over the lines, so that it is easier to see where most of the data actually is. Another alternative is to put the "dots" for the data over the fitted CDF plot, although there may be too much going on in the plot.
Maybe it's a plotting difficulty, but the empirical CDF should look like a staircase or step function (horizontal lines with "jumps" at the observed values). The empirical plots above do not look this way; they appear "smoothed". Maybe they are "non-parametric" CDFs using some kind of plot smoother?
If it is a "non-parametric" CDF then you are basically comparing two models: the negative binomial and the non-parametric one.
My advice: have a separate plot for each data series (each colour on a new graph), and then put the empirical CDF as "dots" where the data were observed, and the fitted negative binomial CDF as a smooth line on the same plot. This would look similar to a regression-style scatter plot with a fitted line. An example of the kind of plot I am talking about (which has R code to create it) is here: [How to present the gain in explained variance thanks to the correlation of Y and X?](https://stats.stackexchange.com/questions/6268/how-to-present-the-gain-in-explained-variance-thanks-to-the-correlation-of-y-and/6290#6290)
| null | CC BY-SA 2.5 | null | 2011-01-18T12:42:18.770 | 2011-01-18T12:42:18.770 | 2017-04-13T12:44:54.643 | -1 | 2392 | null |
6337 | 1 | 6338 | null | 5 | 890 | I have various clinical data on participants in a study. I'm looking at a continuous variable ("A") and a (binary) categorical variable (group) ("O"). I used a Wilcoxon test in R (the data are not normally distributed) to see if "A" is significantly different between the two groups. I got a borderline p-value of 0.054.
If I run the Wilcoxon again but include only the males (30 of 72), the p-value is ~0.3; for females only it's ~0.25.
How is it possible that there is no difference in "A" between the groups for males and females separately, but when combined there is a difference?
| Subsets not significantly different but superset is | CC BY-SA 2.5 | null | 2011-01-18T13:04:50.173 | 2017-04-03T15:48:27.513 | 2017-04-03T15:48:27.513 | 101426 | 2824 | [
"r",
"statistical-significance",
"nonparametric",
"wilcoxon-mann-whitney-test"
]
|
6338 | 2 | null | 6337 | 7 | null | It seems to be a question of [test power](http://en.wikipedia.org/wiki/Statistical_power). If you only look at a subset you have far fewer participants and therefore a lot less power to find an effect of similar size.
With a reduced sample size you can only detect a much bigger effect. So it is NOT recommended to look only at the subsets in this case, unless there is an interaction (i.e., do the results point in the same direction for both men and women?).
Furthermore, there is no need to use a Wilcoxon test only because your data are not normally distributed (unless they deviate heavily). You can probably still use the t-test (for example, one of the users here, whuber, recently advocated the t-test in a similar case, because the normality assumption does not necessarily hold for the data but for the sampling distribution; quoting him: "The reason is that the sampling distributions of the means are approximately normal, even though the distributions of the data are not").
However, if you still don't want to use the t.test there are more powerful 'assumption free' parametric alternatives, especially permutation tests. See the answer to my question here (whubers quote is also from there): [Which permutation test implementation in R to use instead of t-tests (paired and non-paired)?](https://stats.stackexchange.com/questions/6127/which-permutation-test-implementation-in-r-to-use-instead-of-t-tests-paired-and)
In my case the results were even a little bit better (i.e., smaller p) than when using the t.test. So I would recommend this permutation test based on the `coin` package. I could provide you with the necessary r-commands if you provide some sample data in your question.
Update: The effect of outliers on the t-test
If you look at the help of t.test in R `?t.test`, you will find the following example:
```
t.test(1:10,y=c(7:20)) # P = .00001855
t.test(1:10,y=c(7:20, 200)) # P = .1245 -- NOT significant anymore
```
Although in the second case you have a much more extreme difference in the means, the outlier leads to the counterintuitive finding that the result is not significant anymore. Hence, a method to deal with outliers (e.g. Winsorizing, [here](http://www.unt.edu/rss/class/mike/5700/articles/robustAmerPsyc.pdf)) is recommended for parametric tests such as the t-test, if the data permit.
| null | CC BY-SA 2.5 | null | 2011-01-18T13:45:26.073 | 2011-01-18T14:49:41.990 | 2017-04-13T12:44:28.813 | -1 | 442 | null |
6339 | 2 | null | 6337 | 6 | null | This is not necessarily an issue of statistical power; it could also be an example of [confounding](http://en.wikipedia.org/wiki/Confounding).
Example:
- One category of $O$ is more common in males but the other is more common in females
- The distribution of $A$ differs between males and females
- Within each sex separately, the distribution of $A$ is exactly the same for both $O$ categories
Then there will still be an overall difference in the distribution of $A$ between the $O$ categories.
| null | CC BY-SA 2.5 | null | 2011-01-18T14:24:30.237 | 2011-01-18T14:24:30.237 | null | null | 449 | null |
6340 | 2 | null | 6146 | 3 | null | From the statement of the question, it seems as though you don't require conjugacy per se, rather you would like an analytical solution to your integration. From the form of the distribution, it would appear at first glance that the analytics of most solutions would be rather messy and difficult to interpret. The "analytic" solution would likely involve non-analytic functions (such as gamma function, beta function, confluent hypergeometric function), which may require numerical evaluation anyway.
It may be quicker to use MCMC, rejection sampling or some other numerical technique to do the integration. But this means you need to choose a prior for your parameters (ideally, one which describes what you know about them).
One choice which comes to my mind is the product of the beta-binomial prior for the "central hypergeometric" part, and a beta distribution of the second kind for the odds ratio parameter. (The beta distribution of the second kind is the distribution of the odds ratio of a beta distributed random variable, similar to an F distribution.) But this is somewhat an arbitrary choice, based only on the conjugacy for the central hypergeometric distribution, and a "heavy-tailed" distribution (possibly robust? definitely more robust than gamma or inverse gamma) for the odds ratio parameter.
Also, which parameters are you integrating out? and which parameters are you taking the likelihood of?
| null | CC BY-SA 2.5 | null | 2011-01-18T14:52:28.410 | 2011-01-18T14:52:28.410 | null | null | 2392 | null |
6342 | 1 | 6344 | null | 6 | 8600 | I have continuous data "A", binary categorical data "O", gender/sex and age for several participants in a study.
A linear model in R shows no correlation between A and age. I would now like to group A into groups by age and see if there is a difference between the groups. I know about 'hist' and 'split' in R, but these do not do what I need.
(1) How can I divide/split A into groups based on age (18 to 27, 28 to 37, etc.).
(2) Once I've done that, I would use a $\chi^2$ test?
(3) Could I also test O in the same groups, using counts?
| Splitting one variable according to bins from another variable | CC BY-SA 2.5 | null | 2011-01-18T15:01:29.900 | 2011-01-18T15:31:51.173 | 2011-01-18T15:07:26.673 | 919 | 2824 | [
"r",
"regression",
"histogram"
]
|
6343 | 2 | null | 5542 | 4 | null | A useful way to incorporate data into a prior distribution is the principle of maximum entropy. You basically provide constraints that the prior distribution is to satisfy (e.g. mean, variance, etc.) and then choose the distribution that is most "spread out" among those satisfying the constraints.
Under moment constraints of the form $E[f_i(X)]=c_i$, the maximum-entropy distribution generally has the form $p(x) \propto \exp\left(\sum_i \lambda_i f_i(x)\right)$, where the $\lambda_i$ are Lagrange multipliers determined by the constraints.
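For example (a standard textbook case, not part of the original answer): fixing the mean and the variance on the whole real line amounts to constraints on $E[X]$ and $E[X^2]$, so
$$p(x) \propto \exp(\lambda_1 x + \lambda_2 x^2),$$
which, once the multipliers are matched to the constraints, is exactly the normal distribution with that mean and variance.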
Edwin Jaynes was the originator of this principle, so searching for his work is a good place to start.
See the wiki page (http://en.wikipedia.org/wiki/Principle_of_maximum_entropy) and links therein for a more detailed description.
| null | CC BY-SA 2.5 | null | 2011-01-18T15:16:19.887 | 2011-01-18T15:16:19.887 | null | null | 2392 | null |
6344 | 2 | null | 6342 | 5 | null | ```
> A <- round(rnorm(100, 100, 15), 2) # generate some data
> age <- sample(18:65, 100, replace=TRUE)
> sex <- factor(sample(0:1, 100, replace=TRUE), labels=c("f", "m"))
# 1) bin age into 4 groups of similar size
> ageFac <- cut(age, breaks=quantile(age, probs=seq(from=0, to=1, by=0.25)),
+ include.lowest=TRUE)
> head(ageFac)
[1] (26,36.5] (26,36.5] (36.5,47] [18,26] [18,26] [18,26]
Levels: [18,26] (26,36.5] (36.5,47] (47,65]
> table(ageFac) # check group size
ageFac
[18,26] (26,36.5] (36.5,47] (47,65]
27 23 26 24
# 2) test continuous DV in age-groups
> anova(lm(A ~ ageFac))
Analysis of Variance Table
Response: A
Df Sum Sq Mean Sq F value Pr(>F)
ageFac 3 15.8 5.272 0.0229 0.9953
Residuals 96 22099.2 230.200
# 3) chi^2-test for equal distributions of sex in age-groups
> addmargins(table(sex, ageFac))
ageFac
sex [18,26] (26,36.5] (36.5,47] (47,65] Sum
f 11 10 12 11 44
m 16 13 14 13 56
Sum 27 23 26 24 100
> chisq.test(table(sex, ageFac))
Pearson's Chi-squared test
data: table(sex, ageFac)
X-squared = 0.2006, df = 3, p-value = 0.9775
```
| null | CC BY-SA 2.5 | null | 2011-01-18T15:24:02.170 | 2011-01-18T15:31:51.173 | 2011-01-18T15:31:51.173 | 1909 | 1909 | null |
6345 | 2 | null | 6318 | 4 | null | The main advantages of a Bayesian approach, at least to me as a researcher in Psychology are:
1) lets you accumulate evidence in favor of the null
2) circumvents the theoretical and practical problems of sequential testing
3) is not vulnerable to reject a null just because of a huge N (see previous point)
4) is better suited when working with small effects (with large effects both Frequentist and Bayesian methods tend to agree pretty much all the time)
5) allows one to do hierarchical modeling in a feasible way. For instance, introducing item and participant effects in some model classes like Multinomial Processing Tree models would need to be done in a Bayesian framework otherwise computing time would be insanely long.
6) gets you "real" confidence intervals
7) You require 3 things: the likelihood, the priors, and the probability of the data. The first you get from your data, the second you make up, and the third you don't need at all given proportionality. Ok, maybe I exaggerate a little ;-)
Overall, one can invert your question: Does this all mean that classical frequentist stats are not sufficient? I think that saying "no" is too harsh a verdict. Most problems can be somewhat avoided if one goes beyond p-values and looks at stuff like effect sizes, the possibility of item effects, and consistently replicate findings (too many one-experiment papers get published!).
But not everything is that easy with Bayes. Take for instance model selection with non-nested models. In these cases, the priors are extremely important as they greatly affect the results, and sometimes you don't have that much knowledge about most of the models you want to work with in order to get your priors right. Also, it takes reaaaally long....
I leave two references for anybody that might be interested in diving into Bayes.
["A Course in Bayesian Graphical Modeling for Cognitive Science"](http://www.ejwagenmakers.com/BayesCourse/BayesBookWeb.pdf) by Lee and Wagenmakers
["Bayesian Modeling Using WinBUGS"](http://rads.stackoverflow.com/amzn/click/047014114X) by Ntzoufras
| null | CC BY-SA 2.5 | null | 2011-01-18T15:31:45.893 | 2011-01-18T15:31:45.893 | null | null | 447 | null |
6347 | 1 | null | null | 3 | 862 | PCA based filtering is used to identify and eliminate noise in data. This would basically involve computing the PCs and using the top k PCs to denoise the data. What if I know for sure that only the extremely small values in my matrix are noise? Now, a value may be small w.r.t the entire matrix but not small w.r.t a particular row/column. Can I achieve this with some transformation of the input matrix followed by a PCA based filter?
| PCA Based Filtering but only filter out small values | CC BY-SA 2.5 | null | 2011-01-18T16:19:49.377 | 2011-01-28T00:18:46.790 | null | null | 2806 | [
"pca"
]
|
6348 | 2 | null | 6330 | 23 | null | Some caveats before proceeding. As I often suggest to my students, use `auto.arima()` only as a first approximation to your final result, or if you want a parsimonious model to check whether your rival theory-based model does better.
Data
You clearly have to start from a description of the time series data you are working with. In macro-econometrics you usually work with aggregated data, and geometric means (surprisingly) have more empirical support for macro time series data, probably because most of them are decomposable into an exponentially growing trend.
By the way, Rob's "visual" suggestion works well for time series with a clear seasonal part; for slowly varying annual data, the increase in variation is less obvious. Luckily, an exponentially growing trend is usually easy to see (if it seems to be linear, then there is no need for logs).
Model
If your analysis is based on some theory which states that a weighted geometric mean, $Y(t) = X_1^{\alpha_1}(t)\cdots X_k^{\alpha_k}(t)\varepsilon(t)$, better known as the multiplicative regression model, is the one you have to work with, then you usually move to a log-log regression model, which is linear in the parameters; most of your variables, apart from some growth rates, are log-transformed.
In financial econometrics logs are a common thing due to the popularity of log-returns, because...
Log transformations have nice properties
In a log-log regression model the estimated parameter, say $\alpha_i$, is interpreted as the elasticity of $Y(t)$ with respect to $X_i(t)$.
In error-correction models we have an empirically stronger assumption that proportions are more stable (stationary) than the absolute differences.
In financial econometrics it is easy to aggregate the log-returns over time.
There are many other reasons not mentioned here.
Finally
Note that log-transformation is usually applied to non-negative (level) variables. If you observe the differences of two time series (net export, for instance) it is not even possible to take the log, you have either to search for original data in levels or assume the form of common trend that was subtracted.
[addition after edit] If you still want a statistical criterion for when to apply the log transformation, a simple solution would be any test for heteroscedasticity. In the case of increasing variance I would recommend the [Goldfeld–Quandt test](http://en.wikipedia.org/wiki/Goldfeld%E2%80%93Quandt_test) or something similar. In R it is available in `library(lmtest)` as the `gqtest()` function. Simply regress on an intercept term, `gqtest(y ~ 1)`, if you don't have any regression model; here `y` is your dependent variable.
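For illustration, here is a minimal R sketch of that criterion; the series `y` below is simulated, so the numbers are only an assumption:

```r
# Simulated level series with an exponentially growing trend
library(lmtest)

set.seed(1)
y <- ts(exp(0.02 * (1:200) + rnorm(200, sd = 0.1)))

gqtest(y ~ 1)       # variance grows with the level -> a small p-value suggests taking logs
gqtest(log(y) ~ 1)  # after the log transformation the variance should look stable
```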
| null | CC BY-SA 2.5 | null | 2011-01-18T18:43:46.330 | 2011-01-19T09:49:07.870 | 2011-01-19T09:49:07.870 | 2645 | 2645 | null |
6350 | 1 | 6351 | null | 59 | 245584 | The [Wikipedia page on ANOVA lists three assumptions](http://en.wikipedia.org/wiki/Anova#Assumptions_of_ANOVA), namely:
- Independence of cases – this is an assumption of the model that simplifies the statistical analysis.
- Normality – the distributions of the residuals are normal.
- Equality (or "homogeneity") of variances, called homoscedasticity...
The point of interest here is the second assumption. Several sources state the assumption differently: some say normality of the raw data, some say normality of the residuals.
Several questions pop up:
- are normality and normal distribution of residuals the same thing (based on the Wikipedia entry, I would claim normality is a property, and does not pertain to residuals directly (but can be a property of residuals (deeply nested text within brackets, freaky)))?
- if not, which assumption should hold? One? Both?
- if the assumption of normally distributed residuals is the right one, are we making a grave mistake by checking only the histogram of raw values for normality?
| ANOVA assumption normality/normal distribution of residuals | CC BY-SA 2.5 | null | 2011-01-18T19:07:59.347 | 2021-07-05T18:41:58.707 | 2021-07-05T18:41:58.707 | 11887 | 144 | [
"anova",
"residuals",
"normality-assumption",
"assumptions",
"faq"
]
|
6351 | 2 | null | 6350 | 43 | null | Let's assume this is a [fixed effects](http://en.wikipedia.org/wiki/Analysis_of_variance#Fixed-effects_models_.28Model_1.29) model. (The advice doesn't really change for random-effects models, it just gets a little more complicated.)
First let us distinguish the "residuals" from the "errors:" the former are the differences between the responses and their predicted values, while the latter are random variables in the model. With sufficiently large amounts of data and a good fitting procedure, the distributions of the residuals will approximately look like the residuals were drawn randomly from the error distribution (and will therefore give you good information about the properties of that distribution).
The assumptions, therefore, are about the errors, not the residuals.
- No, normality (of the responses) and normal distribution of errors are not the same. Suppose you measured yield from a crop with and without a fertilizer application. In plots without fertilizer the yield ranged from 70 to 130. In plots with fertilizer the yield ranged from 470 to 530. The distribution of results is strongly non-normal: it's clustered at two locations related to the fertilizer application. Suppose further the average yields are 100 and 500, respectively. Then all residuals range from -30 to +30, and so the errors will be expected to have a comparable distribution. The errors might (or might not) be normally distributed, but obviously their distribution is completely different from that of the responses.
- The distribution of the residuals matters, because those reflect the errors, which are the random part of the model. Note also that the p-values are computed from F (or t) statistics and those depend on residuals, not on the original values.
- If there are significant and important effects in the data (as in this example), then you might be making a "grave" mistake. You could, by luck, make the correct determination: that is, by looking at the raw data you will be seeing a mixture of distributions, and this mixture can look normal (or not). The point is that what you are looking at is not relevant.
ANOVA residuals don't have to be anywhere close to normal in order to fit the model. However, unless you have an enormous amount of data, near-normality of the residuals is essential for p-values computed from the F-distribution to be meaningful.
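A small simulated R sketch of the fertilizer example above (the numbers are made up to roughly match the stated ranges) shows how the raw responses can be clearly bimodal while the residuals look approximately normal:

```r
# Simulated yields: two groups with very different means but the same error distribution
set.seed(42)
yield <- c(rnorm(50, mean = 100, sd = 10),   # plots without fertilizer
           rnorm(50, mean = 500, sd = 10))   # plots with fertilizer
group <- factor(rep(c("none", "fertilizer"), each = 50))

hist(yield)                       # strongly bimodal -- and not supposed to be normal
fit <- aov(yield ~ group)
hist(residuals(fit))              # roughly normal: this is what the assumption refers to
qqnorm(residuals(fit)); qqline(residuals(fit))
```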
| null | CC BY-SA 4.0 | null | 2011-01-18T19:45:40.100 | 2020-09-29T21:09:34.100 | 2020-09-29T21:09:34.100 | 919 | 919 | null |
6352 | 2 | null | 6350 | 5 | null | In the one-way case with $p$ groups of size $n_{j}$:
$F = \frac{SS_{b} / df_{b}}{SS_{w} / df_{w}}$ where
$SS_{b} = \sum_{j=1}^{p}{n_{j} (M - M_{j}})^{2}$ and
$SS_{w} = \sum_{j=1}^{p}\sum_{i=1}^{n_{j}}{(y_{ij} - M_{j})^{2}}$
$F$ follows an $F$-distribution if $SS_{b} / df_{b}$ and $SS_{w} / df_{w}$ are independent, $\chi^{2}$-distributed variables with $df_{b}$ and $df_{w}$ degrees of freedom, respectively. This is the case when $SS_{b}$ and $SS_{w}$ are the sum of squared independent normal variables with mean $0$ and equal scale. Thus $M-M_{j}$ and $y_{ij}-M_{j}$ must be normally distributed.
$y_{i(j)} - M_{j}$ is the residual from the full model ($Y = \mu_{j} + \epsilon = \mu + \alpha_{j} + \epsilon$), $y_{i(j)} - M$ is the residual from the restricted model ($Y = \mu + \epsilon$). The difference of these residuals is $M - M_{j}$.
EDIT to reflect clarification by @onestop: under $H_{0}$ all true group means are equal (and thus equal to $M$), thus normality of the group-level residuals $y_{i(j)} - M_{j}$ implies normality of $M - M_{j}$ as well. The DV values themselves need not be normally distributed.
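To make the formulas concrete, here is a small R sketch (with simulated data, so all numbers are assumptions) that computes $SS_{b}$, $SS_{w}$ and $F$ by hand and checks them against `aov()`:

```r
# One-way layout with p = 3 groups, computed "by hand" and via aov()
set.seed(1)
p  <- 3
nj <- c(10, 12, 15)
y  <- unlist(lapply(1:p, function(j) rnorm(nj[j], mean = j, sd = 1)))
g  <- factor(rep(1:p, times = nj))

M   <- mean(y)                       # grand mean
Mj  <- tapply(y, g, mean)            # group means M_j
SSb <- sum(nj * (Mj - M)^2)
SSw <- sum((y - ave(y, g))^2)        # ave(y, g) gives each observation's group mean
dfb <- p - 1
dfw <- sum(nj) - p
Fstat <- (SSb / dfb) / (SSw / dfw)

Fstat
summary(aov(y ~ g))                  # the F value should match Fstat
```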
| null | CC BY-SA 2.5 | null | 2011-01-18T20:01:21.290 | 2011-01-19T09:43:20.550 | 2011-01-19T09:43:20.550 | 1909 | 1909 | null |
6353 | 1 | null | null | 12 | 53475 | It looks like you can use coding for one categorical variable, but I have two categorical predictors and one continuous predictor variable. Can I use multiple regression for this in SPSS and, if so, how?
Thanks!
| Can I use multiple regression when I have mixed categorical and continuous predictors? | CC BY-SA 2.5 | null | 2011-01-18T20:04:35.030 | 2016-03-17T08:02:16.877 | 2011-01-19T00:38:17.517 | null | null | [
"regression",
"spss",
"categorical-data",
"continuous-data"
]
|
6354 | 1 | null | null | 3 | 867 | Most methods for [symbolic data analysis](http://eu.wiley.com/WileyCDA/WileyTitle/productCd-0470090162.html) are currently implemented in the SODAS software.
Are there any R packages for symbolic data except clamix and clusterSim?
| R package for symbolic data analysis | CC BY-SA 2.5 | null | 2011-01-18T20:46:44.443 | 2011-01-21T18:46:58.413 | null | null | 2831 | [
"r",
"clustering"
]
|