Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1839 | 2 | null | 1829 | 4 | null | I tend to think of 'least squares' as a criterion for defining the best fitting regression line (i.e., that which makes the sum of 'squared' residuals 'least') and the 'algorithm' in this context as the set of steps used to determine the regression coefficients that satisfy that criterion. This distinction suggests that it is possible to have different algorithms that would satisfy the same criterion.
I'd be curious to know whether others make this distinction and what terminology they use.
| null | CC BY-SA 2.5 | null | 2010-08-18T14:57:00.733 | 2010-08-18T14:57:00.733 | null | null | 183 | null |
1840 | 2 | null | 1826 | 6 | null | Let's say you investigate some process; you've gathered some data describing it and you have built a model (either statistical or ML, it doesn't matter). But now, how do you judge whether it is any good? It probably fits suspiciously well to the data it was built on, so no one will believe that your model is as splendid as you think.
The first idea is to set aside a subset of your data and use it to test the model built by your method on the rest of the data. Now the result is definitely overfitting-free; nevertheless (especially for small sets) you could have been (un)lucky and drawn simpler (or harder) cases to test on, making them easier (or harder) to predict... Also, a single accuracy/error/goodness estimate is of little use for model comparison/optimization, since you probably know nothing about its distribution.
When in doubt, use brute force: just replicate the above process, gather a number of accuracy/error/goodness estimates and average them -- and so you obtain cross-validation. Besides a better point estimate you also get a histogram, so you can approximate the distribution or perform some non-parametric tests.
And that is it; the details of the train-test splitting are the reason for the different CV types, yet, except for rare cases and small differences in power, they are rather equivalent. This is in fact a huge advantage, because it makes CV a bulletproof-fair method; it is very hard to cheat it.
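A minimal R sketch of the replicate-and-average idea (the data, model, and split size below are invented purely for illustration):
```
set.seed(1)
n   <- 100
dat <- data.frame(x = rnorm(n))
dat$y <- 2 * dat$x + rnorm(n)

# repeated random train/test splits; collect one error estimate per replicate
errs <- replicate(1000, {
  test <- sample(n, 20)                               # hold out 20 cases
  fit  <- lm(y ~ x, data = dat[-test, ])              # build on the rest
  mean((dat$y[test] - predict(fit, dat[test, ]))^2)   # test MSE
})
mean(errs)   # averaged estimate
hist(errs)   # ...and its distribution, as mentioned above
```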
| null | CC BY-SA 2.5 | null | 2010-08-18T15:14:02.023 | 2010-08-18T15:14:02.023 | null | null | null | null |
1841 | 1 | 2057 | null | 8 | 662 | Disclaimer: I'm a software engineer, not a statistician, so please forgive any blunt errors :-)
I have a set of time-series "curves", each measuring the entropy of a given artifact. Now, I'm standing over the following premises (please criticize them as you see fit):
- In order to approximate an upper bound on the Kolmogorov complexity $K(s)$ of a string $s$, one can simply compress the string $s$ with some method, implement the corresponding decompressor in the chosen language, concatenate the decompressor with the compressed string, and measure the resulting string's length.
- For this purpose, I've used the bzip2 application, setting its compression level to the supported maximum (-9).
- If one is only interested in a time-series analysis of a set of evolving strings, calculating the compressed deltas is enough to present a relative measure of entropy between any two strings (at least that's my interpretation after reading Cilibrasi05).
- For that, I used the diff unix tool, with the (--minimal) parameter, again followed by a bzip2 compression, with the aforementioned settings.
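For illustration only, here is a rough R sketch of the compression idea in the first premise, using base R's `memCompress` (bzip2) instead of the external bzip2 binary; the strings are made-up stand-ins for artifact snapshots, and the decompressor length is ignored:
```
# Compressed length (bytes) as a crude upper-bound proxy for K(s)
compressed_len <- function(s) length(memCompress(charToRaw(s), type = "bzip2"))

s1 <- paste(rep("abc", 200), collapse = "")                        # very regular
s2 <- paste(sample(letters, 600, replace = TRUE), collapse = "")   # little structure
compressed_len(s1)   # small: bzip2 finds lots of redundancy
compressed_len(s2)   # larger: not much to compress
```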
I'm doing this to analyze the evolution of the entropy in a software artifact (code, model, whatever). I'm not worried about the absolute values, but about the relative increase (or decrease) in entropy. Now here comes the problem:
- I've done this for a set of 6 artifacts, which ought to belong to the same population, but I don't know how to provide statistical evidence of that (the equivalent of doing a two-tailed t-test on two samples).
- One artifact's evolution should be different from all the others. We're talking something like exponential vs. sub-linear growth. How do I provide statistical evidence of that?
Again, the disclaimer of being a software engineer applies. Although I would appreciate any academic reference (papers, books, etc.) you could offer, I'm looking for something pragmatic that I can use in the next few days, like a script in R, or something in SPSS.
P.S. I'm sorry for asking for a recipe, instead of a theoretical explanation.
| Hypothesis testing that one time-series of a measure of entropy doesn't belong to a population | CC BY-SA 2.5 | null | 2010-08-18T15:19:44.807 | 2010-08-25T13:52:20.933 | 2010-08-18T16:00:37.460 | 990 | 990 | [
"r",
"time-series",
"hypothesis-testing",
"spss",
"entropy"
] |
1842 | 2 | null | 1838 | 12 | null | I think you can get what you want using `ggplot2`. Using the code below, I can produce:

Obviously things like line colours can be changed to whatever you want. On the x-axis I specified major gridlines on years and minor gridlines on months.
```
require(ggplot2)
t = as.Date(0:1000, origin="2008-01-01")
y1 = rexp(1001)
y2 = cumsum(y1)
df = data.frame(t=t, values=c(y2,y1), type=rep(c("Bytes", "Changes"), each=1001))
g = ggplot(data=df, aes(x=t, y=values)) +
geom_line() +
facet_grid(type ~ ., scales="free") +
scale_y_continuous(trans="log10") +
scale_x_date(major="years", minor="months") +
ylab("Log values")
g
```
| null | CC BY-SA 2.5 | null | 2010-08-18T15:25:59.757 | 2010-08-18T15:25:59.757 | null | null | 8 | null |
1843 | 2 | null | 1838 | 16 | null | You can use `par(new=TRUE)` to plot into the same graph using two different y-axes! This should also solve your problem.
Next you will find a simple example that plots two random normal variables, one with mean 0 and the other with mean 100 (both with sd = 1), in the same plot. The first one is in red on the left y-axis, the second one in blue on the right y-axis. Then, axis labels are added.
Here you go:
```
x <- 1:10
y1 <- rnorm(10)
y2 <- rnorm(10)+100
plot(x,y1,pch=0,type="b",col="red",yaxt="n",ylim=c(-8,2))
par(new=TRUE)
plot(x,y2,pch=1,type="b",col="blue",yaxt="n",ylim=c(98,105))
axis(side=2)
axis(side=4)
```
looks like this then (remember red on left axis, blue on right axis): 
UPDATE:
Based on comments I produced an updated version of my graph. Now I dig a little deeper into base graphics functionality, using `par(mar=c(a,b,c,d))` to create a bigger margin around the graph (needed for the right axis label), `mtext` to show the axis labels, and a more advanced use of the `axis` function:
```
x <- 1:100
y1 <- rnorm(100)
y2 <- rnorm(100)+100
par(mar=c(5,5,5,5))
plot(x,y1,pch=0,type="b",col="red",yaxt="n",ylim=c(-8,2),ylab="")
axis(side=2, at=c(-2,0,2))
mtext("red line", side = 2, line=2.5, at=0)
par(new=TRUE)
plot(x,y2,pch=1,type="b",col="blue",yaxt="n",ylim=c(98,108), ylab="")
axis(side=4, at=c(98,100,102), labels=c("98%","100%","102%"))
mtext("blue line", side=4, line=2.5, at=100)
```

As you see it is pretty straightforward. You can define the position of your data with `ylim` in the `plot` function, then use `at` in the `axis` function to select which axis ticks you want to see. Furthermore, you can even provide the labels for the axis ticks (pretty useful for a nominal x-axis) via `labels` in the `axis` function (done here on the right axis). To add axis labels, use `mtext` with `at` for vertical positioning (`line` for horizontal positioning).
Make sure to check `?plot`, `?par`, `?axis`, and `?mtext` for further info.
Great web resources are: [Quick-R](http://www.statmethods.net/index.html) for Graphs: [1](http://www.statmethods.net/advgraphs/parameters.html), [2](http://www.statmethods.net/graphs/index.html), and [3](http://www.statmethods.net/advgraphs/index.html).
| null | CC BY-SA 3.0 | null | 2010-08-18T15:43:03.723 | 2013-11-16T01:50:25.777 | 2013-11-16T01:50:25.777 | 442 | 442 | null |
1844 | 1 | 2053 | null | 18 | 12148 | One way to summarize the comparison of two survival curves is to compute the hazard ratio (HR). There are (at least) two methods to compute this value.
- Logrank method. As part of the Kaplan-Meier calculations, compute the number of observed events (deaths, usually) in each group ($Oa$, and $Ob$), and the number of expected events assuming a null hypothesis of no difference in survival ($Ea$ and $Eb$). The hazard ratio then is:
$$
HR= \frac{(Oa/Ea)}{(Ob/Eb)}
$$
- Mantel-Haenszel method. First compute V, which is the sum of the hypergeometric variances at each time point. Then compute the hazard ratio as:
$$
HR= \exp\left(\frac{(Oa-Ea)}{V}\right)
$$
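(For reference, here are the two formulas above written out in R; the observed/expected counts and the variance V are invented numbers, purely to show the calculation:)
```
Oa <- 20; Ea <- 14.3   # observed and expected events, group a (made up)
Ob <- 11; Eb <- 16.7   # observed and expected events, group b (made up)
V  <- 7.5              # sum of hypergeometric variances over event times (made up)

HR_logrank <- (Oa / Ea) / (Ob / Eb)
HR_MH      <- exp((Oa - Ea) / V)
c(logrank = HR_logrank, mantel_haenszel = HR_MH)
```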
I got both these equations from chapter 3 of Machin, Cheung and Parmar, Survival Analysis. That book states that the two methods usually give very similar results, and indeed that is the case with the example in the book.
Someone sent me an example where the two methods differ by a factor of three. In this particular example, it is obvious that the logrank estimate is sensible, and the Mantel-Haenszel estimate is far off. My question is if anyone has any general advice for when it is best to choose the logrank estimate of the hazard ratio, and when it is best to choose the Mantel-Haenszel estimate? Does it have to do with sample size? Number of ties? Ratio of sample sizes?
| What are the pros and cons of using the logrank vs. the Mantel-Haenszel method for computing the Hazard Ratio in survival analysis? | CC BY-SA 3.0 | null | 2010-08-18T15:47:15.627 | 2013-05-25T15:42:51.127 | 2013-05-25T15:42:51.127 | 7290 | 25 | [
"survival",
"hazard"
] |
1845 | 2 | null | 1829 | 35 | null | Regarding the question in the title, about what is the algorithm that is used:
From a linear algebra perspective, the linear regression algorithm is a way to solve a linear system $\mathbf{A}x=b$ with more equations than unknowns. In most cases there is no exact solution to this problem, because the vector $b$ doesn't belong to the column space of $\mathbf{A}$, $C(\mathbf{A})$.
The `best straight line` is the one that makes the overall error $e=\mathbf{A}x-b$ as small as possible. It is convenient to measure smallness by the squared length, $\lVert e \rVert^2$, because it is non-negative and equals 0 only when $b\in C(\mathbf{A})$.
Projecting (orthogonally) the vector $b$ onto the nearest point in the column space of $\mathbf{A}$ gives the vector $b^*$ that solves the system (its components lie on the best straight line) with the minimum error.
$\mathbf{A}^T\mathbf{A}\hat{x}=\mathbf{A}^Tb \Rightarrow \hat{x}=(\mathbf{A}^T\mathbf{A})^{-1}\mathbf{A}^Tb$
and the projected vector $b^*$ is given by:
$b^*=\mathbf{A}\hat{x}=\mathbf{A}(\mathbf{A}^T\mathbf{A})^{-1}\mathbf{A}^Tb$
Perhaps the least squares method is not used exclusively because that `squaring` overweights outliers.
Let me give a simple example in R, that solves the regression problem using this algorithm:
```
library(fBasics)
reg.data <- read.table(textConnection("
b x
12 0
10 1
8 2
11 3
6 4
7 5
2 6
3 7
3 8 "), header = T)
attach(reg.data)
A <- model.matrix(b~x)
# intercept and slope
inv(t(A) %*% A) %*% t(A) %*% b
# fitted values - the projected vector b in the C(A)
A %*% inv(t(A) %*%A ) %*% t(A) %*% b
# The projection is easier if the orthogonal matrix Q is used,
# because t(Q)%*%Q = I
Q <- qr.Q(qr(A))
R <- qr.R(qr(A))
# intercept and slope
best.line <- inv(R) %*% t(Q) %*% b
# fitted values
Q %*% t(Q) %*% b
plot(x,b,pch=16)
abline(best.line[1],best.line[2])
```
| null | CC BY-SA 3.0 | null | 2010-08-18T16:19:28.490 | 2011-12-04T07:53:22.947 | 2011-12-04T07:53:22.947 | 2914 | 339 | null |
1846 | 2 | null | 1376 | 2 | null | Some robust alternatives are discussed in [A class of robust stepwise alternatives to Hotelling's T^2 tests](http://dx.doi.org/10.1080/02664760050076434), which deals with trimmed means of the marginals of residuals produced by stepwise regression, and in [A comparison of robust alternatives to Hotelling's T^2 control chart](http://dx.doi.org/10.1080/02664760902810813), which outlines some robust alternatives based on MVE, MCD, RMCD and trimmed means.
| null | CC BY-SA 2.5 | null | 2010-08-18T16:28:57.907 | 2010-08-18T16:28:57.907 | null | null | 795 | null |
1847 | 2 | null | 1787 | 3 | null | Have you thought about using the simulation function (`sim`) in the arm package? Gelman & Hill have some nice chapters on this in their book.
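For illustration, a minimal sketch of that approach (this assumes the function meant is `sim()` from the arm package, as described in Gelman & Hill; the model and data here are placeholders, and accessor details may differ between arm versions):
```
library(arm)
fit  <- lm(mpg ~ wt + hp, data = mtcars)   # any fitted model of interest
sims <- sim(fit, n.sims = 1000)            # simulate draws of the coefficients
# coef() on the "sim" object is assumed to return the matrix of simulated
# coefficients; use it to get simulation-based intervals
apply(coef(sims), 2, quantile, probs = c(0.025, 0.975))
```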
| null | CC BY-SA 2.5 | null | 2010-08-18T16:38:48.443 | 2010-08-18T16:38:48.443 | null | null | 101 | null |
1848 | 1 | 1849 | null | 9 | 2126 | I'm implementing a rating system to be used on my website, and I think the Bayesian average is the best way to go about it. Every item will be rated in six different categories by the users. I don't want items with only one high rating to shoot to the top though, which is why I want to implement a Bayesian system.
Here is the formula:
```
Bayesian Rating = ( (avg_num_votes * avg_rating) + (this_num_votes * this_rating) ) / (avg_num_votes + this_num_votes)
```
Because the items will be rated in 6 different categories, should I use the average of the sums of those categories as "this_rating" for the Bayesian system? For instance, take one item with two ratings (scale of 0-5):
```
Rating 1:
Category A: 3
Category B: 1
Category C: 2
Category D: 4
Category E: 5
Category F: 3
Sum: 18
Rating 2:
Category A: 2
Category B: 3
Category C: 3
Category D: 5
Category E: 0
Category F: 1
Sum: 14
```
Should "this_rating" be simply the average of the sums listed above? Is my thinking correct, or should a Bayesian system be implemented for each category as well (or is that overthinking it)?
| Bayesian rating system with multiple categories for each rating | CC BY-SA 2.5 | null | 2010-08-18T16:43:11.870 | 2014-01-31T15:08:16.867 | null | null | 991 | [
"bayesian"
] |
1849 | 2 | null | 1848 | 6 | null | It depends on whether you want to wind up only with a cumulative rating of each object, or category-specific rating. Having a separate system in each category sounds more realistic, but your particular context might suggest otherwise. You could even do both a category-specific and overall rating!
| null | CC BY-SA 2.5 | null | 2010-08-18T16:58:10.540 | 2010-08-18T16:58:10.540 | null | null | 279 | null |
1850 | 1 | 1930 | null | 32 | 91771 | For an effect size analysis, I am noticing that there are differences between Cohen's d, Hedges' g and Hedges' g*.
- Are these three metrics normally very similar?
- What would be a case where they would produce different results?
- Also is it a matter of preference which I use or report with?
| Difference between Cohen's d and Hedges' g for effect size metrics | CC BY-SA 2.5 | null | 2010-08-18T17:35:17.280 | 2020-11-03T20:02:33.500 | 2010-08-19T08:02:01.917 | 183 | 559 | [
"effect-size",
"cohens-d"
] |
1852 | 2 | null | 1807 | 3 | null | Suppose there are 999 workers at the ACME north factory each making a wage of 112, and 1 CEO making 88112. The population mean salary is $\mu = 0.999 * 112 + 0.001 * 88112 = 200.$ The probability of drawing the CEO in a sample of 49 people from the factory is $49 / 1000 < 0.05$ (this is from the hypergeometric distribution), thus with probability greater than 95% your sample mean will be 112. In fact, by adjusting the ratio of workers to CEOs and the salary of the CEO, we can make it arbitrarily unlikely that a sample of 49 employees will include the CEO, while keeping the population mean at 200 and the typical sample mean at 112. Thus, without making some assumptions about the underlying distribution, you cannot draw any inference about the population mean.
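A quick simulation (R, using exactly the numbers above) makes this concrete:
```
salaries <- c(rep(112, 999), 88112)   # population mean is 200
mean(salaries)
sample_means <- replicate(10000, mean(sample(salaries, 49)))
mean(sample_means == 112)             # ~95% of samples miss the CEO entirely
```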
| null | CC BY-SA 2.5 | null | 2010-08-18T18:21:12.870 | 2010-08-19T17:09:15.637 | 2010-08-19T17:09:15.637 | 795 | 795 | null |
1853 | 1 | 1953 | null | 13 | 2510 | What tests are available for testing two independent samples for the null hypothesis that they come from populations with the same skew? There is a classical 1-sample test for whether the skew equals a fixed number (the test involves the 6th sample moment!); is there a straightforward translation to a 2-sample test?
Are there techniques which don't involve very high moments of the data? (I am anticipating an answer of the form 'bootstrap it': are bootstrap techniques known to be appropriate for this problem?)
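(To make the bootstrap option concrete, here is a rough R sketch with invented data and a naive moment-based skewness estimator; whether this is actually appropriate is exactly what the question asks:)
```
skew <- function(x) mean((x - mean(x))^3) / sd(x)^3   # simple moment estimator

set.seed(1)
x <- rexp(100); y <- rexp(120)   # two hypothetical samples
obs_diff <- skew(x) - skew(y)

# bootstrap the difference in skewness by resampling within each sample
boot_diff <- replicate(2000,
  skew(sample(x, replace = TRUE)) - skew(sample(y, replace = TRUE)))
quantile(boot_diff, c(0.025, 0.975))   # does the interval cover 0?
```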
| Testing two independent samples for null of same skew? | CC BY-SA 2.5 | null | 2010-08-18T18:49:48.227 | 2015-09-16T07:58:23.800 | 2015-09-16T07:58:23.800 | 11887 | 795 | [
"hypothesis-testing",
"distributions",
"bootstrap",
"moments",
"l-moments"
] |
1854 | 2 | null | 1826 | 10 | null | "Avoid learning your training data by heart by making sure the trained model performs well on independent data."
| null | CC BY-SA 2.5 | null | 2010-08-18T19:09:37.983 | 2010-08-18T19:09:37.983 | null | null | 961 | null |
1855 | 2 | null | 1797 | 7 | null | It has the same meaning as any other confidence interval: under the assumption that the model is correct, if the experiment and procedure is repeated over and over, 95% of the time the true value of the quantity of interest will lie within the interval. In this case, the quantity of interest is the expected value of the response variable.
It is probably easiest to explain this in the context of a linear model (mixed models are just an extension of this, so the same ideas apply):
The usual assumption is that:
$y_i = X_{i1} \beta_1 + X_{i2} \beta_2 + \ldots + X_{ip} \beta_p + \epsilon_i $
where $y_i$ is the response, the $X_{ij}$'s are the covariates, the $\beta_j$'s are the parameters, and $\epsilon_i$ is the error term, which has mean zero. The quantity of interest is then:
$E[y_i] = X_{i1} \beta_1 + X_{i2} \beta_2 + \ldots + X_{ip} \beta_p $
which is a linear function of the (unknown) parameters, since the covariates are known (and fixed). Since we know the sampling distribution of the parameter vector, we can easily calculate the sampling distribution (and hence the confidence interval) of this quantity.
So why would you want to know it? I guess if you're doing out-of-sample prediction, it could tell you how good your forecast is expected to be (though you'd need to take into account model uncertainty).
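In the plain linear model case, this is the interval that `predict(..., interval = "confidence")` returns in R (sketch with invented data):
```
set.seed(1)
d   <- data.frame(x = runif(50))
d$y <- 1 + 2 * d$x + rnorm(50, sd = 0.5)
fit <- lm(y ~ x, data = d)

# 95% confidence interval for E[y] at two new covariate values
predict(fit, newdata = data.frame(x = c(0.2, 0.8)), interval = "confidence")
```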
| null | CC BY-SA 2.5 | null | 2010-08-18T19:16:05.037 | 2010-08-18T19:16:05.037 | null | null | 495 | null |
1856 | 1 | 2888 | null | 17 | 4618 | What do you think about applying machine learning techniques, like Random Forests or penalized regression (with L1 or L2 penalty, or a combination thereof) in small sample clinical studies when the objective is to isolate interesting predictors in a classification context? It is not a question about model selection, nor am I asking about how to find optimal estimates of variable effect/importance. I don't plan to do strong inference but just to use multivariate modeling, hence avoiding testing each predictor against the outcome of interest one at a time, and taking their interrelationships into account.
I was just wondering if such an approach was already applied in this particular extreme case, say 20-30 subjects with data on 10-15 categorical or continuous variables. It is not exactly the $n\ll p$ case and I think the problem here is related to the number of classes we try to explain (which are often not well balanced), and the (very) small n. I am aware of the huge literature on this topic in the context of bioinformatics, but I didn't find any reference related to biomedical studies with psychometrically measured phenotypes (e.g. throughout neuropsychological questionnaires).
Any hint or pointers to relevant papers?
Update
I am open to any other solutions for analyzing this kind of data, e.g. C4.5 algorithm or its derivatives, association rules methods, and any data mining techniques for supervised or semi-supervised classification.
| Application of machine learning techniques in small sample clinical studies | CC BY-SA 2.5 | null | 2010-08-18T20:36:59.617 | 2015-06-20T20:44:31.313 | 2010-09-19T10:57:54.697 | 930 | 930 | [
"machine-learning",
"feature-selection"
] |
1857 | 2 | null | 1856 | 4 | null | One common rule of thumb is to have at least 10 times as many training data instances (not to speak of any test/validation data, etc.) as there are adjustable parameters in the classifier. Keep in mind that you have a problem wherein you need not only adequate data but also representative data. In the end, there is no systematic rule because there are so many variables when making this decision. As Hastie, Tibshirani,
and Friedman say in [The Elements of Statistical Learning](http://www-stat.stanford.edu/~tibs/ElemStatLearn/) (see Chapter 7):
>
it is too difficult to give a general
rule on how much training data is
enough; among other things, this
depends on the signal-to-noise ratio
of the underlying function, and the
complexity of the models being fit to
the data.
If you are new to this field, I recommend reading this short ["Pattern Recognition"](http://users.rowan.edu/~polikar/RESEARCH/PUBLICATIONS/wiley06.pdf) paper from the Encyclopedia of Biomedical Engineering which gives a brief summary of some of the data issues.
| null | CC BY-SA 2.5 | null | 2010-08-18T20:51:32.280 | 2010-08-18T21:23:54.503 | 2010-08-18T21:23:54.503 | 5 | 5 | null |
1858 | 2 | null | 1856 | 3 | null | I can assure you that RF would work in that case and its importance measure would be pretty insightful (because there will be no large tail of misleading unimportant attributes as in standard (n << p) settings). I can't recall right now any paper dealing with a similar problem, but I'll look for it.
| null | CC BY-SA 2.5 | null | 2010-08-18T21:28:17.517 | 2010-08-18T21:28:17.517 | null | null | null | null |
1860 | 1 | 1861 | null | 4 | 126 | I am studying a population of individuals who all begin with a measurable score of interest (ranging from -2 to 2) [call it "old"], and then they all undergo a change to a new score (also ranging from -2 to 2) ["new"]. Thus all the variation is in the change (which can be positive or negative), and there are also a variety of predictors that help to explain variation in the amount of change.
My initial model is simply:
```
change = a + bx + e
```
where x is my vector of predictors.
But now I'm concerned that some of these predictors could be correlated with the baseline (old) score. Is this, then, a better specification?
```
change = a + bx + old + e
```
Or perhaps
```
new = a + bx + old + e
```
Thanks!
| Regression specification choices | CC BY-SA 2.5 | null | 2010-08-18T22:58:05.027 | 2010-08-19T12:59:30.587 | null | null | 78 | [
"regression"
] |
1861 | 2 | null | 1860 | 3 | null | You are right, version 1 is not acceptable. The second or third options (as long as `old` has a coefficient that will be estimated) are both OK, and in fact equivalent with respect to the estimates for `a` and `b`. This can be seen if you replace `change` with `new-old` in the second equation and solve it for `new`. All that happens is that the coefficient of `old` in the third equation is larger by 1 than in the second. Other statistics such as R^2 will change, of course, as they are decomposing a different variability.
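The equivalence is easy to check numerically (R sketch with simulated data):
```
set.seed(1)
n      <- 200
x      <- rnorm(n); old <- rnorm(n)
new    <- 0.5 * x + 0.3 * old + rnorm(n)
change <- new - old

coef(lm(change ~ x + old))   # say the coefficient of old is c
coef(lm(new    ~ x + old))   # same a and b; coefficient of old is c + 1
```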
Note, however, that you have a different problem as well. If your scores are restricted to a -2 to 2 range, somebody with `old=-2` cannot possibly get worse, and similarly for `old=2` you can't get any better. Such a range restriction is usually not modeled well by a linear regression.
| null | CC BY-SA 2.5 | null | 2010-08-18T23:17:46.677 | 2010-08-19T12:59:30.587 | 2010-08-19T12:59:30.587 | 279 | 279 | null |
1862 | 1 | 1869 | null | 13 | 8268 | I've been struggling with the following problem, which hopefully is an easy one for statisticians (I'm a programmer with some exposure to statistics).
I need to summarize the responses to a survey (for management). The survey has 100+ questions, grouped in different areas (with about 5 to 10 questions per area). All answers are categorical (on an ordinal scale, they are like "not at all", "rarely" ... "daily or more frequently").
Management would like to get a summary for each area, and this is my problem: how do I aggregate the categorical answers within the related questions? The questions are too many to make a graph or even a lattice plot for each area. I favor a visual approach if possible, compared to, say, tables with numbers (alas, they won't read them).
The only thing I can come up with is to count the number of answers in each area, then plot the histogram.
Is there anything else available for categorical data?
I use R, but I'm not sure whether that's relevant; I feel this is more of a general statistics question.
| How to summarize categorical data? | CC BY-SA 3.0 | null | 2010-08-19T00:31:44.013 | 2017-07-14T08:29:38.673 | 2017-07-14T08:29:38.673 | 11887 | 840 | [
"categorical-data",
"data-transformation",
"descriptive-statistics"
] |
1863 | 1 | 1871 | null | 6 | 1617 | I ran a within subjects repeated measures experiment, where the independent variable had 3 levels. The dependent variable is a measure of correctness and is recorded as either correct / incorrect. Time taken to provide an answer was also recorded.
A within-subjects repeated measures ANOVA is used to establish whether there are significant differences in correctness (DV) between the 3 levels of the IV; there are. Now, I'd like to analyze whether there are significant differences in the time taken to provide the answers when the answers are 1) correct, and 2) incorrect.
My problem is: Across the levels there are different numbers of correct / incorrect answers, e.g. level 1 has 67 correct answers, level 2 has 30, level 3 has 25.
How can I compare the time taken for all correct answers across the 3 levels? I think this means it's unbalanced? Can I do 3 one-way ANOVAs as pairwise comparisons, while adjusting p downwards to account for each comparison?
Thanks
| Might be an unbalanced within subjects repeated measures? | CC BY-SA 2.5 | null | 2010-08-19T00:50:20.120 | 2010-08-21T07:21:06.203 | null | null | 993 | [
"variance",
"unbalanced-classes",
"repeated-measures"
] |
1864 | 2 | null | 1856 | 5 | null | I would have very little confidence in the generalisability of results of an exploratory analysis with 15 predictors and a sample size of 20.
- The confidence intervals of parameter estimates would be large. E.g., the 95% confidence interval on r = .30 with n = 20 is -0.17 to 0.66 .
- Issues tend to be compounded when you have multiple predictors used in an exploratory and data driven way.
In such circumstances, my advice would generally be to limit analyses to bivariate relationships.
If you take a bayesian perspective, then I'd say that your prior expectations are equally if not more important than the data.
| null | CC BY-SA 2.5 | null | 2010-08-19T00:59:56.543 | 2010-08-19T05:53:41.183 | 2010-08-19T05:53:41.183 | 183 | 183 | null |
1865 | 1 | null | null | 7 | 1201 | A standard deck has 52 cards, 26 red and 26 black. A run is a maximal contiguous block of cards of the same color.
Eg.
- (R,B,R,B,...,R,B) has 52 runs.
- (R,R,R,...,R,B,B,B,...,B) has 2 runs.
What is the expected number of runs in a shuffled deck of cards?
| What is the expected number of runs of same color in a standard deck of cards? | CC BY-SA 2.5 | null | 2010-08-19T01:15:27.043 | 2023-04-18T10:58:32.437 | 2010-09-19T16:20:20.837 | null | 994 | [
"probability",
"games"
] |
1866 | 1 | 1918 | null | 13 | 4633 | Following on from the recent questions we had [here](https://stats.stackexchange.com/questions/1818/how-to-determine-the-sample-size-needed-for-repeated-measurement-anova/1823#1823).
I was hoping to find out whether anyone has come across, or can share, R code for performing a custom power analysis based on simulation for a linear model.
Later I would obviously like to extend it to more complex models, but `lm` seems the right place to start.
| How to simulate a custom power analysis of an lm model (using R)? | CC BY-SA 4.0 | null | 2010-08-19T02:10:15.867 | 2021-08-19T18:16:33.700 | 2021-08-19T18:08:54.867 | 11887 | 253 | [
"r",
"simulation",
"statistical-power"
] |
1867 | 2 | null | 1863 | 2 | null | So this is a one-way repeated measures ANOVA - with the "Y" being the time until an answer was given, and the factor having 3 levels (each subject having all three of them).
I think the easiest way for doing this would be to take the mean response time for each subject for each of the three levels (which will results in 3 numbers per subject).
And then run a Friedman test on that (there is also a [post hoc Friedman test in R](http://www.r-statistics.com/2010/02/post-hoc-analysis-for-friedmans-test-r-code/), in case you would want that - I assume you would).
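A sketch of what that could look like in base R (hypothetical aggregated data: one mean latency per subject per level):
```
dat <- data.frame(
  subject = factor(rep(1:10, each = 3)),
  level   = factor(rep(c("L1", "L2", "L3"), times = 10)),
  latency = rnorm(30, mean = 700, sd = 80)   # made-up mean latencies (ms)
)
friedman.test(latency ~ level | subject, data = dat)
```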
The downside of this is that it assumes, in a sense, that your estimates of the three means (a mean for each of the three levels, per subject) are equally precise, where in fact they are not. You have more variability in your estimate for level 3 than for level 1.
Realistically, I would ignore that. Theoretically, I hope someone here can offer a better solution so both of us would be able to learn :)
| null | CC BY-SA 2.5 | null | 2010-08-19T02:35:58.207 | 2010-08-19T02:35:58.207 | null | null | 253 | null |
1868 | 2 | null | 1860 | 2 | null | A few references that you might find useful:
- Edwards (2001) has a nice article called Ten Difference Score Myths.
- I have a post with some general points on change scores.
| null | CC BY-SA 2.5 | null | 2010-08-19T03:03:39.437 | 2010-08-19T03:03:39.437 | null | null | 183 | null |
1869 | 2 | null | 1862 | 10 | null | You really need to figure out what question you are trying to answer - or what question management is most interested in. Then you can select the survey questions that are most relevant to your problem.
Without knowing anything about your problem or dataset, here are some generic solutions:
- Visually represent the answers as clusters. My favorite is by either using dendrograms or just plotting on an xy axis (Google "cluster analysis r" and go to the first result by statmethods.net)
- Rank the questions from greatest to least "daily or more frequently" responses. This is an example that may not exactly work for you but perhaps it will inspire you http://www.programmingr.com/content/building-scoring-and-ranking-systems-r
- Crosstabs: if, for example, you have a question "How often do you come in late for work?" and "How often do you use Facebook?", by crosstabbing the two questions you can find out the percentage of people who rarely do both, or who do both every day. (Google "r frequency crosstabs" or go to the aforementioned statmethods.net)
- Correlograms. I don't have any experience with these but I saw it also on the statmethods.net website. Basically you find which questions have the highest correlation and then create a table. You may find this useful although it looks kind of "busy."
| null | CC BY-SA 2.5 | null | 2010-08-19T03:15:34.427 | 2010-08-19T03:21:26.100 | 2010-08-19T03:21:26.100 | 995 | 995 | null |
1870 | 1 | 1879 | null | 8 | 875 | The question is in the header, but I would extend the context a bit.
Next semester I am due to be a teaching assistant (TA) in a course in statistics, where I would need to help sociology students learn to use SPSS. I don't know SPSS yet, and would like to learn how to use it.
I was thinking of taking a simple dataset and starting to review it with methods I know, thus mapping out where the methods I know of live in the GUI. Once finished, I would try to explore more options.
Can someone propose other/better strategies to master a new statistical graphical user interface (GUI)? (in my case SPSS, but it could apply to many other GUI's).
| Learning how to use a new statistical GUI? | CC BY-SA 3.0 | null | 2010-08-19T04:35:15.690 | 2016-12-19T21:47:15.810 | 2016-12-19T21:47:15.810 | 22468 | 253 | [
"spss",
"references",
"software",
"teaching"
] |
1871 | 2 | null | 1863 | 3 | null | It's not imbalanced because your repeated measures should be averaged across such subgroups within subject beforehand. The only thing imbalanced is the quality of the estimates of your means.
Just as you aggregated your accuracies to get a percentage correct for your ANOVA in the first place, you average your latencies as well. Each participant provides 6 values; therefore it is not imbalanced.
Most likely though... the ANOVA was not the best analysis in the first place. You should probably be using mixed-effect modelling. For the initial test of the accuracies you'd use mixed effects logistic regression. For the second one you propose it would be a 3-levels x 2-correctnesses analysis of the latencies. Both would have subjects as a random effect.
In addition it's often best to do some sort of normality correction on the times like a log or -1/T correction. This is less of a concern in ANOVA because you aggregate across a number of means first and that often ameliorates the skew of latencies through the central limit theorem. You could check with a boxcox analysis to see what fits best.
On a more important note though... what are you expecting to find? Is this just exploratory? What would it mean to have different latencies in the correct and incorrect groups and what would it mean for them to interact? Unless you are fully modelling the relationship between accuracy and speed in your experiment, or you have a full model that you are testing, then you are probably wasting your time. A latency with an incorrect response means that someone did something other than what you wanted them to... and it could be anything. That's why people almost always only work with the latencies to the correct responses.
(these two types of responses also often have very different distributions with incorrect much flatter because they disproportionately make up both the short and long latencies)
| null | CC BY-SA 2.5 | null | 2010-08-19T04:35:45.863 | 2010-08-21T07:21:06.203 | 2010-08-21T07:21:06.203 | 601 | 601 | null |
1872 | 2 | null | 1870 | 6 | null | Since you are pretty well versed in R, get a copy of Muenchen's "[R for SAS and SPSS Users](http://www.springer.com/statistics/computanional+statistics/book/978-0-387-09417-5)" (Springer, 2009) and work backwards.
| null | CC BY-SA 2.5 | null | 2010-08-19T04:47:46.457 | 2010-08-19T04:47:46.457 | null | null | 597 | null |
1873 | 1 | null | null | 3 | 194 | I posted this on mathoverflow, but they sent me here. This question relates to a problem I had at work a while ago, doing a little data mining at a car rental company. Names changed, of course. I'm using Oracle DBMS if it matters.
There was a flight of steps out the front of our building. It had a dodgy step on it, on which people often stub their toes.
I had records for everyone who works in the building, detailing how many times they climbed these steps and how many of these times they stubbed their toes on the dodgy step. There's a total of 3000 stair-climbing incidents and 1000 toe-stubbing incidents.
Jack climbed the steps 15 times and stubbed his toes 7 times, which is 2 more than you'd expect. What's the probability that this is just random, vs the probability that Jack is actually clumsy?
I'm pretty sure from half-remembered statistics 1 that it's something to do with chi-squared, but it beats me where to go from there.
...
Of course, we actually had several flights of steps, each with different rates of toe stubbing and instep bashing. How would I combine the stats from those to get a more accurate estimate of the likelihood of Jack being clumsy? We can assume that there's no systematic bias in respect of more clumsy people being inclined to use certain flights of steps.
| Based on my data, is Jack likely to be clumsy? | CC BY-SA 2.5 | null | 2010-08-19T04:50:25.533 | 2010-09-16T06:49:06.127 | 2010-09-16T06:49:06.127 | null | 997 | [
"hypothesis-testing"
] |
1874 | 1 | null | null | 5 | 167 | I'm looking to construct a 3-D surface of a part of the brain based on 2-D contours from cross-sectional slices from multiple angles. Once I get this shape, I want to "fit" it to another set of contours via rescaling.
I'm aspiring to do this in the context of an MCMC analysis (so as to be able to make inferences), so it would be very nice if I could easily compute the volume of the rescaled surface and the minimum distance between a given point and the surface (accurate approximations are fine).
What would be a good image reconstruction algorithm that allows for volume and distance to be quickly calculated?
| Parametric Surface Reconstruction from Contours with Quick Rescaling | CC BY-SA 2.5 | null | 2010-08-19T04:53:32.270 | 2010-10-12T18:11:52.460 | null | null | 996 | [
"bayesian",
"markov-chain-montecarlo",
"fitting",
"optimal-scaling",
"interpolation"
] |
1875 | 1 | 1903 | null | 21 | 1443 | A question which bothered me for some time, which I don't know how to address:
Every day, my weatherman gives a percentage chance of rain (let's assume it's calculated to 9000 digits and he has never repeated a number). Every subsequent day, it either rains or does not rain.
I have years of data - pct chance vs rain or not. Given this weatherman's history, if he says tonight that tomorrow's chance of rain is X, then what's my best guess as to what the chance of rain really is?
| Is my weatherman accurate? | CC BY-SA 2.5 | null | 2010-08-19T05:56:06.483 | 2020-12-26T15:49:48.700 | 2020-12-26T15:49:48.700 | 11887 | 997 | [
"hypothesis-testing",
"forecasting",
"scoring-rules"
] |
1876 | 2 | null | 1862 | 8 | null | Standard options include:
- getting the mean for items within a scale (e.g., if the scale is 1 to 5, the mean will be 1 to 5)
- converting each item to a binary measure (e.g., if item >= 3, then 1, else 0) and then taking the mean of this binary response
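Both options are a line each in R (sketch with a made-up 200-person by 6-item response matrix):
```
set.seed(1)
items <- matrix(sample(1:5, 200 * 6, replace = TRUE), nrow = 200)

scale_mean <- rowMeans(items)        # option 1: mean on the 1-5 metric
pct_above  <- rowMeans(items >= 3)   # option 2: proportion of items rated >= 3
mean(scale_mean); mean(pct_above)    # organisation-level summaries
```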
Given that you are aggregating over items and over large samples of people in the organisation, both options above (i.e., the mean of 1 to 5 or the mean percentage above a point) will be reliable at the organisational level ([see here for further discussion](http://jeromyanglim.blogspot.com/2009/10/job-satisfaction-measurement-scales.html)). Thus, either of the above options is basically communicating the same information.
In general I wouldn't be worried about the fact that items are categorical. By the time you create scales by aggregating over items and then aggregate over your sample of respondents, the scale will be a close approximation to a continuous scale.
Management may find one metric easier to interpret. When I get Quality of Teaching scores (i.e., the average student satisfaction score of, say, 100 students), it is the average on a 1 to 5 scale and that's fine. Over the years, after seeing my own scores from year to year and also seeing some norms for the university, I've developed a frame of reference for what different values mean.
However, management sometimes prefers to think about the percentage endorsing a statement, or the percentage of positive responses even when it is in a sense the mean percentage.
The main challenge is to give some tangible frame of reference for the scores. Management will want to know what the numbers actually mean. For example, if the mean response for a scale is 4.2, what does that mean? Is it good? Is it bad? Is it just okay?
If you are using the survey over multiple years or in different organisations, then you can start to develop some norms. Access to norms is one reason organisations often get an external survey provider or use a standard survey.
You may also wish to run a factor analysis to validate that the assignment of items to scales is empirically justifiable.
In terms of a visual approach, you can have a simple line or bar graph with the scale type on the x-axis and the score on the y-axis. If you have normative data, you could add that also.
| null | CC BY-SA 2.5 | null | 2010-08-19T06:13:02.550 | 2010-08-19T06:13:02.550 | null | null | 183 | null |
1877 | 2 | null | 1873 | 3 | null | ```
chisq.test(c(15,7),p=c(3000,1000),rescale.p=TRUE)
Chi-squared test for given probabilities
data: c(15, 7)
X-squared = 0.5455, df = 1, p-value = 0.4602
```
There is not enough evidence against the null hypothesis (that this is just a random incident).
A difference from the expected value as big as or bigger than the one observed will arise by chance alone in more than 46% of cases and is clearly not statistically significant.
The chi-squared value is
```
sum((c(15,7) - 22*c(3000,1000)/4000)^2 / (22*c(3000,1000)/4000))
[1] 0.5454545
```
and the p-value comes from the right-hand tail of the cumulative distribution function of
the chi-squared distribution, `1-pchisq`, with 1 degree of freedom (2 categories - 1 for
contingency; the total count must be 22)
```
1-pchisq(0.5454545,1)
[1] 0.460181
```
exactly as we obtained using the built-in chisq.test function, above.
EDIT
Alternatively, you could carry out a binomial test:
```
binom.test(c(15,7),p=3/4)
Exact binomial test
data: c(15, 7)
number of successes = 15, number of trials = 22, p-value = 0.463
alternative hypothesis: true probability of success is not equal to 0.75
95 percent confidence interval:
0.4512756 0.8613535
sample estimates:
probability of success
0.6818182
```
You can see that the 95% confidence interval for the proportion of successes (0.45, 0.86)
contains 0.75, so there is no evidence against a 3:1 success ratio in these data. The p value is slightly different than it was in the chi-squared test, but the interpretation is exactly the same.
| null | CC BY-SA 2.5 | null | 2010-08-19T06:16:25.087 | 2010-08-19T06:29:38.990 | 2010-08-19T06:29:38.990 | 339 | 339 | null |
1878 | 2 | null | 1875 | 11 | null | Comparison of probability forecasts for a binary event (or a discrete random variable) can be based on the [Brier score](http://en.wikipedia.org/wiki/Brier_score),
but you can also use a [ROC curve](http://en.wikipedia.org/wiki/Receiver_operating_characteristic), since any probability forecast of this type can be transformed into a discrimination procedure with a varying threshold.
Indeed, you can say "it will rain" if your probability is greater than $\tau$ and evaluate the misses, false discoveries, true discoveries and true negatives for different values of $\tau$.
You should take a look at how the European Centre for Medium-Range Weather Forecasts ([ECMWF](http://www.ecmwf.int/products/forecasts/guide/The_verification_of_ECMWF_forecasts.html)) verifies its forecasts.
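The Brier score itself is a one-liner (R sketch with invented forecasts and outcomes):
```
p <- c(0.9, 0.2, 0.6, 0.1, 0.8)   # forecast probabilities of rain
o <- c(1,   0,   0,   0,   1)     # observed outcomes (1 = rain)
mean((p - o)^2)                   # Brier score: lower is better, 0 is perfect
```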
| null | CC BY-SA 2.5 | null | 2010-08-19T06:22:09.973 | 2011-01-25T18:19:47.763 | 2011-01-25T18:19:47.763 | 223 | 223 | null |
1879 | 2 | null | 1870 | 11 | null | As someone who made the shift the other way, from SPSS to R, I'd say that SPSS is relatively simple and intuitive compared with R. The menus and dialog boxes guide you through the process. Of course this means that it is also fairly easy to run analyses that don't make sense. And the GUI leads to less flexible analyses and tedious button pressing, especially for repetitive analyses.
Thus, your approach of taking a dataset and just playing around might be sufficient.
There's plenty of how-to books out there, such as:
- Discovering Statistics Using SPSS
- SPSS Survival Manual
There's also plenty of websites offering tutorials:
- Andy Field
- My old notes
- UCLA on SPSS
I'd also recommend that if you are teaching students about SPSS, you encourage them to use syntax. Using SPSS syntax is not as good as using technologies like [R and Sweave](http://www.r-bloggers.com/getting-started-with-sweave-r-latex-eclipse-statet-texlipse/).
However, using syntax is much better than just pressing menus and buttons in an ad hoc way and then wondering later what you've actually done.
I wrote a post listing tips for using [SPSS syntax in order to approximate reproducible research with SPSS](http://jeromyanglim.blogspot.com/2009/10/introduction-to-spss-syntax-advice-for.html).
| null | CC BY-SA 2.5 | null | 2010-08-19T06:28:05.217 | 2010-08-19T06:28:05.217 | null | null | 183 | null |
1880 | 2 | null | 1866 | 3 | null | Here are a few sources of simulation code in R. I'm not sure if any specifically address linear models, but perhaps they provide enough of an example to get the gist:
- Benjamin Bolker has written a great book Ecological Data and Models with R. An early draft of the whole book along with Sweave code is available online. Chapter 5 addresses power analysis and simulation.
There's another couple of examples of simulation at the following sites:
- http://www.personality-project.org/R/r.datageneration.html
- http://psy-ed.wikidot.com/simulation
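Since the question mentions `lm` specifically, here is a bare-bones sketch of the simulation approach (the effect size, n, and alpha below are arbitrary placeholders):
```
power_sim <- function(n = 50, b = 0.5, sims = 2000, alpha = 0.05) {
  pvals <- replicate(sims, {
    x <- rnorm(n)
    y <- b * x + rnorm(n)                              # assumed "true" model
    summary(lm(y ~ x))$coefficients["x", "Pr(>|t|)"]   # p-value for the slope
  })
  mean(pvals < alpha)                                  # estimated power
}
power_sim()
```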
| null | CC BY-SA 2.5 | null | 2010-08-19T06:35:02.147 | 2010-08-19T06:35:02.147 | null | null | 183 | null |
1881 | 1 | 1901 | null | 16 | 6561 | I would like advice on an analysis method I am using, to know whether it is statistically sound.
I have measured two point processes $T^1 = t^1_1, t^1_2, ..., t^1_n$ and $T^2 = t^2_1, t^2_2, ..., t^2_m$ and I want to determine if the events in $T^1$ are somehow correlated to the events in $T^2$.
One of the methods that I have found in the literature is that of constructing a cross-correlation histogram: for each $t^1_n$ we find the delay to all the events of $T^2$ that fall in a given window of time (before and after $t^1_n$), and then we construct a histogram of all these delays.
If the two processes are not correlated I would expect a flat histogram, as the probability of having an event in $T^2$ after (or before) an event in $T^1$ is equal at all delays. On the other hand if there is a peak in the histogram, this suggests that the two point process are somehow influencing each other (or, at least, have some common input).
Now, this is all well and good, but how do I determine whether the histograms do have a peak (I have to say that for my particular set of data they're clearly flat, but still it would be nice to have a statistical way of confirming that)?
So, here is what I've done: I've repeated the process of generating the histogram several (1000) times, keeping $T^1$ as it is and using a "shuffled" version of $T^2$.
To shuffle $T^2$ I calculate the intervals between all the events, shuffle them and sum them to reconstitute a new point process. In R I simply do this with:
```
times2.swp <- cumsum(sample(diff(times2)))
```
So, I end up with 1000 new histograms, which show me the density of events in $T^{2*}$ relative to $T^1$.
For each bin of these histograms (they're all binned in the same way) I calculate the density value below which 95% of the shuffled histograms fall. In other words I'm saying, for instance: at a time delay of 5 ms, in 95% of the shuffled point processes the probability of finding an event in $T^{2*}$ after an event in $T^1$ is at most x.
I would then take this 95% value for all of the time delays and use it as some "confidence limit" (probably this is not the correct term) so that anything that goes over this limit in the original histogram can be considered a "true peak".
Question 1: is this method statistically correct? If not how would you tackle this problem?
Question 2: another thing that I want to see is whether there is a "longer" type of correlation of my data. For instance there may be similar changes in the rate of events in the two point processes (note that they may have quite different rates), but I'm not sure how to do that. I thought of creating an "envelope" of each point process using some sort of smoothing kernel and then performing a cross-correlation analysis of the two envelopes. Could you suggest any other possible type of analysis?
Thank you and sorry for this very long question.
| Analysis of cross correlation between point-processes | CC BY-SA 2.5 | null | 2010-08-19T06:42:15.767 | 2010-08-19T13:07:22.083 | null | null | 582 | [
"point-process",
"cross-correlation"
] |
1882 | 2 | null | 1829 | 82 | null | To answer the letter of the question, "ordinary least squares" is not an algorithm; rather it is a type of problem in computational linear algebra, of which linear regression is one example. Usually one has data $\{(x_1,y_1),\dots,(x_m,y_m)\}$ and a tentative function ("model") to fit the data against, of the form $f(x)=c_1 f_1(x)+\dots+c_n f_n(x)$. The $f_j(x)$ are called "basis functions" and can be anything from monomials $x^j$ to trigonometric functions (e.g. $\sin(jx)$, $\cos(jx)$) and exponential functions ($\exp(-jx)$). The term "linear" in "linear regression" here does not refer to the basis functions, but to the coefficients $c_j$, in that taking the partial derivative of the model with respect to any of the $c_j$ gives you the factor multiplying $c_j$; that is, $f_j(x)$.
One now has an $m\times n$ rectangular matrix $\mathbf A$ ("design matrix") that (usually) has more rows than columns, and each entry is of the form $f_j(x_i)$, $i$ being the row index and $j$ being the column index. OLS is now the task of finding the vector $\mathbf c=(c_1\,\dots\,c_n)^\top$ that minimizes the quantity $\sqrt{\sum\limits_{j=1}^{m}\left(y_j-f(x_j)\right)^2}$ (in matrix notation, $\|\mathbf{A}\mathbf{c}-\mathbf{y}\|_2$ ; here, $\mathbf{y}=(y_1\,\dots\,y_m)^\top$ is usually called the "response vector").
There are at least three methods used in practice for computing least-squares solutions: the normal equations, QR decomposition, and singular value decomposition. In brief, they are ways to transform the matrix $\mathbf{A}$ into a product of matrices that are easily manipulated to solve for the vector $\mathbf{c}$.
George already showed the method of normal equations in his answer; one just solves the $n\times n$ set of linear equations
$\mathbf{A}^\top\mathbf{A}\mathbf{c}=\mathbf{A}^\top\mathbf{y}$
for $\mathbf{c}$. Due to the fact that the matrix $\mathbf{A}^\top\mathbf{A}$ is symmetric positive (semi)definite, the usual method used for this is Cholesky decomposition, which factors $\mathbf{A}^\top\mathbf{A}$ into the form $\mathbf{G}\mathbf{G}^\top$, with $\mathbf{G}$ a lower triangular matrix. The problem with this approach, despite the advantage of being able to compress the $m\times n$ design matrix into a (usually) much smaller $n\times n$ matrix, is that this operation is prone to loss of significant figures (this has something to do with the "condition number" of the design matrix).
A slightly better way is QR decomposition, which directly works with the design matrix. It factors $\mathbf{A}$ as $\mathbf{A}=\mathbf{Q}\mathbf{R}$, where $\mathbf{Q}$ is an orthogonal matrix (multiplying such a matrix with its transpose gives an identity matrix) and $\mathbf{R}$ is upper triangular. $\mathbf{c}$ is subsequently computed as $\mathbf{R}^{-1}\mathbf{Q}^\top\mathbf{y}$. For reasons I won't get into (just see any decent numerical linear algebra text, like [this one](http://books.google.com/books?id=epilvM5MMxwC&pg=PA385)), this has better numerical properties than the method of normal equations.
One variation in using the QR decomposition is the [method of seminormal equations](http://dx.doi.org/10.1016/0024-3795%2887%2990101-7). Briefly, if one has the decomposition $\mathbf{A}=\mathbf{Q}\mathbf{R}$, the linear system to be solved takes the form
$$\mathbf{R}^\top\mathbf{R}\mathbf{c}=\mathbf{A}^\top\mathbf{y}$$
Effectively, one is using the QR decomposition to form the Cholesky triangle of $\mathbf{A}^\top\mathbf{A}$ in this approach. This is useful for the case where $\mathbf{A}$ is sparse, and the explicit storage and/or formation of $\mathbf{Q}$ (or a factored version of it) is unwanted or impractical.
Finally, the most expensive, yet safest, way of solving OLS is the singular value decomposition (SVD). This time, $\mathbf{A}$ is factored as $\mathbf{A}=\mathbf{U}\mathbf \Sigma\mathbf{V}^\top$, where $\mathbf{U}$ and $\mathbf{V}$ are both orthogonal, and $\mathbf{\Sigma}$ is a diagonal matrix, whose diagonal entries are termed "singular values". The power of this decomposition lies in the diagnostic ability granted to you by the singular values, in that if one sees one or more tiny singular values, then it is likely that you have chosen a not entirely independent basis set, thus necessitating a reformulation of your model. (The "condition number" mentioned earlier is in fact related to the ratio of the largest singular value to the smallest one; the ratio of course becomes huge (and the matrix is thus ill-conditioned) if the smallest singular value is "tiny".)
This is merely a sketch of these three algorithms; any good book on computational statistics and numerical linear algebra should be able to give you more relevant details.
| null | CC BY-SA 3.0 | null | 2010-08-19T06:42:28.403 | 2016-06-13T18:36:26.280 | 2016-06-13T18:36:26.280 | 830 | 830 | null |
1883 | 1 | null | null | 10 | 2075 | What areas of statistics have been substantially revolutionised in the last 50 years? For example, about 40 years ago, Akaike with colleagues revolutionised the area of statistical model discrimination. About 10 years ago, Hyndman with colleagues revolutionised the area of exponential smoothing. About XX years ago, ...
How do I possibly continue the list, with years and names please? By statistics I mean all four of its types from Bartholomew's 1995 presidential address, Chambers's greater and lesser statistics together, as featured in Hand's recent presidential address on 'Modern statistics', and so on - anything professionally relevant.
| Revolutions in statistics for the last 50 years? | CC BY-SA 3.0 | null | 2010-08-19T07:00:53.840 | 2015-11-22T13:36:21.940 | 2015-11-22T13:36:21.940 | 28666 | 273 | [
"history"
] |
1884 | 2 | null | 1865 | 2 | null | Suppose $X_n$ denotes the color of the $n$th card in the shuffled deck.
Then note that the last card always denotes the end of a run. Other ends of runs are characterized by $X_n\ne X_{n+1}$ which indicates a run ending at $n$.
Note that $P(X_n\ne X_{n+1})=26/51$ (since once you fix a card, you can choose another card from remaining 51 out of which 26 will have a different color).
So summing up the indicators $X_n\ne X_{n+1}$ we get the number of runs -
$$\#\text{runs}=1+\sum_{n=1}^{51}\mathbb{I}_{X_n\ne X_{n+1}}.$$
So $$E(\#\text{runs})=1+\sum_{n=1}^{51}P(X_n\ne X_{n+1})=1+\sum_{n=1}^{51}26/51=27.$$
---
[Source](https://math.stackexchange.com/a/2764/1074816)
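A quick simulation check in R agrees with the value of 27 (added purely as an illustration; it is not part of the linked source):
```
deck <- rep(c("R", "B"), each = 26)
runs <- replicate(10000, length(rle(sample(deck))$lengths))
mean(runs)   # close to 27
```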
| null | CC BY-SA 4.0 | null | 2010-08-19T08:34:51.493 | 2023-04-18T10:58:32.437 | 2023-04-18T10:58:32.437 | 362671 | 994 | null |
1885 | 2 | null | 1850 | 9 | null | To my understanding, Hedges' g is a somewhat more accurate version of Cohen's d (with pooled SD) in that we add a correction factor for small samples. Both measures generally agree when the homoscedasticity assumption is not violated, but we may find situations where this is not the case; see e.g. McGrath & Meyer, Psychological Methods 2006, 11(4): 386-401. Other papers are listed at the end of my reply.
I generally find that in almost every psychological or biomedical study, it is Cohen's d that is reported; this probably stems from the well-known rule of thumb for interpreting its magnitude (Cohen, 1988). I don't know about any recent paper considering Hedges' g (or Cliff's delta as a non-parametric alternative). Bruce Thompson has a [revised version](https://web.archive.org/web/20120322231849/http://people.cehd.tamu.edu/%7Ebthompson/apaeffec.htm) of the APA section on effect size.
Googling about Monte Carlo studies around effect size measures, I found this paper which might be interesting (I only read the abstract and the simulation setup): [Robust Confidence Intervals for Effect Sizes: A Comparative Study of Cohen’s d and Cliff’s Delta Under Non-normality and Heterogeneous Variances](https://web.archive.org/web/20140917160457/http://www.coedu.usf.edu/main/departments/me/documents/cohen.pdf) (pdf).
About your 2nd comment, the `MBESS` R package includes various utilities for ES calculation (e.g., `smd` and related functions).
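For reference, a hand-rolled sketch of the two point estimates (not using MBESS; the data are invented, and the small-sample correction uses the common approximation J = 1 - 3/(4*df - 1)):
```
set.seed(1)
g1 <- rnorm(12, mean = 0.6); g2 <- rnorm(15, mean = 0)   # two small groups
n1 <- length(g1); n2 <- length(g2)

sp <- sqrt(((n1 - 1) * var(g1) + (n2 - 1) * var(g2)) / (n1 + n2 - 2))  # pooled SD
d  <- (mean(g1) - mean(g2)) / sp        # Cohen's d
J  <- 1 - 3 / (4 * (n1 + n2 - 2) - 1)   # small-sample correction factor
c(cohens_d = d, hedges_g = J * d)
```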
Other references
- Zakzanis, K.K. (2001). Statistics to tell the truth, the whole truth, and nothing but the truth: Formulae, illustrative numerical examples, and heuristic interpretation of effect size analyses for neuropsychological researchers. Archives of Clinical Neuropsychology, 16(7), 653-667.
- Durlak, J.A. (2009). How to Select, Calculate, and Interpret Effect Sizes. Journal of Pediatric Psychology
| null | CC BY-SA 4.0 | null | 2010-08-19T08:50:44.340 | 2020-11-03T20:02:33.500 | 2020-11-03T20:02:33.500 | 930 | 930 | null |
1887 | 2 | null | 10 | 48 | null | Maybe too late but I add my answer anyway...
It depends on what you intend to do with your data: If you are interested in showing that scores differ when considering different group of participants (gender, country, etc.), you may treat your scores as numeric values, provided they fulfill usual assumptions about variance (or shape) and sample size. If you are rather interested in highlighting how response patterns vary across subgroups, then you should consider item scores as discrete choice among a set of answer options and look for log-linear modeling, ordinal logistic regression, item-response models or any other statistical model that allows to cope with polytomous items.
As a rule of thumb, one generally considers that having 11 distinct points on a scale is sufficient to approximate an interval scale (for interpretation purposes, see @xmjx's comment). Likert items may be regarded as a true ordinal scale, but they are often used as numeric, and we can compute their mean or SD. This is often done in attitude surveys, although it is wise to report both the mean/SD and the % of responses in, e.g., the two highest categories.
When using summated scale scores (i.e., we add up the score on each item to compute a "total score"), usual statistics may be applied, but you have to keep in mind that you are now working with a latent variable, so the underlying construct should make sense! In psychometrics, we generally check that (1) unidimensionality of the scale holds, and (2) scale reliability is sufficient. When comparing two such scale scores (for two different instruments), we might even consider using attenuated correlation measures instead of the classical Pearson correlation coefficient.
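As a small illustration of the summated-score idea, here is an R sketch (entirely hypothetical data) that builds a total score and computes Cronbach's alpha from its usual variance-based formula:
```r
set.seed(42)
# Hypothetical responses: 100 participants, 5 Likert items scored 1-5
items <- as.data.frame(replicate(5, sample(1:5, 100, replace = TRUE)))
total <- rowSums(items)                 # summated scale score
k <- ncol(items)
alpha <- (k / (k - 1)) * (1 - sum(apply(items, 2, var)) / var(total))  # Cronbach's alpha
# Note: these items are purely random, so alpha will be near zero;
# items from a real unidimensional scale should correlate and give a higher alpha.
c(mean_total = mean(total), sd_total = sd(total), alpha = alpha)
```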
Classical textbooks include:
1. Nunnally, J.C. and Bernstein, I.H. (1994). Psychometric Theory (3rd ed.). McGraw-Hill Series in Psychology.
2. Streiner, D.L. and Norman, G.R. (2008). Health Measurement Scales. A practical guide to their development and use (4th ed.). Oxford.
3. Rao, C.R. and Sinharay, S., Eds. (2007). Handbook of Statistics, Vol. 26: Psychometrics. Elsevier Science B.V.
4. Dunn, G. (2000). Statistics in Psychiatry. Hodder Arnold.
You may also have a look at [Applications of latent trait and latent class models in the social sciences](http://www.ipn.uni-kiel.de/aktuell/buecher/rostbuch/inhalt.htm), from Rost & Langeheine, and W. Revelle's website on [personality research](http://www.personality-project.org/).
When validating a psychometric scale, it is important to look at so-called ceiling/floor effects (large asymmetry resulting from participants scoring at the lowest/highest response category), which may seriously impact any statistics computed when treating the scores as numeric variables (e.g., country aggregation, t-test). This raises specific issues in cross-cultural studies, since it is known that overall response distributions in attitude or health surveys differ from one country to another (e.g., Chinese respondents vs. those coming from western countries tend to show specific response patterns, the former generally having more extreme scores at the item level; see e.g. Song, X.-Y. (2007) Analysis of multisample structural equation models with applications to Quality of Life data, in Handbook of Latent Variable and Related Models, Lee, S.-Y. (Ed.), pp 279-302, North-Holland).
More generally, you should look at the psychometric literature, which makes extensive use of Likert items, if you are interested in measurement issues. Various statistical models have been developed and are currently gathered under the Item Response Theory framework.
| null | CC BY-SA 3.0 | null | 2010-08-19T10:00:00.370 | 2012-07-11T11:04:18.083 | 2012-07-11T11:04:18.083 | 930 | 930 | null |
1888 | 2 | null | 203 | 13 | null | Clason & Dormody discussed the issue of statistical testing for Likert items ([Analyzing data measured by individual Likert-type items](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.113.2197&rep=rep1&type=pdf)). I think that a bootstrapped test is ok when the two distributions look similar (bell shaped and equal variance). However, a test for categorical data (e.g. a trend or Fisher test, or ordinal logistic regression) would be interesting too, since it allows one to check how responses distribute across the item categories; see Agresti's book on Categorical Data Analysis (Chapter 7 on Logit models for multinomial responses).
Aside from this, you can imagine situations where the t-test or any other non-parametric test would fail if the response distribution is strongly imbalanced between the two groups. For example, if all people from group A answer 1 or 5 (in equal proportion) whereas all people in group B answer 3, then you end up with identical within-group means and the test is not meaningful at all, even though in this case the homoscedasticity assumption is largely violated.
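To make this pathological case concrete, here is a minimal R sketch (hypothetical data, base R only): the t-test sees nothing while a test on the contingency table clearly detects the difference in response distributions.
```r
set.seed(1)
grp  <- rep(c("A", "B"), each = 40)
resp <- c(sample(rep(c(1, 5), 20)), rep(3, 40))   # A answers 1 or 5 equally often, B answers 3
t.test(resp ~ grp)$p.value                        # 1: identical group means, t-test is blind
fisher.test(table(grp, resp))$p.value             # tiny: response distributions clearly differ
```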
| null | CC BY-SA 3.0 | null | 2010-08-19T10:19:35.723 | 2014-12-15T03:53:44.170 | 2014-12-15T03:53:44.170 | 805 | 930 | null |
1889 | 2 | null | 1883 | 15 | null | Efron's work on the [Bootstrap](http://en.wikipedia.org/wiki/Bootstrapping_%28statistics%29) comes to mind.
| null | CC BY-SA 2.5 | null | 2010-08-19T10:29:27.773 | 2010-08-19T10:44:00.393 | 2010-08-19T10:44:00.393 | null | 334 | null |
1890 | 2 | null | 1883 | 4 | null | The creation of this site ;-)
| null | CC BY-SA 2.5 | null | 2010-08-19T10:45:32.653 | 2010-08-19T10:45:32.653 | null | null | null | null |
1891 | 2 | null | 1883 | 6 | null | Generalized linear models due to the recently deceased John Nelder and Robert Wedderburn.
| null | CC BY-SA 2.5 | null | 2010-08-19T11:03:39.603 | 2010-08-19T11:03:39.603 | null | null | 521 | null |
1892 | 2 | null | 1883 | 4 | null |
- Revolution 1: S (ACM Software Systems Award)
- Revolution 2: R (Ross Ihaka (1998) on the history of R to that point)
| null | CC BY-SA 2.5 | null | 2010-08-19T11:06:22.780 | 2010-08-19T11:06:22.780 | null | null | 183 | null |
1893 | 2 | null | 1883 | 15 | null | The application of Bayesian statistics with Monte Carlo methods.
| null | CC BY-SA 2.5 | null | 2010-08-19T11:21:55.827 | 2010-08-19T11:21:55.827 | null | null | 5 | null |
1894 | 2 | null | 1883 | 11 | null | [Ensemble methods](http://en.wikipedia.org/wiki/Ensemble_learning) like boosting, bagging, ... etc are another potential candidate.
| null | CC BY-SA 2.5 | null | 2010-08-19T11:22:51.240 | 2010-08-19T11:22:51.240 | null | null | 334 | null |
1895 | 1 | null | null | 4 | 1108 | I have a software benchmark which is quite noisy. I am trying to find the bugs that are causing the noise, and I need to be able to measure it somehow.
The benchmark comprises a number of subbenchmarks, for example:
```
"3d-cube": 31.56884765625,
"3d-morph": 21.89599609375,
"3d-raytrace": 51.802978515625,
"access-binary-trees": 15.09521484375,
"access-fannkuch": 45.578857421875,
"access-nbody": 8.651123046875,
```
The times are in milliseconds. The times typically vary between runs. For example, on my machine, the "3d-cube" benchmark tends to take around 35ms, but I've seen it go as high as 44ms, and 31ms (above) is uncharacteristically low.
My aim is to change the benchmark so that minor improvements to the run-time can be visible in a benchmark result. What I need is a number that tells me whether I have reduced the "variability" of the benchmark.
### My own solution
I ran the benchmark 1000 times, then took the sum of the absolute differences between each subbenchmark's mean and its actual run-times. In pseudo-code:
```
# results[s] holds the recorded run-times for subbenchmark s
v = 0
for s in subbenchmarks:
    x = mean(results[s])              # mean over all iterations of s
    for r in results[s]:
        v += absolute_value(r - x)    # accumulate absolute deviations from the mean
```
I'm sure this isn't statistically valid (having asked someone), but what is a "correct" way of measuring this "variability" so that I can reduce it?
| Determining the "variability" of a benchmark | CC BY-SA 2.5 | null | 2010-08-19T11:33:01.067 | 2022-11-23T10:04:33.483 | 2017-12-14T08:55:25.773 | 1352 | 1001 | [
"variance"
] |
1896 | 2 | null | 1895 | 3 | null | I guess that your method is the one described [here](http://en.wikipedia.org/wiki/Absolute_deviation), and it's apparently valid. You could also have used the [standard deviation](http://en.wikipedia.org/wiki/Standard_deviation) as a measure of variability (which according to the article, it's not as [robust](http://en.wikipedia.org/wiki/Robust_statistics) as your absolute deviation)
Check out [this](http://en.wikipedia.org/wiki/Statistical_dispersion), for other measures of statistical dispersion.
| null | CC BY-SA 2.5 | null | 2010-08-19T11:58:09.057 | 2010-08-19T11:58:09.057 | null | null | 339 | null |
1897 | 2 | null | 1895 | 4 | null | As gd047 mentioned, the standard way of measuring variability is to use the [variance](http://en.wikipedia.org/wiki/Variance). So your pseudo-code will be:
```
vnew = vector of length(subbenchmarks)
for i, s in enumerate(subbenchmarks):
    vnew[i] = variance(results[s])    # per-subbenchmark variance across runs
```
Now the problem is that, even if you don't change your code, `vnew` will be different for each run - there is noise. To determine if a change is significant, we need to perform a [hypothesis test](http://en.wikipedia.org/wiki/Statistical_hypothesis_testing), i.e. can the change be explained as random variation, or is it likely that something has changed? A quick and dirty rule would be:
\begin{equation}
Y_i = \sqrt{n/2} \left(\frac{vnew_i}{vold_i} -1\right) \sim N(0,1)
\end{equation}
This means any value of $Y_i < -1.96$ (at a 5% significance level) can be considered significant, i.e. an improvement. However, I would probably use a stricter cut-off such as -3 or -4. This would test for improvement in individual benchmarks.
If you want to combine all your benchmarks into a single test, then let
\begin{equation}
\bar Y = \frac{1}{n} \sum Y_i
\end{equation}
So
\begin{equation}
\sqrt{n} \bar Y \sim N(0, 1)
\end{equation}
Hence, an appropriate test would be to consider values of $\sqrt{n} \bar Y < -1.96$ as indicating an improvement.
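A small R sketch of this rule (all numbers hypothetical; `v_old` and `v_new` stand for the per-subbenchmark variance estimates, each computed from `n` runs of the old and new code, and the combined statistic uses the number of subbenchmarks):
```r
n     <- 1000                              # runs per subbenchmark
v_old <- c(4.1, 2.3, 9.8, 1.2)             # hypothetical variances, old code
v_new <- c(3.0, 2.2, 7.5, 1.1)             # hypothetical variances, new code

Y <- sqrt(n / 2) * (v_new / v_old - 1)     # approximately N(0, 1) if nothing changed
Y                                          # values below about -3 suggest a per-benchmark improvement
sqrt(length(Y)) * mean(Y)                  # combined statistic; below -1.96 suggests overall improvement
```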
---
Edit
If the benchmarks aren't Normal, then I would try working with log(benchmarks). It also depends on what you want to do. I read your question as "You would like a good rule of thumb". In this case, taking logs is probably OK.
---
- Further details of the mathematical reasoning are found at Section 3.2 of this document.
- I've made an approximation by assuming that v_old represents the true underlying variance.
| null | CC BY-SA 4.0 | null | 2010-08-19T12:29:30.023 | 2022-11-23T10:04:33.483 | 2022-11-23T10:04:33.483 | 362671 | 8 | null |
1898 | 2 | null | 1883 | 5 | null | There was a great discussion on metaoptimize called "[Most Influential Ideas 1995 - 2005](http://metaoptimize.com/qa/questions/867/most-influential-ideas-1995-2005)"
It holds a great collection of ideas.
The one I mentioned there, and will repeat here, is the "revolution" in the concept of multiple comparisons, specifically the shift from using FWE to FDR methods for testing very many hypotheses (as in microarray or fMRI studies and so on).
Here is one of the first articles that introduced this notion to the scientific community: [Benjamini, Yoav; Hochberg, Yosef (1995). "Controlling the false discovery rate: a practical and powerful approach to multiple testing". Journal of the Royal Statistical Society](http://www.math.tau.ac.il/~ybenja/MyPapers/benjamini_hochberg1995.pdf)
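As a minimal R illustration of the practical difference between the two error-rate philosophies (entirely hypothetical p-values; `p.adjust` is base R):
```r
set.seed(7)
p <- c(runif(95), runif(5, 0, 0.001))            # 95 null p-values plus 5 genuine signals
sum(p.adjust(p, method = "bonferroni") < 0.05)   # FWE control: very conservative
sum(p.adjust(p, method = "BH") < 0.05)           # FDR control (Benjamini-Hochberg): more discoveries
```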
| null | CC BY-SA 2.5 | null | 2010-08-19T12:45:55.700 | 2010-08-19T12:45:55.700 | null | null | 253 | null |
1899 | 2 | null | 1883 | 4 | null | Cox proportional hazards survival analysis:
[http://en.wikipedia.org/wiki/Cox_proportional_hazards_model](http://en.wikipedia.org/wiki/Cox_proportional_hazards_model)
| null | CC BY-SA 2.5 | null | 2010-08-19T12:50:42.303 | 2010-08-19T12:50:42.303 | null | null | 521 | null |
1900 | 2 | null | 1883 | 9 | null | John Tukey's truly strange idea: exploratory data analysis.
[http://en.wikipedia.org/wiki/Exploratory_data_analysis](http://en.wikipedia.org/wiki/Exploratory_data_analysis)
| null | CC BY-SA 2.5 | null | 2010-08-19T13:01:46.790 | 2010-08-19T13:01:46.790 | null | null | 521 | null |
1901 | 2 | null | 1881 | 11 | null | A standard method to analyze this problem in two or more dimensions is Ripley's (cross) K function, but there's no reason not to use it in one dimension, too. (A Google search does a good job of digging up references.) Essentially, it plots the CDF of all distances between points in the two realizations rather than a histogram approximation to the PDF of those distances. (A variant, the L function, plots the difference between K and the null distribution for two uniform uncorrelated processes.) This neatly sidesteps most of the issues you are confronting with the need to choose bins, to smooth, etc. Confidence bands for K are typically created through simulation. This is easy to do in R. Many spatial stats packages for R can be used directly or readily adapted to this 1D case. Roger Bivand's [overview page](http://cran.r-project.org/web/views/Spatial.html) on CRAN lists these packages: refer to the section on "Point Pattern Analysis."
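This is not the full Ripley K machinery, but a minimal 1D sketch of the same idea: compare the empirical CDF of cross-distances between the two realizations against a pointwise envelope simulated from uniform, uncorrelated processes (all data here are hypothetical).
```r
set.seed(3)
x <- runif(30); y <- runif(30)                            # two hypothetical point patterns on [0, 1]
cross_dist <- function(a, b) as.vector(abs(outer(a, b, "-")))
obs <- ecdf(cross_dist(x, y))                             # observed CDF of cross-distances
r <- seq(0, 1, by = 0.01)
sims <- replicate(199, ecdf(cross_dist(runif(30), runif(30)))(r))
band <- apply(sims, 1, quantile, probs = c(0.025, 0.975)) # pointwise simulation envelope
plot(r, obs(r), type = "l", ylab = "CDF of cross-distances")
lines(r, band[1, ], lty = 2); lines(r, band[2, ], lty = 2)
```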
| null | CC BY-SA 2.5 | null | 2010-08-19T13:07:22.083 | 2010-08-19T13:07:22.083 | null | null | 919 | null |
1902 | 2 | null | 1883 | 4 | null | The Box-Jenkins approach to time-series modelling: ARIMA models etc.
[http://en.wikipedia.org/wiki/Box-Jenkins](http://en.wikipedia.org/wiki/Box-Jenkins)
| null | CC BY-SA 2.5 | null | 2010-08-19T13:09:50.597 | 2010-08-19T13:09:50.597 | null | null | 521 | null |
1903 | 2 | null | 1875 | 6 | null | In effect you are thinking of a model in which the true chance of rain, p, is a function of the predicted chance q: p = p(q). Each time a prediction is made, you observe one realization of a Bernoulli variate having probability p(q) of success. This is a classic logistic regression setup if you are willing to model the true chance as a linear combination of basis functions f1, f2, ..., fk; that is, the model says
>
Logit(p) = b0 + b1 f1(q) + b2 f2(q) + ... + bk fk(q) + e
with iid errors e. If you're agnostic about the form of the relationship (although if the weatherman is any good p(q) - q should be reasonably small), consider using a set of splines for the basis. The output, as usual, consists of estimates of the coefficients and an estimate of the variance of e. Given any future prediction q, just plug the value into the model with the estimated coefficients to obtain an answer to your question (and use the variance of e to construct a prediction interval around that answer if you like).
This framework is flexible enough to include other factors, such as the possibility of changes in the quality of predictions over time. It also lets you test hypotheses, such as whether p = q (which is what the weatherman implicitly claims).
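A minimal R sketch of this setup (hypothetical data; natural splines from the splines package serve as the basis functions):
```r
library(splines)
set.seed(10)
q <- runif(500)                                      # hypothetical forecast probabilities
rain <- rbinom(500, 1, plogis(qlogis(q) + 0.3))      # true chance differs slightly from the forecast
fit <- glm(rain ~ ns(q, df = 4), family = binomial)  # flexible p(q) via a spline basis
predict(fit, newdata = data.frame(q = 0.7), type = "response")  # estimated true chance for a 70% forecast
```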
| null | CC BY-SA 2.5 | null | 2010-08-19T13:21:56.153 | 2010-08-19T13:21:56.153 | null | null | 919 | null |
1904 | 1 | null | null | 14 | 1002 | What are the most significant annual Statistics conferences?
Rules:
- One conference per answer
- Include a link to the conference
| Statistics conferences? | CC BY-SA 2.5 | null | 2010-08-19T13:25:53.413 | 2022-12-15T06:46:10.637 | null | null | 5 | [
"conferences"
] |
1905 | 2 | null | 1904 | 3 | null | Shameless plug: [R/Finance](http://www.RinFinance.com) which relevant for its intersection of domain-specifics as well as tools, and so far well received by participants of the 2009 and 2010 conference. .
Disclaimer: I am one of the organizers.
| null | CC BY-SA 2.5 | null | 2010-08-19T13:30:01.783 | 2010-08-19T13:30:01.783 | null | null | 334 | null |
1906 | 1 | null | null | 8 | 597 | What are the most significant annual Data Mining conferences?
Rules:
- One conference per answer
- Include a link to the conference
| Data mining conferences? | CC BY-SA 2.5 | null | 2010-08-19T13:37:35.557 | 2022-12-04T11:14:04.433 | 2011-11-17T15:30:18.353 | 6976 | 5 | [
"data-mining",
"conferences"
] |
1907 | 2 | null | 1906 | 7 | null | [KDD](https://web.archive.org/web/20100701205656/http://www.sigkdd.org/conferences.php) (ACM Special Interest Group on Knowledge Discovery and Data Mining)
- KDD 2010
| null | CC BY-SA 4.0 | null | 2010-08-19T13:40:35.720 | 2022-12-04T11:06:28.573 | 2022-12-04T11:06:28.573 | 362671 | 5 | null |
1908 | 1 | null | null | 6 | 1809 | What are the most significant annual Machine Learning conferences?
Rules:
- One conference per answer
- Include a link to the conference
| Machine Learning conferences? | CC BY-SA 2.5 | null | 2010-08-19T13:45:36.037 | 2015-10-28T19:34:49.797 | 2010-08-23T15:27:33.560 | 877 | 5 | [
"machine-learning",
"conferences"
] |
1909 | 2 | null | 1904 | 7 | null | UseR!
- List of previous and upcoming R conferences on r-project
Related Links:
- 2011: University of Warwick, Coventry, UK
- Videos of some keynote speakers from 2010
| null | CC BY-SA 4.0 | null | 2010-08-19T13:46:41.337 | 2022-12-15T05:22:59.057 | 2022-12-15T05:22:59.057 | 362671 | 183 | null |
1910 | 2 | null | 1908 | 7 | null | [ICML](http://en.wikipedia.org/wiki/ICML) (International Conference on Machine Learning)
- ICML 2010
| null | CC BY-SA 2.5 | null | 2010-08-19T13:47:24.127 | 2010-08-19T13:47:24.127 | null | null | 5 | null |
1911 | 2 | null | 1883 | 10 | null | In 1960 most people doing statistics were calculating with a four-function manual calculator or a slide rule or by hand; mainframe computers were just beginning to run some programs in Algol and Fortran; graphical output devices were rare and crude. Because of these limitations, Bayesian analysis was considered formidably difficult due to the calculations required. Databases were managed on punch cards and computer tape drives limited to a few megabytes. Statistical education focused initially on learning formulas for t-testing and ANOVA. Statistical practice usually did not go beyond such routine hypothesis testing (although some brilliant minds had just begun to exploit computers for deeper analysis, as exemplified by Mosteller & Wallace's book on the Federalist papers, for instance).
I recounted this well-known history as a reminder that all of statistics has undergone a revolution due to the rise and spread of computing power during this last half century, a revolution that has made possible almost every other innovation in statistics during that time (with the notable exception of Tukey's pencil-and-paper EDA methods, as Thylacoleo has already observed).
| null | CC BY-SA 2.5 | null | 2010-08-19T14:13:27.077 | 2010-08-19T14:13:27.077 | null | null | 919 | null |
1912 | 1 | 1959 | null | 4 | 374 | [Gary King](http://gking.harvard.edu/) made the [following statement on Twitter](http://twitter.com/kinggary/status/21513150698):
>
scale invariance sounds cool but is
usually statisticians shirking
responsibility & losing power by
neglecting subject matter info
What is an example of this phenomena, where [scale invariance](http://www.statistics.com/resources/glossary/s/scaleinv.php) causes a loss of power?
Edit:
[Gary responded](http://twitter.com/kinggary/status/21624260888):
>
not [statistical] power, but scale invariance
loses the power that can be extracted
from knowledge of the substance
Now this makes more sense. How can scale invariance cause a loss of explanatory power from the resulting analysis?
| Why can scale invariance cause a loss of explanatory power? | CC BY-SA 2.5 | null | 2010-08-19T14:15:28.693 | 2010-08-27T15:21:05.230 | 2010-08-20T15:19:17.757 | 5 | 5 | [
"scale-invariance"
] |
1913 | 2 | null | 1904 | 5 | null | In terms of overall breadth, I would say that the ASA/IMS Joint Statistical Meetings are the most significant. Next year, the statisticians are taking their talents to South Beach...or [Miami Beach](http://www.amstat.org/meetings/jsm/2011/index.cfm) is more correct. I just couldn't help to use that line from Lebron James' infamous press conference. Having said that, I prefer smaller conferences like the UseR! conferences, ICORS (robust statistics), etc.
| null | CC BY-SA 2.5 | null | 2010-08-19T14:38:51.867 | 2010-08-19T14:38:51.867 | null | null | null | null |
1914 | 1 | null | null | 5 | 410 | I doing spatial bayesian data analysis, I am assuming a no-nugget exponential covariance. I have tried a variety of priors for the sills and range parameters (gamma, inverse gamma etc.) , unfortunately the convergence diagonstics are typically horrible.
I am wondering how to figure out the poor mixing I observe, is there something I can do to make the MCMC chain behave better?
| Choice for priors for exponential spatial covariance | CC BY-SA 2.5 | null | 2010-08-19T15:13:49.887 | 2010-09-18T22:00:39.013 | 2010-09-18T22:00:39.013 | 930 | 1004 | [
"bayesian",
"markov-chain-montecarlo",
"spatial"
] |
1915 | 1 | 1917 | null | 8 | 3855 | I am interested in running Newman's [modularity clustering](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1482622/) algorithm on a large graph. If you can point me to a library (or R package, etc) that implements it I would be most grateful.
| Newman's modularity clustering for graphs | CC BY-SA 3.0 | null | 2010-08-19T16:09:23.430 | 2016-05-17T02:18:02.557 | 2016-05-17T02:18:02.557 | 114327 | 1007 | [
"clustering",
"networks",
"partitioning",
"igraph",
"modularity"
] |
1916 | 2 | null | 1914 | 3 | null | Diggle and Ribeiro discuss this in their book ("[Model-based Geostatistics](http://rads.stackoverflow.com/amzn/click/0387329072)"): see section 5.4.2. They quote some research suggesting that re-parameterization might help a little. For an exponential model (a Matern model with kappa = 1/2) this research suggests using the equivalent of log(sill/range) and log(range). Diggle and Ribeiro themselves recommend a profile likelihood method to investigate the log-likelihood surface. Their software is implemented in the R package [geoRglm](http://gbi.agrsci.dk/~ofch/geoRglm/).
Have you looked at an experimental variogram to check that a zero nugget and an exponential shape are appropriate?
| null | CC BY-SA 2.5 | null | 2010-08-19T16:31:49.917 | 2010-08-20T08:27:30.700 | 2010-08-20T08:27:30.700 | 8 | 919 | null |
1917 | 2 | null | 1915 | 6 | null | The [igraph](http://cran.r-project.org/web/packages/igraph/index.html) library implements some algorithms for community structure based on Newman's optimization of modularity. You can consult the [reference manual](http://cran.r-project.org/web/packages/igraph/igraph.pdf) for details and citations.
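A minimal sketch, assuming a recent igraph release (function names have changed over the years, so check your installed version's manual):
```r
library(igraph)
set.seed(1)
g <- sample_gnp(200, 0.04)        # hypothetical random graph standing in for your network
cl <- cluster_leading_eigen(g)    # Newman's leading-eigenvector modularity method
modularity(cl)                    # modularity score of the detected partition
head(membership(cl))              # community labels for the first few vertices
```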
| null | CC BY-SA 2.5 | null | 2010-08-19T16:59:14.233 | 2010-08-19T16:59:14.233 | null | null | 251 | null |
1918 | 2 | null | 1866 | 4 | null | I'm not sure you need simulation for a simple regression model. For example, see the paper [Portable Power](http://www.jstor.org/stable/1267939), by Robert E. Wheeler (Technometrics , May, 1974, Vol. 16, No. 2). For more complex models, specifically mixed effects, the [pamm](http://cran.r-project.org/web/packages/pamm/index.html) package in R performs power analyses through simulations. Also see Todd Jobe's [post](http://toddjobe.blogspot.de/2009/09/power-analysis-for-mixed-effect-models.html) which has R code for simulation.
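For the simulation route, here is a minimal R sketch for a simple regression (the effect size, error SD, and sample size are hypothetical placeholders to be replaced by your design values):
```r
set.seed(123)
power_sim <- function(n, slope, sigma = 1, n_sim = 2000) {
  mean(replicate(n_sim, {
    x <- rnorm(n)
    y <- slope * x + rnorm(n, sd = sigma)
    summary(lm(y ~ x))$coefficients["x", "Pr(>|t|)"] < 0.05   # is the slope significant?
  }))
}
power_sim(n = 50, slope = 0.4)   # estimated power for n = 50 and a true slope of 0.4
```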
| null | CC BY-SA 4.0 | null | 2010-08-19T17:13:01.420 | 2021-08-19T18:16:33.700 | 2021-08-19T18:16:33.700 | 11887 | 251 | null |
1919 | 2 | null | 1904 | 0 | null | Not a "statistics" conference in the technical sense, but Predictive Analytics World is a case study conference on how companies are using predictive and other analytics in theis businesses.
[Predictive Analytics World](http://www.predictiveanalyticsworld.com/)
| null | CC BY-SA 2.5 | null | 2010-08-19T17:14:39.123 | 2010-08-19T17:14:39.123 | null | null | 11 | null |
1920 | 2 | null | 1904 | 0 | null | [ACM SIGKDD 2010](http://www.kdd.org/kdd2010/index.shtml)
[KDD 2011 in San Diego](http://kdd.org/kdd/2011/)
| null | CC BY-SA 2.5 | null | 2010-08-19T17:19:52.763 | 2010-08-19T17:19:52.763 | null | null | 11 | null |
1921 | 2 | null | 1863 | 4 | null | I just want to emphasize the importance of not analyzing accuracies on the proportion scale. While lamentably pervasive across a number of disciplines, this practice can yield frankly incorrect conclusions. See: [http://dx.doi.org/10.1016/j.jml.2007.11.004](http://dx.doi.org/10.1016/j.jml.2007.11.004)
As John Christie notes, the best way to approach analysis of accuracy data is a mixed-effects model with a binomial family (logit link) and participants as a random effect, e.g.:
```
#R code
library(lme4)
fit = glmer(   # current versions of lme4 fit binomial mixed models with glmer()
    formula = acc ~ my_IV + (1|participant)
    , family = binomial
, data = my_data
)
print(fit)
```
Note that "my_data" should be the raw, trial-by-trial data such that "acc" is either 1 for accurate trials or 0 for inaccurate trials. That is, data should not be aggregated to proportions before analysis.
| null | CC BY-SA 2.5 | null | 2010-08-19T17:24:19.183 | 2010-08-19T17:43:34.613 | 2010-08-19T17:43:34.613 | 364 | 364 | null |
1922 | 2 | null | 1862 | 9 | null | There's a nice paper on visualization techniques you might use by Michael Friendly:
- Visualizing Categorical Data: Data, Stories, and Pictures
(Actually, there's a whole [book](http://books.google.com/books?id=eG0phz62f1cC&lpg=PP1&dq=michael%20friendly%20visualizing%20categorical%20data&pg=PP1#v=onepage&q&f=false) devoted to this by the same author.) The [vcd](http://cran.r-project.org/web/packages/vcd/index.html) package in R implements many of these techniques.
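For example, a quick sketch using vcd's mosaic display on one of R's built-in contingency tables:
```r
library(vcd)
# HairEyeColor is a built-in 3-way contingency table in R
mosaic(~ Hair + Eye, data = HairEyeColor, shade = TRUE)  # shading highlights residuals from independence
```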
| null | CC BY-SA 2.5 | null | 2010-08-19T17:28:18.883 | 2010-08-19T17:28:18.883 | null | null | 251 | null |
1923 | 1 | 2509 | null | 6 | 513 | Suppose instead of maximizing the likelihood I maximize some other function g. Like the likelihood, this function decomposes over the x's (i.e., g({x1,x2}) = g({x1})g({x2})), and the "maximum-g" estimator is consistent. How do I compute the asymptotic variance of this estimator?
Update 8/24/10: Percy Liang goes through derivation of asymptotic variance in a similar setting in [An asymptotic analysis of generative, discriminative, and pseudolikelihood estimators.](https://web.archive.org/web/20151009204218/http://www.eecs.berkeley.edu/~pliang/papers/asymptotics-icml2008.pdf)
Update 9/14/10: Most [useful theorem](http://yaroslavvb.com/upload/efficiency-vandervaart.djvu) seems to be from Van der Vaart's "Asymptotic Statistics"
Under some general regularity conditions, the distribution of this estimator approaches a normal distribution centered around its expected value $\theta_0$ with covariance matrix
$$\frac{\ddot{g}(\theta_0)^{-1} E[\dot{g}\dot{g}^T] \ddot{g}(\theta_0)^{-1}}{n}$$
where $\ddot{g}$ is the matrix of second derivatives, $\dot{g}$ is the gradient, and $n$ is the number of samples.
| How to compute efficiency? | CC BY-SA 4.0 | null | 2010-08-19T17:56:58.550 | 2022-12-04T06:16:41.183 | 2022-12-04T06:16:41.183 | 362671 | 511 | [
"estimation",
"efficiency",
"asymptotics"
] |
1924 | 2 | null | 1923 | 2 | null | The consistency and asymptotic normality of the maximum likelihood estimator is demonstrated using some regularity conditions on the likelihood function. The wiki link on [consistency](http://en.wikipedia.org/wiki/Maximum_likelihood#Consistency) and [asymptotic normality](http://en.wikipedia.org/wiki/Maximum_likelihood#Asymptotic_normality) has the conditions necessary to prove these properties. The conditions at the wiki may be stronger than what you need as they are used to prove asymptotic normality whereas you simply want to compute the variance of the estimator.
I am guessing that if your function satisfies the same conditions then the proof will carry over to your function as well. If not then we need to know one or both of the following: (a) the specific condition that $g(.)$ does not satisfy from the list at the wiki and (b) the specifics of $g(.)$ to give a better answer to your question.
| null | CC BY-SA 2.5 | null | 2010-08-19T18:29:24.990 | 2010-08-19T18:29:24.990 | null | null | null | null |
1925 | 2 | null | 203 | 0 | null | A proportional odds model is better than a t-test for Likert item scales.
| null | CC BY-SA 2.5 | null | 2010-08-19T18:30:39.777 | 2010-08-19T18:30:39.777 | null | null | 419 | null |
1926 | 2 | null | 1904 | 5 | null | For biostatistics the largest US conferences are the meetings of the local sections of the International Biometrics Society (IBS):
- ENAR for the Eastern region
- WNAR for the Western region
Of these ENAR is by far larger.
| null | CC BY-SA 4.0 | null | 2010-08-19T18:43:16.163 | 2022-12-15T05:26:02.587 | 2022-12-15T05:26:02.587 | 362671 | 279 | null |
1927 | 1 | null | null | 38 | 6757 | If so, what?
If not, why not?
For a sample on the line, the median minimizes the total absolute deviation. It would seem natural to extend the definition to R2, etc., but I've never seen it. But then, I've been out in left field for a long time.
| Is there an accepted definition for the median of a sample on the plane, or higher ordered spaces? | CC BY-SA 2.5 | null | 2010-08-19T19:36:01.337 | 2022-08-06T19:09:10.103 | 2018-08-01T20:42:56.327 | 11887 | 1011 | [
"multivariate-analysis",
"spatial",
"median"
] |
1928 | 2 | null | 1927 | 21 | null | I'm not sure there is one accepted definition for a multivariate median. The one I'm familiar with is [Oja's median point](http://cgm.cs.mcgill.ca/~athens/Geometric-Estimators/oja.html), which minimizes the sum of volumes of simplices formed over subsets of points. (See the link for a technical definition.)
Update: The site referenced for the Oja definition above also has a nice paper covering a number of definitions of a multivariate median:
- Geometric Measures of Data Depth
| null | CC BY-SA 2.5 | null | 2010-08-19T19:48:16.567 | 2010-08-19T19:58:31.783 | 2010-08-19T19:58:31.783 | 251 | 251 | null |
1929 | 2 | null | 1927 | 1 | null | I do not know if any such definition exists but I will try and extend the [standard definition of the median](http://en.wikipedia.org/wiki/Median#An_optimality_property) to $R^2$. I will use the following notation:
$X$, $Y$: the random variables associated with the two dimensions.
$m_x$, $m_y$: the corresponding medians.
$f(x,y)$: the joint pdf for our random variables
To extend the definition of the median to $R^2$, we choose $m_x$ and $m_y$ to minimize the following:
$E(|(x,y) - (m_x,m_y)|)$
The problem now is that we need a definition for what we mean by:
$|(x,y) - (m_x,m_y)|$
The above is in a sense a distance metric and several possible candidate definitions are possible.
[Euclidean Metric](http://en.wikipedia.org/wiki/Euclidean_metric)
$|(x,y) - (m_x,m_y)| = \sqrt{(x-m_x)^2 + (y-m_y)^2}$
Computing the median under the Euclidean metric will require computing the expectation of the above with respect to the joint density $f(x,y)$.
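For a finite sample, the estimator under the Euclidean metric corresponds to the geometric (spatial) median, which can be approximated with the classical Weiszfeld iteration; a rough R sketch on hypothetical data:
```r
set.seed(2)
pts <- matrix(rnorm(200), ncol = 2)                 # hypothetical sample of 100 points in the plane
m <- colMeans(pts)                                  # start from the centroid
for (iter in 1:100) {
  d <- sqrt(rowSums((pts - matrix(m, nrow(pts), 2, byrow = TRUE))^2))
  w <- 1 / pmax(d, 1e-12)                           # inverse-distance weights (guard against zero distance)
  m <- colSums(pts * w) / sum(w)                    # weighted mean = one Weiszfeld update
}
m                                                   # approximate geometric median
```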
[Taxicab Metric](http://en.wikipedia.org/wiki/Taxicab_geometry)
$|(x,y) - (m_x,m_y)| = |x-m_x| + |y-m_y|$
Computing the median in the case of the taxicab metric involves computing the median of $X$ and $Y$ separately as the metric is separable in $x$ and $y$.
| null | CC BY-SA 2.5 | null | 2010-08-19T19:53:51.467 | 2010-08-19T19:53:51.467 | null | null | null | null |
1930 | 2 | null | 1850 | 25 | null | Both Cohen's d and Hedges' g pool variances on the assumption of equal population variances, but g pools using n - 1 for each sample instead of n, which provides a better estimate, especially the smaller the sample sizes. Both d and g are somewhat positively biased, but only negligibly for moderate or larger sample sizes. The bias is reduced using g*. The d by Glass does not assume equal variances, so it uses the sd of a control group or baseline comparison group as the standardizer for the difference between the two means.
These effect sizes and Cliff's and other nonparametric effect sizes are discussed in detail in my book:
Grissom, R. J., & Kim, J, J. (2005). Effect sizes for research: A broad practical approach. Mahwah, NJ: Erlbaum.
| null | CC BY-SA 2.5 | null | 2010-08-19T20:52:10.220 | 2010-08-19T20:52:10.220 | null | null | 1013 | null |
1931 | 2 | null | 1927 | 12 | null | There are distinct ways to generalize the concept of median to higher dimensions. One not yet mentioned, but which was proposed long ago, is to construct a convex hull, peel it away, and iterate for as long as you can: what's left in the last hull is a set of points that are all candidates to be "medians."
["Head-banging"](http://surveillance.cancer.gov/headbang) is another more recent attempt (c. 1980) to construct a robust center to a 2D point cloud. (The link is to documentation and software available at the US National Cancer Institute.)
The principal reason why there are multiple distinct generalizations and no one obvious solution is that R1 can be ordered but R2, R3, ... cannot be.
| null | CC BY-SA 3.0 | null | 2010-08-19T20:58:59.157 | 2015-02-11T14:37:28.947 | 2015-02-11T14:37:28.947 | 919 | 919 | null |
1932 | 2 | null | 1908 | 7 | null | [NIPS (Neural Information Processing Systems)](http://nips.cc). It's actually an intersection of machine learning, and application areas such as speech/language, vision, neuro-science, and other related areas.
| null | CC BY-SA 2.5 | null | 2010-08-19T21:44:04.807 | 2010-08-19T21:44:04.807 | null | null | 881 | null |
1933 | 2 | null | 1826 | 3 | null | Since you don't have access to the test data at the time of training, and you want your model to do well on the unseen test data, you "pretend" that you have access to some test data by repeatedly subsampling a small part of your training data, hold out this set while training the model, and then treating the held out set as a proxy to the test data (and choose model parameters that give best performance on the held out data). You hope that by randomly sampling various subsets from the training data, you might make them look like the test data (in the average behavior sense), and therefore the learned model parameters will be good for the test data as well (i.e., your model generalizes well for unseen data).
| null | CC BY-SA 2.5 | null | 2010-08-19T21:50:48.220 | 2010-08-19T23:49:54.203 | 2010-08-19T23:49:54.203 | 881 | 881 | null |
1934 | 2 | null | 1228 | 3 | null | Thomas Ryan ("Statistical Methods for Quality Improvement", Wiley, 1989) describes several procedures. He tends to try to reduce all control charting to the Normal case, so his procedures are not as creative as they could be, but he claims they work pretty well. One is to treat the values as Binomial data and use the ArcSin transformation, then run standard CUSUM charts. Another is to view the values as Poisson data and use the square root transformation, then again run a CUSUM chart. For these approaches, which are intended for process quality control, you're supposed to know the number of potentially exposed individuals during each period. If you don't, you probably have to go with the Poisson model. Given that the infections are rare, the square root transformation sets your upper control limit a tiny bit above (u/2)^2 where typically u = 3 (corresponding to the usual 3-SD UCL in a Normal chart), whence any count of Ceiling((3/2)^2) = 3 or greater would trigger an out-of-control condition.
One wonders whether control charting is the correct conceptual model for your problem, though. You're not really running any kind of quality control process here: you probably know, on scientific grounds, when the infection rate is alarming. You might know, as a hypothetical example, that fewer than ten infections over a week-long period is rarely a harbinger of an outbreak. Why not set your upper limit on this kind of basis rather than employing an almost useless statistical limit?
| null | CC BY-SA 2.5 | null | 2010-08-19T22:08:44.573 | 2010-08-19T22:08:44.573 | null | null | 919 | null |
1935 | 1 | null | null | 9 | 2088 | Despite several attempts at reading about bootstrapping, I seem to always hit a brick wall. I wonder if anyone can give a reasonably non-technical definition of bootstrapping?
I know it is not possible in this forum to provide enough detail to enable me to fully understand it, but a gentle push in the right direction with the main goal and mechanism of bootstrapping would be much appreciated! Thanks.
| Whither bootstrapping - can someone provide a simple explanation to get me started? | CC BY-SA 2.5 | null | 2010-08-20T00:10:33.740 | 2017-06-26T12:19:21.890 | 2017-06-26T12:19:21.890 | 11887 | 561 | [
"nonparametric",
"bootstrap",
"intuition"
] |
1936 | 2 | null | 1908 | 0 | null | One of the only machine learning conferences for those in Australia and New Zealand is:
- 23rd Australasian Joint Conference on Artificial Intelligence
It's held in Adelaide this year.
| null | CC BY-SA 2.5 | null | 2010-08-20T00:16:09.237 | 2010-08-20T00:16:09.237 | null | null | 530 | null |
1937 | 2 | null | 1935 | 4 | null | The wiki on [bootstrapping](http://en.wikipedia.org/wiki/Bootstrapping_%28statistics%29) gives the following description:
>
Bootstrapping allows one to gather many alternative versions of the single statistic that would ordinarily be calculated from one sample. For example, assume we are interested in the height of people worldwide. As we cannot measure all the population, we sample only a small part of it. From that sample only one value of a statistic can be obtained, i.e one mean, or one standard deviation etc., and hence we don't see how much that statistic varies. When using bootstrapping, we randomly extract a new sample of n heights out of the N sampled data, where each person can be selected at most t times. By doing this several times, we create a large number of datasets that we might have seen and compute the statistic for each of these datasets. Thus we get an estimate of the distribution of the statistic. The key to the strategy is to create alternative versions of data that "we might have seen".
I will provide more detail if you can clarify what part of the above description you do not understand.
| null | CC BY-SA 2.5 | null | 2010-08-20T00:20:49.803 | 2010-08-20T00:20:49.803 | null | null | null | null |
1938 | 2 | null | 1904 | 2 | null | The main regular conference in Australia is the "Australian Statistics Conference", held every second year. The next one is [ASC 2010](http://www.promaco.com.au/2010/asc/), to be held in Western Australia in December.
| null | CC BY-SA 2.5 | null | 2010-08-20T00:23:03.980 | 2010-08-20T00:23:03.980 | null | null | 159 | null |
1939 | 2 | null | 1935 | 8 | null | The Wikipedia entry on Bootstrapping is actually very good:
[http://en.wikipedia.org/wiki/Bootstrapping_%28statistics%29](http://en.wikipedia.org/wiki/Bootstrapping_%28statistics%29)
The most common reason bootstrapping is applied is when the form of the underlying distribution from which a sample is taken is unknown. Traditionally statisticians assume a normal distribution (for very good reasons related to the central limit theorem), but statistics (such as the standard deviation, confidence intervals, power calculations etc) estimated via normal distribution theory are only strictly valid if the underlying population distribution is normal.
By repeatedly re-sampling the sample itself, bootstrapping enables estimates that are distribution independent. Traditionally each "resample" of the original sample randomly selects the same number of observations as in the original sample. However these are selected with replacement. If the sample has N observations, each bootstrap resample will have N observations, with many of the original sample repeated and many excluded.
The parameter of interest (eg. odds ratio etc) can then be estimated from each bootstrapped sample. Repeating the bootstrap say 1000 times allows an estimate of the "median" and 95% confidence interval on the statistic (eg odds ratio) by selecting the 2.5th, 50th and 97.5th percentile.
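A minimal R sketch of the percentile approach just described (hypothetical skewed data, with the median as the statistic of interest):
```r
set.seed(5)
x <- rexp(50)                                                    # hypothetical skewed sample
boot_med <- replicate(1000, median(sample(x, replace = TRUE)))   # resample with replacement, recompute median
quantile(boot_med, c(0.025, 0.5, 0.975))                         # percentile 95% interval and bootstrap median
```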
| null | CC BY-SA 2.5 | null | 2010-08-20T00:35:28.433 | 2010-08-20T00:35:28.433 | null | null | 521 | null |
1940 | 2 | null | 1935 | 8 | null | The American Scientist recently had a nice article by Cosma Shalizi on [the bootstrap](http://www.americanscientist.org/issues/pub/2010/3/the-bootstrap/1) which is fairly easy reading and gives you the essentials to grasp the concept.
| null | CC BY-SA 2.5 | null | 2010-08-20T01:08:12.280 | 2010-08-20T01:08:12.280 | null | null | 251 | null |
1941 | 2 | null | 1781 | 5 | null | I have not fully internalized the issue of matrix interference, but here is one approach. Let:
$Y$ be a vector that represents the concentration of all the target compounds in the undiluted sample.
$Z$ be the corresponding vector in the diluted sample.
$d$ be the dilution factor i.e., the sample is diluted $d$:1.
Our model is:
$Y \sim N(\mu,\Sigma)$
$Z = \frac{Y}{d} + \epsilon$
where $\epsilon \sim N(0,\sigma^2\ I)$ represents the error due to dilution errors.
Therefore, it follows that:
$Z \sim N(\frac{\mu}{d}, \Sigma + \sigma^2\ I)$
Denote the above distribution of $Z$ by $f_Z(.)$.
Let $O$ be the observed concentrations and $\tau$ represent the test instrument's threshold below which it cannot detect a compound. Then, for the $i^{th}$ compound we have:
$O_i = Z_i I(Z_i > \tau) + 0 I(Z_i \le \tau)$
Without loss of generality let the first $k$ compounds be such that they are below the threshold. Then the likelihood function can be written as:
$L(O_1, \dots, O_k, O_{k+1}, \dots, O_n \mid -) = \left[\prod_{i=1}^{k}{\Pr(Z_i \le \tau)}\right] \left[\prod_{i=k+1}^{n}{f(O_i \mid -)}\right]$
where
$f(O_i |-) = \int_{j\neq i}{f_Z(O_i|-) I(O_i > \tau)}$
Estimation is then a matter of using either maximum likelihood or Bayesian ideas. I am not sure how tractable the above is, but I hope it gives you some ideas.
| null | CC BY-SA 2.5 | null | 2010-08-20T01:47:20.863 | 2010-08-20T01:47:20.863 | null | null | null | null |