Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2477 | 2 | null | 125 | 12 | null | I have read some parts of [A First Course in Bayesian Statistical Methods](http://rads.stackoverflow.com/amzn/click/0387922997) by Peter Hoff, and I found it easy to follow. (Example R-code is provided throughout the text)
| null | CC BY-SA 3.0 | null | 2010-09-08T10:28:33.510 | 2012-01-25T05:18:00.047 | 2012-01-25T05:18:00.047 | 7497 | 339 | null |
2478 | 2 | null | 2476 | 24 | null | For text documents, the feature vectors can be very high dimensional and sparse under any of the standard representations (bag of words or TF-IDF etc). Measuring distances directly under such a representation may not be reliable since it is a known fact that in very high dimensions, distance between any two points starts to look the same. One way to deal with this is to reduce the data dimensionality by using [PCA](http://en.wikipedia.org/wiki/Principal_component_analysis) or LSA ([Latent Semantic Analysis](http://en.wikipedia.org/wiki/Latent_semantic_analysis); also known as [Latent Semantic Indexing](http://en.wikipedia.org/wiki/Latent_semantic_indexing)) and then measure the distances in the new space. Using something like LSA over PCA is advantageous since it can give a meaningful representation in terms of "semantic concepts", apart from measuring distances in a lower dimensional space.
Comparing documents based on the probability distributions is usually done by first computing the topic distribution of each document (using something like [Latent Dirichlet Allocation](http://en.wikipedia.org/wiki/Latent_Dirichlet_allocation)), and then computing some sort of divergence (e.g., KL divergence) between the topic distributions of a pair of documents. In a way, it's actually kind of similar to doing LSA first and then measuring distances in the LSA space using KL-divergence between the vectors (instead of cosine similarity).
KL-divergence is a measure for comparing distributions (though not a true distance, since it is not symmetric), so it may be preferable if the document representation is in terms of some distribution (which is often actually the case -- e.g., documents represented as a distribution over topics, as in LDA). Also note that under such a representation, the entries in the feature vector would sum to one (since you are basically treating the document as a distribution over topics or semantic concepts).
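For illustration, here is a minimal sketch of such a comparison in R; the two topic distributions below are made up, and in practice they would come from the fitted topic model:
```
p <- c(0.40, 0.30, 0.20, 0.10)   # hypothetical topic distribution of document 1
q <- c(0.35, 0.25, 0.25, 0.15)   # hypothetical topic distribution of document 2

kl <- function(p, q) sum(p * log(p / q))   # KL(p || q); assumes no zero entries
kl(p, q)                                   # note the asymmetry: kl(p, q) != kl(q, p)
0.5 * (kl(p, q) + kl(q, p))                # a common symmetrised variant
```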
Also see a [related thread here](http://metaoptimize.com/qa/questions/1595/how-to-compute-the-semantic-distance-between-two-text-documents).
| null | CC BY-SA 2.5 | null | 2010-09-08T10:49:23.317 | 2010-09-08T10:58:25.990 | 2010-09-08T10:58:25.990 | 881 | 881 | null |
2481 | 1 | 2495 | null | 7 | 4273 | I have found the term "asymptotic power of a statistical test" only in relation to the Kolmogorov-Smirnov test (to be precise: asymptotic power = 1). What does this term actually mean? In my opinion it should be something like this: "if the alternative hypothesis is true, then for every significance level alpha there exists a sample size n that the selected test would reject the null hypothesis". Is "my" definition correct? According to "my definition" the majority of classical tests (t-test, ...) should have asymptotic power 1, not only the KS test. Am I right? ;)
| Asymptotic power | CC BY-SA 2.5 | null | 2010-09-08T11:33:12.393 | 2019-01-16T15:02:26.317 | 2019-01-16T15:02:26.317 | 11887 | 1215 | [
"hypothesis-testing",
"statistical-significance",
"statistical-power",
"asymptotics"
] |
2482 | 2 | null | 2466 | 10 | null | How's the Garvan?
The problem is we don't know how many zero counts are observed. We have to estimate this. A classic statistical procedure for situations like this is the Expectation-Maximisation algorithm.
A simple example:
Assume we draw from an unknown population (of 1,000,000) with a Poisson rate of 0.2.
```
counts <- rpois(1000000, 0.2)
table(counts)
0 1 2 3 4 5
818501 164042 16281 1111 62 3
```
But we don't observe the zero counts. Instead we observe this:
```
table <- c("0"=0, table(counts)[2:6])
table
0 1 2 3 4 5
0 164042 16281 1111 62 3
```
The possible count values observed:
```
k <- c("0"=0, "1"=1, "2"=2, "3"=3, "4"=4, "5"=5)
```
Initialise mean of Poisson distribution - just take a guess (we know it's 0.2 here).
```
lambda <- 1
```
- Expectation - Poisson Distribution
```
P_k <- lambda^k*exp(-lambda)/factorial(k)
P_k
0 1 2 3 4 5
0.367879441 0.367879441 0.183939721 0.061313240 0.015328310 0.003065662
n0 <- sum(table[2:6])/(1 - P_k[1]) - sum(table[2:6])
n0
0
105628.2
table[1] <- 105628.2
```
- Maximisation
```
lambda_MLE <- (1/sum(table))*(sum(table*k))
lambda_MLE
[1] 0.697252
lambda <- lambda_MLE
```
- Second iteration
```
P_k <- lambda^k*exp(-lambda)/factorial(k)
n0 <- sum(table[2:6])/(1 - P_k[1]) - sum(table[2:6])
table[1] <- n0
lambda <- (1/sum(table))*(sum(table*k))
cbind(population = sum(table), lambda_MLE = lambda)
population lambda_MLE
[1,] 361517.1 0.5537774
```
Now iterate until convergence:
```
for (i in 1:200) {
P_k <- lambda^k*exp(-lambda)/factorial(k)
n0 <- sum(table[2:6])/(1 - P_k[1]) - sum(table[2:6])
table[1] <- n0
lambda <- (1/sum(table))*(sum(table*k))
}
cbind(population = sum(table), lambda_MLE = lambda)
population lambda_MLE
[1,] 1003774 0.1994473
```
Our population estimate is 1,003,774 and our Poisson rate is estimated at 0.1994473 - this is the estimated proportion of the population sampled. The main problem you will have in the typical biological problems you are dealing with is the assumption that the Poisson rate is constant.
Sorry for the long-winded post - this wiki is not really suitable for R code.
| null | CC BY-SA 2.5 | null | 2010-09-08T11:44:25.410 | 2010-09-08T13:35:34.850 | 2010-09-08T13:35:34.850 | null | 521 | null |
2483 | 1 | 2539 | null | 2 | 459 | I have reformulated the problem from a "dog barking warning system" to something else which hopefully, has less ambiguity. Instead, I will repose the problem as follows:
Let's assume that my neighbour is a "mad scientist", who claims to have invented a "terrorist event" forecasting machine. The machine has three colored bulbs - one of which will be illuminated, depending on the severity of a terrorist event forecasted by the machine.
The light bulbs have the following interpretation:
- No light bulbs illuminated means there is no perceived threat.
- Green bulb illuminated means there is a level 1 threat imminent.
- Orange bulb illuminated means there is a level 2 threat imminent.
- Red bulb illuminated means there is a level 3 threat imminent.
To avoid getting too pedantic, let's assume for the sake of argument that the following terms are defined and agreed upon:
- Level 1 terrorist event.
- Level 2 terrorist event.
- Level 3 terrorist event.
- "imminent" terrorist event.
What I am trying to find out is whether there is a way I can design an experiment that helps me say, with a degree of confidence, whether the scientist's claims are statistically significant or not.
If the claims are found to be statistically significant, then I would like to be able to attach a CI (confidence interval) to the claim, so I can say something like: if the orange bulb is illuminated, then a level 2 terrorist event will occur, with an x% confidence interval.
Having said that - IIRC, confidence intervals are only meaningful for (approximately) normally distributed random variables?
As an aside, I was thinking that something akin to Fisher's [tea experiment](http://en.wikipedia.org/wiki/Lady_tasting_tea) (or running a Bernoulli trial test) would be useful, but my stats 'fu is not what it used to be.
The purpose of such a model (assuming that the scientist's machine really does work) is to act as a decision support system - i.e. decisions can be taken based on the output of the machine, if it can be depended upon.
| How to model and test a decision support system (e.g. a terrorist warning system)? | CC BY-SA 3.0 | null | 2010-09-08T11:49:25.087 | 2016-08-09T13:08:57.997 | 2016-08-09T13:08:57.997 | 22468 | 1216 | [
"confidence-interval",
"modeling",
"model-selection",
"experiment-design"
] |
2486 | 2 | null | 2481 | 3 | null | As I understood it, the asymptotic power is the hypothetical power when the effect size goes to zero and the sample size to infinity. Basically it should be 0 or 1, indicating whether the test cannot or can distinguish an arbitrarily small deviation from the null hypothesis when the sample size is sufficiently large.
| null | CC BY-SA 2.5 | null | 2010-09-08T12:54:35.677 | 2010-09-08T12:54:35.677 | null | null | 1124 | null |
2487 | 2 | null | 2481 | 1 | null | Yes, you are right. I would only replace "there exists a sample size n that the selected test would reject the null hypothesis" with "for every e>0 there exists a sample size n_0 such that the probability of rejecting the null hypothesis is greater than 1-e for all n>n_0".
| null | CC BY-SA 2.5 | null | 2010-09-08T13:43:43.877 | 2010-09-08T13:43:43.877 | null | null | 1219 | null |
2489 | 2 | null | 2483 | 3 | null | I have thought about this question for a while and have come to the following rather vague answer to what is, in my eyes, a rather vague question. Despite the asker's wish, I don't use the word 'model', as it didn't enter my thinking on this problem. Sorry for that.
- What is your hypothesis?
I see three different possible interpretations of the hypothesis:
(a) The one implied by the question: If the dog barks one time, then it rains. (...) This hypothesis would be false if and only if the dog barks one time and it is currently not raining (i.e., a truth-functional material implication).
(b) The one that seems more plausible to me: If it rains, then the dog barks exactly one time. (...) This hypothesis would be false if and only if it is raining and the dog does not bark exactly one time.
(c) The stochastic hypothesis: Out of a population of dogs, the neighbor's dog is a pretty reliable weather indicator compared to the other dogs and barks according to the mentioned scheme.
- How to test your hypotheses?
(a) & (b) Verifying these general claims is pretty difficult. It is like asking if it is really true that the sun always rises in the morning. There are pretty good arguments from philosophers of science as Karl Popper and the like that we can never proof claims like these but only try to falsify them and treat them as true as long as they haven't been falsified. In our example one could do two things: Firstly we could simply survey the weather conditions and the dog's behavior until we get bored and accept the hypotheses as true or see any pairing that is inconsistent with our hypotheses. Secondly, create an environment in which you can control the weather conditions (i.e., run an experiment) take the dog into this environment and proceed with surveying until boredom/falsification. Note however that in case of any falsification one could always argue that e.g., this rain was not really a rain but rather a light drizzle and so on. See Lakatos and also further the Duhem-Quine Thesis. To come back to my example: at some point humanity accepted that the sun always rises in the morning although we can never be sure of it!
(c) To test this claim I would do the following: I would take the neighbor's dog and some random dog from the same population and bring them, separately, to the above mentioned environment where you can control the weather. There you have four different stimuli (S1 = no rain, S2 = rain, ...) which I would present to each dog in a random order and record the reaction to each stimuli. Note that you need to repeat each stimulus several times (e.g., 30 times). Then you could enter the data with stimuli as the subject, the dog as the between-subjects manipulations and the stimuli as the within-subjects manipulations into an Analysis of Variance (ANOVA). Your hypotheses would probably predict an interaction of dog with stimuli. If this would occur you could run tests testing for: the random dog which should have equal barking for all four stimuli, and the neighbor's dog which should stick to the norm (e.g., one sample t-tests against a value).
Note that your test of hypotheses (c) would be easier and more 'traditional' if you would suspect that there exist two populations of dogs, one weather forecasting and one weather blind population of dogs. Then you could sample some (e.g., 30) from each population and aggregate the data from each stimuli. Dogs would be the subjects, population the between-subject manipulation and stimuli the within-subject manipulation.
Two caveats: I don't know if the described way is the usual one for testing for single outliers . I don't know if there exist more common methods to test for deviance from a norm.
| null | CC BY-SA 2.5 | null | 2010-09-08T15:43:01.230 | 2010-09-08T15:49:37.677 | 2010-09-08T15:49:37.677 | 442 | 442 | null |
2490 | 2 | null | 2483 | 2 | null | Count successes and failures when the barking patterns occur and build a contingency table. Then do a chi-squared test against 50/50 occurrences (i.e., the null hypothesis is that barking is not correlated with getting the correct weather pattern).
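For instance, a minimal sketch with made-up counts:
```
# hypothetical counts: how often the weather matched vs. did not match the barking pattern
obs <- c(matched = 14, not.matched = 6)
chisq.test(obs, p = c(0.5, 0.5))   # null: the pattern is right no more often than chance
```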
Of course, there is also the other side of this to consider: what barking pattern precedes a particular weather type.
| null | CC BY-SA 2.5 | null | 2010-09-08T16:31:59.693 | 2010-09-08T16:31:59.693 | null | null | 229 | null |
2491 | 2 | null | 321 | 6 | null | The second paper you cite seems to contain the answer to your question. To recap: mathematically, the main difference is in the shape of the loss function being used; Friedman, Hastie, and Tibshirani's loss function is easier to optimize at each iteration.
| null | CC BY-SA 2.5 | null | 2010-09-08T17:30:41.103 | 2010-09-08T17:30:41.103 | null | null | 603 | null |
2492 | 1 | 2498 | null | 371 | 140129 | A former colleague once argued to me as follows:
>
We usually apply normality tests to the results of processes that, under the null, generate random variables that are only asymptotically or nearly normal (with the 'asymptotically' part dependent on some quantity which we cannot make large). In the era of cheap memory, big data, and fast processors, normality tests should always reject the null of normal distribution for large (though not insanely large) samples. And so, perversely, normality tests should only be used for small samples, when they presumably have lower power and less control over the type I rate.
Is this a valid argument? Is this a well-known argument? Are there well known tests for a 'fuzzier' null hypothesis than normality?
| Is normality testing 'essentially useless'? | CC BY-SA 3.0 | null | 2010-09-08T17:47:21.820 | 2019-02-03T14:17:07.813 | 2016-01-16T19:01:05.907 | 100906 | 795 | [
"hypothesis-testing",
"normality-assumption",
"philosophical"
] |
2493 | 1 | 2497 | null | 16 | 1651 | I'm looking for the appropriate theoretical framework or speciality to help me understand how to handle the errors that the GPS system has - especially when dealing with routes.
Fundamentally, I'm looking for the requirements on the data and any algorithms to use to be able to establish the length of a trail. The answer needs to be trustworthy.
A friend of mine was the race director of a race which was billed as 160km, but the Garmin watches everybody has make it more like 190km+. It caused quite some grief at the finish line, let me tell you!
So my friend went back to the course with various GPS devices in order to remap it and the results are interesting.
Using a handheld Garmin Oregon 300 she got 33.7km for one leg. For the same leg on a wrist watch Garmin Forerunner 310xt it came out to 38.3km.
When I got the data from the Oregon it was obvious that it was only recording data every 90 seconds or so. The Forerunner does it every couple of seconds.
When I plotted the data from the Oregon I could see that it got confused by some switchbacks, put a line straight through them, and shortened at least one curve a little.
However, I suspect that the difference in recording frequency accounts for much of the discrepancy, i.e. by recording every couple of seconds the Forerunner is closer to the real route. However, there will be an amount of error because of the way GPS works. If the points recorded are spread around the real route randomly (because of the error) then the total distance will be larger than the real route. (A wiggly line going either side of a straight line is longer than the straight line.)
So, my questions:
1. Are there any techniques I can use on a single dataset to reduce the error in a valid way?
2. Does my theory about the difference in recording frequency hold water?
3. If I have multiple recordings of the same routes, are there any valid techniques to combine them to get closer to the real route?
As I say, I don't really know what to search for to find any useful science about this. I'm looking for ways to establish just how long a given piece of trail is and it is very important to people. An extra 30km in a race is an extra 5+ hours we weren't expecting.
As requested here is some sample data:

[Detailed high-frequency sample data](https://web.archive.org/web/20151008094933/https://s3.amazonaws.com/trailhunger-stackexchange/trapper-lake-detailed.csv)
[Low-frequency sample data](https://web.archive.org/web/20151011123503/https://s3.amazonaws.com/trailhunger-stackexchange/trapper-lake-low-sample.csv)
| Managing error with GPS routes (theoretical framework?) | CC BY-SA 4.0 | null | 2010-09-08T19:48:42.110 | 2022-11-28T05:22:50.090 | 2022-11-28T05:22:50.090 | 362671 | 1227 | [
"error",
"sampling"
] |
2495 | 2 | null | 2481 | 7 | null | The definition above (a fixed alternative, sample size going to infinity) is more precisely related to the consistency (or not) of a hypothesis test. That is, a test is consistent against a fixed alternative if the power function approaches 1 at that alternative.
Asymptotic power is something different. As Joris remarked, with asymptotic power the alternatives $\theta_n$ are changing, converging to the null value $\theta_0$ (at the rate $1/\sqrt n$, say) while the sample size marches to infinity.
Under some regularity conditions (for example, the test statistic has a monotone likelihood ratio, is asymptotically normal, has asymptotic variance $\tau$ continuous in $\theta$, yada yada yada) if $\sqrt n(\theta_n - \theta_0)$ goes to $\delta$ then the power function goes to $\Phi(\delta/\tau - z_\alpha)$, where $\Phi$ is the standard normal CDF. This last quantity is called the asymptotic power of just such a test.
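A minimal sketch of this formula in the simplest case, a one-sided test of a normal mean with known variance so that $\tau = 1$ (the numbers below are made up for illustration):
```
alpha <- 0.05; delta <- 2; tau <- 1; n <- 1000
z.alpha <- qnorm(1 - alpha)
pnorm(delta / tau - z.alpha)   # asymptotic power Phi(delta/tau - z_alpha), about 0.64
# Monte Carlo power at the local alternative theta_n = delta/sqrt(n)
mean(replicate(10000, sqrt(n) * mean(rnorm(n, delta / sqrt(n), tau)) / tau > z.alpha))
```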
See Lehmann's $\underline{\mbox{Elements of Large Sample Theory}}$ for discussion and worked out examples.
By the way, yes, the majority of classical tests are consistent.
| null | CC BY-SA 2.5 | null | 2010-09-08T20:23:12.640 | 2010-09-11T00:47:08.027 | 2010-09-11T00:47:08.027 | null | null | null |
2496 | 2 | null | 2446 | 2 | null | This is a very simple multi-level (a.k.a. hierarchical) model. Douglas Bates is currently working on a book on the subject (draft available here: [http://lme4.r-forge.r-project.org/book/](http://lme4.r-forge.r-project.org/book/)). While there are many books on this subject, Doug's has the added benefit of being designed around the 'lme4' R package, a very handy package for fitting such models. I think it is best for you to go and read the first chapter of that book and work through the examples provided there in R. You can always come back with more specific questions.
| null | CC BY-SA 2.5 | null | 2010-09-08T21:21:58.237 | 2010-09-08T21:27:01.180 | 2010-09-08T21:27:01.180 | 603 | 603 | null |
2497 | 2 | null | 2493 | 8 | null | This is a well studied problem in geospatial science--you can find discussions of it on GIS forums.
First, note that the wiggles do not necessarily increase the route's length, because many of them actually cut inside curves. (I have evaluated this by having an entire classroom of students digitize the same path and then comparing the paths.) There really is a lot of cancellation. Also, we can expect that readings taken just a few seconds apart will have strongly, positively correlated errors. Thus the measured path should wiggle only gradually around the true path. Even large departures don't affect the length much. For example, if you deviate by (say) 5 meters laterally in the middle of a 100 m straight stretch, your estimate of the length only goes up to $2 \sqrt{50^2 + 5^2} = 100.5$, a 0.5% error.
It is difficult to compare two arbitrary paths in an objective way. One of the better methods, I believe, is a form of bootstrapping: subsample (or otherwise generalize) the most detailed path you have. Plot its length as a function of the amount of subsampling. If you express the subsampling as a typical vertex-to-vertex distance, you can extrapolate a fit to a zero distance, which can provide an excellent estimate of the path length.
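A rough sketch of that subsampling idea on a synthetic track (all numbers made up); with real data you would use the projected GPS coordinates in metres:
```
set.seed(1)
th <- seq(0, 2 * pi, length.out = 2000)
track <- data.frame(x = 1000 * cos(th) + rnorm(2000, 0, 3),   # noisy points on a ~6.3 km loop
                    y = 1000 * sin(th) + rnorm(2000, 0, 3))

path.length <- function(x, y) sum(sqrt(diff(x)^2 + diff(y)^2))
sub.length  <- function(k) {                  # keep every k-th vertex, recompute the length
  idx <- seq(1, nrow(track), by = k)
  path.length(track$x[idx], track$y[idx])
}
cbind(keep.every = 1:10, length.m = sapply(1:10, sub.length))
# plot length against typical vertex spacing and extrapolate the fit back to zero spacing
```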
With multiple recordings, you can create a 2D kernel smooth of each, sum the smooths, and subject that to a topographic analysis to look for a "ridgeline." You won't get a single connected line, usually, but you can often patch the ridges together into a continuous path. People have used this method to average hurricane tracks, for example.
| null | CC BY-SA 2.5 | null | 2010-09-08T22:10:12.760 | 2010-09-08T22:10:12.760 | null | null | 919 | null |
2498 | 2 | null | 2492 | 273 | null | It's not an argument. It is a (a bit strongly stated) fact that formal normality tests always reject on the huge sample sizes we work with today. It's even easy to prove that when n gets large, even the smallest deviation from perfect normality will lead to a significant result. And as every dataset has some degree of randomness, no single dataset will be a perfectly normally distributed sample. But in applied statistics the question is not whether the data/residuals ... are perfectly normal, but normal enough for the assumptions to hold.
Let me illustrate with [the Shapiro-Wilk test](https://en.wikipedia.org/wiki/Shapiro%E2%80%93Wilk_test). The code below constructs a set of distributions that approach normality but aren't completely normal. Next, we test with `shapiro.test` whether a sample from these almost-normal distributions deviates from normality. In R:
```
x <- replicate(100, { # generates 100 different tests on each distribution
c(shapiro.test(rnorm(10)+c(1,0,2,0,1))$p.value, #$
shapiro.test(rnorm(100)+c(1,0,2,0,1))$p.value, #$
shapiro.test(rnorm(1000)+c(1,0,2,0,1))$p.value, #$
shapiro.test(rnorm(5000)+c(1,0,2,0,1))$p.value) #$
} # rnorm gives a random draw from the normal distribution
)
rownames(x) <- c("n10","n100","n1000","n5000")
rowMeans(x<0.05) # the proportion of significant deviations
n10 n100 n1000 n5000
0.04 0.04 0.20 0.87
```
The last line checks which fraction of the simulations for every sample size deviate significantly from normality. So in 87% of the cases, a sample of 5000 observations deviates significantly from normality according to Shapiro-Wilks. Yet, if you see the qq plots, you would never ever decide on a deviation from normality. Below you see as an example the qq-plots for one set of random samples

with p-values
```
n10 n100 n1000 n5000
0.760 0.681 0.164 0.007
```
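For completeness, a sketch of how qq-plots like the ones above can be drawn for the same almost-normal construction:
```
op <- par(mfrow = c(2, 2))
for (n in c(10, 100, 1000, 5000)) {
  y <- rnorm(n) + c(1, 0, 2, 0, 1)   # same almost-normal construction as above
  qqnorm(y, main = paste("n =", n)); qqline(y)
}
par(op)
```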
| null | CC BY-SA 3.0 | null | 2010-09-08T22:23:19.163 | 2018-03-05T07:03:51.740 | 2018-03-05T07:03:51.740 | 197523 | 1124 | null |
2499 | 1 | 2506 | null | 85 | 25263 | There are several distinct usages:
- kernel density estimation
- kernel trick
- kernel smoothing
Please explain what the "kernel" in them means, in plain English, in your own words.
| What is a "kernel" in plain English? | CC BY-SA 3.0 | null | 2010-09-09T00:15:07.943 | 2018-07-30T10:14:22.703 | 2015-04-23T05:52:50.573 | 9964 | 74 | [
"kernel-trick",
"kernel-smoothing"
] |
2500 | 2 | null | 2499 | 50 | null | There appear to be at least two different meanings of "kernel": one more commonly used in statistics; the other in machine learning.
In statistics "kernel" is most commonly used to refer to [kernel density estimation](http://en.wikipedia.org/wiki/Kernel_density_estimation) and [kernel smoothing](http://en.wikipedia.org/wiki/Kernel_smoothing).
A straightforward explanation of kernels in density estimation can be found ([here](http://www.mvstat.net/tduong/research/seminars/seminar-2001-05/)).
In machine learning "kernel" is usually used to refer to the [kernel trick](http://en.wikipedia.org/wiki/Kernel_trick), a method of using a linear classifier to solve a non-linear problem "by mapping the original non-linear observations into a higher-dimensional space".
A simple visualisation might be to imagine that all of class $0$ are within radius $r$ of the origin in an x, y plane (class $0$: $x^2 + y^2 < r^2$); and all of class $1$ are beyond radius $r$ in that plane (class $1$: $x^2 + y^2 > r^2$). No linear separator is possible, but clearly a circle of radius $r$ will perfectly separate the data. We can transform the data into three dimensional space by calculating three new variables $x^2$, $y^2$ and $\sqrt{2}xy$. The two classes will now be separable by a plane in this 3 dimensional space. The equation of that optimally separating hyperplane where $z_1 = x^2, z_2 = y^2$ and $z_3 = \sqrt{2}xy$ is $z_1 + z_2 = r^2$, and in this case omits $z_3$. (If the circle is off-set from the origin, the optimal separating hyperplane will vary in $z_3$ as well.) The kernel is the mapping function which calculates the value of the 2-dimensional data in 3-dimensional space.
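A small sketch of that visualisation with made-up data: the classes are not linearly separable in the original $(x, y)$ coordinates, but they are after the explicit mapping to $(x^2, y^2, \sqrt{2}xy)$:
```
set.seed(42)
n <- 200
x <- runif(n, -2, 2); y <- runif(n, -2, 2)
r <- 1
class <- ifelse(x^2 + y^2 < r^2, 0, 1)        # class 0 inside the circle, class 1 outside

z1 <- x^2; z2 <- y^2; z3 <- sqrt(2) * x * y   # explicit feature map

# in the new space the plane z1 + z2 = r^2 separates the two classes exactly
table(class, separated = z1 + z2 > r^2)
```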
In mathematics, there are [other uses of "kernels"](http://en.wikipedia.org/wiki/Kernel_%28mathematics%29), but these seem to be the main ones in statistics.
| null | CC BY-SA 4.0 | null | 2010-09-09T01:21:55.677 | 2018-07-30T10:14:22.703 | 2018-07-30T10:14:22.703 | 121522 | 521 | null |
2501 | 2 | null | 2492 | 209 | null | When thinking about whether normality testing is 'essentially useless', one first has to think about what it is supposed to be useful for. Many people (well... at least, many scientists) misunderstand the question the normality test answers.
The question normality tests answer: Is there convincing evidence of any deviation from the Gaussian ideal? With moderately large real data sets, the answer is almost always yes.
The question scientists often expect the normality test to answer: Do the data deviate enough from the Gaussian ideal to "forbid" use of a test that assumes a Gaussian distribution? Scientists often want the normality test to be the referee that decides when to abandon conventional (ANOVA, etc.) tests and instead analyze transformed data or use a rank-based nonparametric test or a resampling or bootstrap approach. For this purpose, normality tests are not very useful.
| null | CC BY-SA 2.5 | null | 2010-09-09T02:35:31.153 | 2010-09-09T21:37:22.197 | 2010-09-09T21:37:22.197 | 25 | 25 | null |
2504 | 1 | 2511 | null | 31 | 4050 | Is it possible to test for finiteness (or existence) of the variance of a random variable given a sample? As a null, either {the variance exists and is finite} or {the variance does not exist/is infinite} would be acceptable. Philosophically (and computationally), this seems very strange because there should be no difference between a population without finite variance, and one with a very very large variance (say > $10^{400}$), so I am not hopeful this problem can be solved.
One approach that had been suggested to me was via the Central Limit Theorem: assuming the samples are i.i.d., and the population has finite mean, one could check, somehow, whether the sample mean has the right standard error with increasing sample size. I'm not sure I believe this method would work, though. (In particular, I don't see how to make it into a proper test.)
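A rough sketch of the kind of check I have in mind (purely heuristic, with a made-up heavy-tailed example; I still don't see how to turn it into a proper test):
```
ns <- c(100, 1000, 10000)
sim.se <- function(rdist, n, reps = 1000) sd(replicate(reps, mean(rdist(n))))

# finite variance: sqrt(n) times the SE of the sample mean stays roughly constant
sapply(ns, function(n) sqrt(n) * sim.se(rnorm, n))
# infinite variance (t with 1.5 df has a finite mean but infinite variance): it keeps growing
sapply(ns, function(n) sqrt(n) * sim.se(function(m) rt(m, df = 1.5), n))
```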
| Test for finite variance? | CC BY-SA 2.5 | null | 2010-09-09T06:01:05.300 | 2017-04-07T23:18:53.103 | 2011-11-02T13:27:19.020 | 6961 | 795 | [
"hypothesis-testing",
"variance",
"central-limit-theorem"
] |
2506 | 2 | null | 2499 | 44 | null | In both statistics (kernel density estimation or kernel smoothing) and machine learning (kernel methods) literature, kernel is used as a measure of similarity. In particular, the kernel function $k(x,.)$ defines the distribution of similarities of points around a given point $x$. $k(x,y)$ denotes the similarity of point $x$ with another given point $y$.
| null | CC BY-SA 3.0 | null | 2010-09-09T06:09:23.923 | 2015-04-23T06:19:47.650 | 2015-04-23T06:19:47.650 | 35989 | 881 | null |
2507 | 2 | null | 2492 | 15 | null | Let me add one small thing:
Performing a normality test without taking its alpha-error into account increases your overall probability of committing an alpha-error.
Never forget that each additional test does this as long as you don't control for alpha-error accumulation. Hence, another good reason to dismiss normality testing.
| null | CC BY-SA 2.5 | null | 2010-09-09T08:59:38.777 | 2010-09-09T08:59:38.777 | null | null | 442 | null |
2509 | 2 | null | 1923 | 5 | null | I think the standard solution goes as follows. I'll just do the scalar case, the multi parameter case is similar. Your objective function is $g_N(p,X_1,\dots,X_N)$
where $p$ is the parameter you want to estimate and $X_1,\dots,X_N$ are the observed random variables. For notational simplicity I will just write the objective function as $g(p)$ from now on.
We need some assumptions. Firstly I'll assume that you have already shown that the maximiser of $g$ is a consistent estimator (this actually tends to be the hardest part!). So, if the `true' value of the parameter is $p_0$ and the estimator is
$$ \hat{p} = \arg\max_{p} g(p) $$
then we have that $\hat{p} \rightarrow p_0$ almost surely as $N \rightarrow \infty$. Our second assumption is that $g$ is twice differentiable in a neighbourhood about $p_0$ (you can sometimes get away without this assumption, but the solution becomes more problem dependent). In view of strong consistency we can and will assume that $\hat{p}$ is inside this neighbourhood.
Denote by $g'$ and $g''$ the first and second derivatives of $g$ with respect to $p$. Then
$$ g'(p_0) - g'(\hat{p}) = (p_0 - \hat{p})g''(\bar{p})$$
where $\bar{p}$ lies between $\hat{p}$ and $p_0$. Now because $\hat{p}$ maximises $g$ we have $g'(\hat{p}) = 0$ so
$$(p_0 - \hat{p}) = \frac{g'(p_0)}{g''(\bar{p})}$$
and because $\hat{p} \rightarrow p_0$ almost surely then $\bar{p} \rightarrow p_0$ almost surely so $g''(\bar{p}) \rightarrow g''(p_0)$ almost surely and
$$(p_0 - \hat{p}) \rightarrow \frac{g'(p_0)}{g''(p_0)}$$
almost surely. So, in order to describe the distribution of $p_0 - \hat{p}$, i.e. the estimator's central limit theorem, you need to find the distribution of $\frac{g'(p_0)}{g''(p_0)}$. This now becomes problem-dependent.
| null | CC BY-SA 3.0 | null | 2010-09-09T12:02:48.057 | 2013-11-19T21:13:50.550 | 2013-11-19T21:13:50.550 | 352 | 352 | null |
2510 | 1 | 2521 | null | 4 | 1032 | For the more mathematically minded,
we have $x \in \mathbb{R}^2$ and the function $h(x)$ defined as:
$h(x)=\alpha_1x_1^2+\alpha_2x_2^2+\alpha_3x_1+\alpha_4x_2+\alpha_5x_1x_2+\alpha_6$
and the vector of alpha's is known and further guaranteed to be such that $h(x)$ is a general
elliptic paraboloid (i.e. a convex function).
Now, define $g(x)=(h(x))^+$ where $(z)^+$ is the positive part of $z\in\mathbb{R}$.
The questions are as follows:
- Is there a way to globally approximate $g(x)$ by a polynomial? If so, what is it?
- Can this approach (the answer to the previous sub-question) be extended to $x\in\mathbb{R}^p$ with $p$ moderately large?
- Would the problem be any easier if we were to assume $\alpha_5=0$?
Following Whuber's comment: is it possible to find a polynomial approximation to $g(x)$ that would be better than $h(x)$? In the event that many approximating solutions to this problem exist, what are they?
I can obviously solve $\arg\min_{\hat{g}}\int_{\mathbb{R}^2}(g(x)-\hat{g}(x))^2\,dx$. I'm wondering whether there are explicit, known solutions to this problem (e.g. polynomial series, of which I know close to nothing).
| Can we approximate this function by a polynomial? | CC BY-SA 2.5 | null | 2010-09-09T12:33:53.397 | 2010-09-09T20:15:54.960 | 2010-09-09T19:21:26.227 | 603 | 603 | [
"approximation"
] |
2511 | 2 | null | 2504 | 14 | null | No, this is not possible, because a finite sample of size $n$ cannot reliably distinguish between, say, a normal population and a normal population contaminated by a $1/N$ amount of a Cauchy distribution where $N \gg n$. (Of course the former has finite variance and the latter has infinite variance.) Thus any fully nonparametric test will have arbitrarily low power against such alternatives.
| null | CC BY-SA 2.5 | null | 2010-09-09T12:57:35.380 | 2010-09-09T12:57:35.380 | null | null | 919 | null |
2513 | 1 | 2530 | null | 8 | 4148 | I have 2560 paired observations from an experiment in which participants provided two ratings for a set of objects, at two different points in time. Half of the objects in the set had the value of an attribute A changed between the two time points, half did not. Of the objects that were changed in each participant's set, half went from A' to A'' and half from A'' to A'. (i.e. all participants experienced both orders). My main hypothesis is that changing this attribute from A' to A'' leads on average to a higher rating, and this is indeed supported by the data. I am also interested in determining whether the magnitude and perhaps direction of this effect depends on the A' rating.
For the purposes of this question, I am considering only those instances where A was changed (1280 pairs of obs). The following GLMM
(A'' rating - A' rating) = participant + order + A' rating
where A' rating is a covariate and participant and order are categorical variables, leads to the conclusion that there is a significant positive correlation between A' rating and effect of changing to A'' and that this correlation is <1, such that objects with a low A' rating have their rating increased by changing to A'' but that objects with a high A' rating actually get rated lower when changed to A''.
I want to test whether this is simply due to regression to the mean. To this end, I have followed [Kelly and Price](http://www.ncbi.nlm.nih.gov/pubmed/16475086) in using Pitman's test of equality of variances for paired samples and would appreciate some feedback on whether I've done the right thing.
This is what I did, following the suggestion of a colleague:
1) calculated the SD of A'' ratings $(SD_1)$ and the SD of A' ratings $(SD_2)$
2) regressed A'' rating on A' rating and recorded the correlation $r$.
3) calculated T as $T=\frac{\sqrt{(n-2)} [(SD_1/SD_2)-(SD_2/SD_1)]}{2 \sqrt{(1-r^2)}}$
The 2-tailed p value of T (Student's t dist with 1280-2 DF) is 0.07, i.e. at alpha=0.05 there is no significant difference between the variances for the two sets of ratings and thus no effect of A' rating on rating difference beyond regression to the mean. (We can argue about 2-t vs. 1-t p values later).
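For reference, this is roughly how I computed it (a sketch; `a2` and `a1` stand for my vectors of A'' and A' ratings):
```
pitman <- function(a2, a1) {
  n   <- length(a1)
  sd1 <- sd(a2)            # SD of the A'' ratings
  sd2 <- sd(a1)            # SD of the A' ratings
  r   <- cor(a2, a1)
  T   <- sqrt(n - 2) * (sd1 / sd2 - sd2 / sd1) / (2 * sqrt(1 - r^2))
  c(T = T, p.two.sided = 2 * pt(-abs(T), df = n - 2))
}
```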
I now plan to adjust my difference scores to account for this and re-do the GLMM outlined above, as outlined by Kelly & Price.
If you've got this far through the detail, then firstly well done, and secondly, can you tell me if 1) my colleague's suggestion was sensible and 2) if it was, have I implemented it correctly? I have a couple of concerns/apparent grey areas but I'd be interested to hear what others have to say first.
Thank you.
| Pitman's test of equality of variance and testing for regression to the mean: am I doing the right thing? | CC BY-SA 2.5 | null | 2010-09-09T15:04:28.853 | 2010-09-28T02:27:31.433 | 2010-09-10T11:50:47.273 | 266 | 266 | [
"regression",
"standard-deviation",
"variance"
] |
2516 | 1 | 2519 | null | 176 | 52251 | In a [recent article](http://magazine.amstat.org/blog/2010/09/01/statrevolution/) of Amstat News, the authors (Mark van der Laan and Sherri Rose) stated that "We know that for large enough sample sizes, every study—including ones in which the null hypothesis of no effect is true — will declare a statistically significant effect.".
Well, I for one didn't know that. Is this true? Does it mean that hypothesis testing is worthless for large data sets?
| Are large data sets inappropriate for hypothesis testing? | CC BY-SA 2.5 | null | 2010-09-09T18:21:30.200 | 2021-10-15T14:16:05.627 | 2019-11-29T13:29:23.393 | 11887 | 666 | [
"hypothesis-testing",
"statistical-significance",
"dataset",
"sample-size",
"large-data"
] |
2517 | 2 | null | 2516 | 2 | null | I think what they mean is that one often makes an assumption about the probability density of the null hypothesis which has a 'simple' form but does not correspond to the true probability density.
Now with small data sets, you might not have enough sensitivity to see this effect but with a large enough data set you will reject the null hypothesis and conclude that there is a new effect instead of concluding that your assumption about the null hypothesis is wrong.
| null | CC BY-SA 2.5 | null | 2010-09-09T18:42:48.590 | 2010-09-09T18:42:48.590 | null | null | 961 | null |
2518 | 2 | null | 2516 | 10 | null | In a certain sense, [all] many null hypothesis are [always] false (the group of people living in houses with odd numbers does never exactly earn the same on average as the group of people living in houses with even numbers).
In the frequentist framework, the question that is asked is whether the difference in income between the two group is larger than $T_{\alpha}n^{-0.5}$ (where $T_{\alpha}$ is the $\alpha$ quantile of the distribution of the test statistic under the null). Obviously, for $n$ growing without bounds, this band becomes increasingly easy to break through.
This is not a defect of statistical tests. Simply a consequence of the fact that without further information (a prior) we have that a large number of small inconsistencies with the null have to be taken as evidence against the null. No matter how trivial these inconsistencies turn out to be.
In large studies, it becomes then interesting to re-frame the issue as a bayesian test, i.e. ask oneself (for instance), what is $\hat{P}(|\bar{\mu}_1-\bar{\mu}_2|^2>\eta|\eta, X)$.
| null | CC BY-SA 2.5 | null | 2010-09-09T18:55:04.400 | 2010-09-09T19:55:14.900 | 2010-09-09T19:55:14.900 | 603 | 603 | null |
2519 | 2 | null | 2516 | 115 | null | It is not true. If the null hypothesis is true then it will not be rejected more frequently at large sample sizes than small. There is an erroneous rejection rate that's usually set to 0.05 (alpha) but it is independent of sample size. Therefore, taken literally the statement is false. Nevertheless, it's possible that in some situations (even whole fields) all nulls are false and therefore all will be rejected if N is high enough. But is this a bad thing?
What is true is that trivially small effects can be found to be "significant" with very large sample sizes. That does not suggest that you shouldn't have such large samples sizes. What it means is that the way you interpret your finding is dependent upon the effect size and sensitivity of the test. If you have a very small effect size and highly sensitive test you have to recognize that the statistically significant finding may not be meaningful or useful.
Given some people don't believe that a test of the null hypothesis, when the null is true, always has an error rate equal to the cutoff point selected for any sample size, here's a simple simulation in `R` proving the point. Make N as large as you like and the rate of Type I errors will remain constant.
```
# number of subjects in each condition
n <- 100
# number of replications of the study in order to check the Type I error rate
nsamp <- 10000
ps <- replicate(nsamp, {
#population mean = 0, sd = 1 for both samples, therefore, no real effect
y1 <- rnorm(n, 0, 1)
y2 <- rnorm(n, 0, 1)
tt <- t.test(y1, y2, var.equal = TRUE)
tt$p.value
})
sum(ps < .05) / nsamp
# ~ .05 no matter how big n is. Note particularly that it is not an increasing value always finding effects when n is very large.
```
| null | CC BY-SA 3.0 | null | 2010-09-09T18:59:36.507 | 2017-08-05T07:36:50.280 | 2017-08-05T07:36:50.280 | 601 | 601 | null |
2520 | 2 | null | 2510 | 1 | null | Note that $h(x)$ itself is a polynomial in $x_1$ and $x_2$ and $g(x)$ is what I would call a truncated polynomial.
In the region where $h(x)$ is negative, $g(x)$ will be zero. You can approximate this region, where $g(x)$ is flat, by a non-constant polynomial (non-constant because you also need to approximate the region where $h(x) > 0$) but this will always be a 'wiggly' approximation (at least for a finite number of terms in the approximating polynomial).
It's not clear to me what you want to achieve, $g(x)$ actually has a quite simple analytical form and maybe for your problem you just have to consider the two regions $h(x) < 0$ and $h(x) \ge 0$ separately.
| null | CC BY-SA 2.5 | null | 2010-09-09T19:00:45.007 | 2010-09-09T19:00:45.007 | null | null | 961 | null |
2521 | 2 | null | 2510 | 6 | null | The $L^2$ distance between $g$ and a polynomial approximation will be finite if and only if the polynomial approximation behaves asymptotically like $g$, which means it behaves asymptotically like $h$, which implies it must equal $h$. Therefore $h$ is the unique $L^2$ approximator.
Thus:
(1) Yes; the global approximator to $g$ is $h$.
(2) Yes; the same reasoning holds for all finite $p$.
(3) No; the value of $\alpha_5$ makes no difference.
Follow-up question (i): No, you cannot do better than $h$.
Follow-up question (ii) [how many approximations]: Not applicable due to the answer to (i).
You can get a considerably better set of answers if you're willing to limit the domain of $g$ to a compact [measurable] subset.
| null | CC BY-SA 2.5 | null | 2010-09-09T19:27:32.943 | 2010-09-09T20:15:54.960 | 2010-09-09T20:15:54.960 | 919 | 919 | null |
2522 | 2 | null | 2516 | 35 | null | I agree with the answers that have appeared, but would like to add that perhaps the question could be redirected. Whether to test a hypothesis or not is a research question that ought, at least in general, be independent of how much data one has. If you really need to test a hypothesis, do so, and don't be afraid of your ability to detect small effects. But first ask whether that's part of your research objectives.
Now for some quibbles:
- Some null hypotheses are absolutely true by construction. When you're testing a pseudorandom number generator for equidistribution, for instance, and that PRG is truly equidistributed (which would be a mathematical theorem), then the null holds. Probably most of you can think of more interesting real-world examples arising from randomization in experiments where the treatment really does have no effect. (I would hold out the entire literature on ESP as an example. ;-)
- In a situation where a "simple" null is tested against a "compound" alternative, as in classic t-tests or z-tests, it typically takes a sample size proportional to $1/\epsilon^2$ to detect an effect size of $\epsilon$. There's a practical upper bound to this in any study, implying there's a practical lower bound on a detectable effect size. So, as a theoretical matter der Laan and Rose are correct, but we should take care in applying their conclusion.
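A quick illustration of that $1/\epsilon^2$ scaling, using R's standard power calculation for a two-sample t-test:
```
eps <- c(0.5, 0.1, 0.01)
# sample size per group needed for 80% power at alpha = 0.05
sapply(eps, function(e) power.t.test(delta = e, sd = 1, power = 0.8)$n)
# roughly 15.7 / eps^2, i.e. proportional to 1/eps^2
```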
| null | CC BY-SA 2.5 | null | 2010-09-09T19:42:26.763 | 2010-09-09T19:42:26.763 | null | null | 919 | null |
2524 | 2 | null | 1646 | 4 | null | I think (1) is not a statistical question but a subject-area one. E.g., in the described example it would be up to those who study group psychology to determine appropriate language for the strength of ICCs. This is analogous to a Pearson correlation -- what constitutes 'strong' differs depending on whether one is working in, for example, sociology or physics.
(2) is to an extent also subject-area specific -- it depends on what researchers are aiming to measure and describe. But from a statistical point of view ICC is a reasonable metric for within-team relatedness. However I agree with Mike that when you say you'd like to
>
"describe the extent to which the measure of team effectiveness is a property of the team member's idiosyncratic belief or a property of a shared belief about the team"
then it is probably more appropriate to use variance components in their raw form than to convert them into an ICC.
To clarify, think of the ICC as calculated within a mixed model. For a single-level mixed model with random group-level intercepts $b_i \sim N(0, \sigma^2_b)$ and within-group errors $\epsilon_{ij} \stackrel{\mathrm{iid}}{\sim} N(0, \sigma^2)$, $\sigma^2_b$ describes the amount of variation between teams and $\sigma^2$ describes variation within teams. Then, for a single team, we get a response covariance matrix of $\sigma^2 \mathbf{I} + \sigma^2_b \mathbf{1}\mathbf{1}'$ which when converted to a correlation matrix is $\frac{\sigma^2}{\sigma^2 + \sigma^2_b} \mathbf{I} + \frac{\sigma^2_b}{\sigma^2 + \sigma^2_b} \mathbf{1}\mathbf{1}'$. So, $\frac{\sigma^2_b}{\sigma^2 + \sigma^2_b} = \mathrm{ICC}$ describes the level of correlation between effectiveness responses within a team, but it sounds as though you may be more interested in $\sigma^2$ and $\sigma^2_b$, or perhaps $\frac{\sigma^2}{\sigma^2_b}$.
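If it helps, here is a sketch of pulling those variance components out of a fitted random-intercept model with the `lme4` package (`effectiveness`, `team` and the simulated data are placeholders):
```
library(lme4)
set.seed(1)
dat <- data.frame(team = factor(rep(1:30, each = 5)))
dat$effectiveness <- rnorm(30, sd = 1)[as.integer(dat$team)] + rnorm(150, sd = 2)

fit <- lmer(effectiveness ~ 1 + (1 | team), data = dat)
vc  <- as.data.frame(VarCorr(fit))
sigma2_b <- vc$vcov[vc$grp == "team"]       # between-team variance
sigma2   <- vc$vcov[vc$grp == "Residual"]   # within-team variance
c(sigma2_b = sigma2_b, sigma2 = sigma2, ICC = sigma2_b / (sigma2_b + sigma2))
```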
| null | CC BY-SA 2.5 | null | 2010-09-09T20:39:15.037 | 2010-09-09T20:39:15.037 | null | null | 1107 | null |
2525 | 2 | null | 363 | 12 | null | Long ago, Jack Kiefer's little monograph ["Introduction to Statistical Inference"](http://rads.stackoverflow.com/amzn/click/0387964207) peeled away the mystery of a great deal of classical statistics and helped me get started with the rest of the literature. I still refer to it and warmly recommend it to strong students in second-year stats courses.
| null | CC BY-SA 2.5 | null | 2010-09-09T22:52:29.493 | 2011-02-20T02:32:14.440 | 2011-02-20T02:32:14.440 | 159 | 919 | null |
2526 | 2 | null | 1520 | -1 | null | You're right about the expectation.
You actually also have the right answer to the probability of getting more than your original stake back, although not the right proof. Consider, instead of the raw amount of money you have, its base-2 logarithm. This turns out to be the number of times you've doubled your money, minus the number of times you've halved it. This is the sum $S_n$ of $n$ independent random variables, each equal to $+1$ or $-1$ with probability $1/2$. The probability that you want is the probability that this is positive. If $n$ is odd, then by symmetry it's exactly $1/2$; if $n$ is even (call it $2k$) then it's $1/2$ minus half the probability that $S_n = 0$. But $P(S_{2k} = 0) = {2k \choose k}/2^{2k}$, which approaches $0$ as $k \to \infty$.
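A quick numerical check of that last probability (sketch):
```
k <- c(1, 10, 100, 1000, 10000)
p0 <- exp(lchoose(2 * k, k) - 2 * k * log(2))        # P(S_2k = 0), on the log scale to avoid overflow
round(cbind(k, p0, stirling = 1 / sqrt(pi * k)), 5)  # tends to 0, roughly like 1/sqrt(pi*k)
```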
| null | CC BY-SA 2.5 | null | 2010-09-09T23:09:59.063 | 2010-09-09T23:09:59.063 | null | null | 98 | null |
2528 | 2 | null | 2516 | 17 | null | A (frequentist) hypothesis test, precisely, addresses the question of how likely the observed data, or something more extreme, would be if the null hypothesis were true. This interpretation is indifferent to sample size: it is valid whether the sample is of size 5 or 1,000,000.
An important caveat is that the test is only relevant to sampling errors. Any errors of measurement, sampling problems, coverage, data entry errors, etc. are outside the scope of sampling error. As sample size increases, non-sampling errors become more influential, as small departures can produce significant departures from the random sampling model. As a result, tests of significance become less useful.
This is in no way an indictment of significance testing, but we need to be careful about our attributions when the sample size is large. A result may be statistically significant, yet is that difference due to our hypothesized generating process vis-à-vis sampling error, or is it the result of any of a number of possible non-sampling errors that could influence the test statistic (which the statistic does not account for)?
Another consideration with large samples is the practical significance of a result. A significant test might suggest (even if we can rule out non-sampling error) a difference that is trivial in a practical sense. Even if that result is unlikely given the sampling model, is it significant in the context of the problem? Given a large enough sample, a difference in a few dollars might be enough to produce a result that is statistically significant when comparing income among two groups. Is this important in any meaningful sense? Statistical significance is no replacement for good judgment and subject matter knowledge.
As an aside, the null is neither true nor false. It is a model. It is an assumption. We assume the null is true and assess our sample in terms of that assumption. If our sample would be unlikely given this assumption, we place more trust in our alternative. To question whether or not a null is ever true in practice is a misunderstanding of the logic of significance testing.
| null | CC BY-SA 3.0 | null | 2010-09-10T03:51:12.823 | 2018-02-27T21:16:38.203 | 2018-02-27T21:16:38.203 | 485 | 485 | null |
2529 | 2 | null | 36 | 2 | null | The more fire engines sent to a fire, the bigger the damage.
| null | CC BY-SA 2.5 | null | 2010-09-10T07:40:27.233 | 2010-09-10T07:40:27.233 | null | null | 1048 | null |
2530 | 2 | null | 2513 | 3 | null | As far as I know, the Pitman test is formulated as:
$$F=\frac{SD_2}{SD_1} \quad\text{with}\quad SD_2 > SD_1$$
$$T=\frac{(F-1)\sqrt{n-2}}{2\sqrt{F(1-r^2)}} $$ with $r$ the correlation between the scores in sample 1 and sample 2. This is not equivalent to the formula you used and that is mentioned in the paper. I'm not positive about my formula either; I got it from a course somewhere (alas, no reference...).
Apart from that, it might be interesting to take a look at an alternative approach to dealing with regression to the mean. I found the [tutorial paper of Barnett et al.](http://ije.oxfordjournals.org/content/34/1/215.full) on regression to the mean very enlightening.
Now let's get back for a moment to 2-sided versus 1-sided p-values. Regardless of the formula you use, the sign of T depends only on the order of the SDs. (In fact, as I know the Pitman test, T is always positive.) Hence, the underlying distribution is, as far as I'm concerned, not the T distribution but half the T distribution, meaning you have to put the cutoff at $T_{0.975, df}$, but the related p-value originates from one tail only. This is equivalent to the standard F test for comparing variances.
| null | CC BY-SA 2.5 | null | 2010-09-10T09:52:48.877 | 2010-09-10T09:52:48.877 | null | null | 1124 | null |
2531 | 2 | null | 2516 | 5 | null | Hypothesis testing for large data should take the desired level of difference into account, rather than just whether there is a difference or not. You're not interested in the $H_0$ that the estimate is exactly 0. A general approach would be to test whether the difference between the null hypothesis and the observed value is larger than a given cut-off value.
A simple example with the t-test:
You can make the following assumptions for big sample sizes, given equal sample sizes and standard deviations in both groups, and $\bar{X}_1 > \bar{X}_2$:
$$T=\frac{\bar{X}_1-\bar{X}_2-\delta}{\sqrt{\frac{S^2}{n}}}+\frac{\delta}{\sqrt{\frac{S^2}{n}}} \approx N\left(\frac{\delta}{\sqrt{\frac{S^2}{n}}},1\right)$$
hence
$$T=\frac{\bar{X}_1-\bar{X}_2}{\sqrt{\frac{S^2}{n}}} \approx N\left(\frac{\delta}{\sqrt{\frac{S^2}{n}}},1\right)$$
as your null hypothesis $H_0:\bar{X}_1-\bar{X}_2 = \delta$ implies:
$$\frac{\bar{X}_1-\bar{X}_2-\delta}{\sqrt{\frac{S^2}{n}}}\approx N(0,1)$$
This you can easily use to test for a significant and relevant difference. In R you can make use of the noncentrality parameter of the t distribution to generalize this result for smaller sample sizes as well. You should take into account that this is a one-sided test; the alternative $H_A$ is $\bar{X}_1-\bar{X}_2 > \delta$.
```
mod.test <- function(x1,x2,dif,...){
avg.x1 <- mean(x1)
avg.x2 <- mean(x2)
sd.x1 <- sd(x1)
sd.x2 <- sd(x2)
sd.comb <- sqrt((sd.x1^2+sd.x2^2)/2)
n <- length(x1)
t.val <- (abs(avg.x1-avg.x2))*sqrt(n)/sd.comb
ncp <- (dif*sqrt(n)/sd.comb)
p.val <- pt(t.val,n-1,ncp=ncp,lower.tail=FALSE)
return(p.val)
}
n <- 5000
test1 <- replicate(100,
t.test(rnorm(n),rnorm(n,0.05))$p.value)
table(test1<0.05)
test2 <- replicate(100,
t.test(rnorm(n),rnorm(n,0.5))$p.value)
table(test2<0.05)
test3 <- replicate(100,
mod.test(rnorm(n),rnorm(n,0.05),dif=0.3))
table(test3<0.05)
test4 <- replicate(100,
mod.test(rnorm(n),rnorm(n,0.5),dif=0.3))
table(test4<0.05)
```
Which gives :
```
> table(test1<0.05)
FALSE TRUE
24 76
> table(test2<0.05)
TRUE
100
> table(test3<0.05)
FALSE
100
> table(test4<0.05)
TRUE
100
```
| null | CC BY-SA 2.5 | null | 2010-09-10T10:03:13.960 | 2010-09-10T13:45:11.413 | 2010-09-10T13:45:11.413 | 1124 | 1124 | null |
2532 | 2 | null | 36 | 5 | null | When $x_t$ and $y_t$ are stationary time series, correlation between $y_t$ and $x_{t-1}$ implies causation of $y_t$ by $x_{t-1}$. For some reason this is not mentioned here.
| null | CC BY-SA 2.5 | null | 2010-09-10T10:48:33.617 | 2010-09-10T10:48:33.617 | null | null | 603 | null |
2535 | 2 | null | 1646 | 1 | null | 1) With correlations, you can never really give sensible cut-offs, but I'd say the general rules for an ordinary correlation apply.
2) Regarding the appropriateness of the ICC : depending on the data, the ICC is equivalent to an F-test (see eg [Commenges & Jacqmin, 1994](http://www.jstor.org/stable/2533395) and [Kistner & Muller, 2004](http://worldcatlibraries.org/registry/gateway/?issn=0033-3123&aulast=Kistner&aufirst=Emily&date=2004&atitle=Exact%20Distributions%20of%20Intraclass%20Correlation%20and%20Cronbachs%20Alpha%20with%20Gaussian%20Data%20and%20General%20Covariance&title=Psychometrika&volume=69&issue=3&spage=459?)). So in essence, the mixed model framework can tell you at least as much about your hypothesis, and allows for simultaneously testing more hypotheses than the ICC.
Cronbach's $\alpha$ is also directly related to the ICC, and another measure that is (was?) often reported, albeit in the context of agreement between items within a group. This approach comes from psychological questionnaires, where a cut-off of 0.7 is rather often used to determine whether the questions really group into the studied factors.
| null | CC BY-SA 2.5 | null | 2010-09-10T11:28:41.017 | 2010-09-10T11:28:41.017 | null | null | 1124 | null |
2536 | 2 | null | 534 | 42 | null | I'll just add some additional comments about causality as viewed from an epidemiological perspective. Most of these arguments are taken from [Practical Psychiatric Epidemiology](http://occmed.oxfordjournals.org/content/56/6/434.2.full), by Prince et al. (2003).
Causation, or causal interpretation, is by far the most difficult aspect of epidemiological research. [Cohort](http://en.wikipedia.org/wiki/Cohort_study) and [cross-sectional](http://en.wikipedia.org/wiki/Cross-sectional_study) studies might both lead to confounding effects, for example. Quoting S. Menard (Longitudinal Research, Sage University Paper 76, 1991), H.B. Asher in Causal Modeling (Sage, 1976) initially proposed the following set of criteria to be fulfilled:
- The phenomena or variables in question must covary, as indicated for example by differences between experimental and control groups or by nonzero correlation between the two variables.
- The relationship must not be attributable to any other variable or set of variables, i.e., it must not be spurious, but must persist even when other variables are controlled, as indicated for example by successful randomization in an experimental design (no difference between experimental and control groups prior to treatment) or by a nonzero partial correlation between two variables with other variables held constant.
- The supposed cause must precede or be simultaneous with the supposed effect in time, as indicated by the change in the cause occurring no later than the associated change in the effect.
While the first two criteria can easily be checked using a cross-sectional or time-ordered cross-sectional study, the latter can only be assessed with longitudinal data, except for biological or genetic characteristics for which temporal order can be assumed without longitudinal data. Of course, the situation becomes more complex in case of a non-recursive causal relationship.
I also like the following illustration (Chapter 13 in the aforementioned reference), which summarizes the approach promulgated by Hill (1965) and includes 9 different criteria related to causal effects, as also cited by @James. The original article was indeed entitled "The environment and disease: association or causation?" ([PDF version](http://epiville.ccnmtl.columbia.edu/assets/pdfs/Hill_1965.pdf)).

Finally, Chapter 2 of Rothman's most famous book, Modern Epidemiology (1998, Lippincott Williams & Wilkins, 2nd Edition), offers a very complete discussion around causation and causal inference, both from a statistical and philosophical perspective.
I'd like to add that the following references (roughly taken from an online course in epidemiology) are also very interesting:
- Swaen, G and van Amelsvoort, L (2009). A weight of evidence approach to causal inference. Journal of Clinical Epidemiology, 62, 270-277.
- Botti, C, Comba, P, Forastiere, F, and Settimi, L (1996). Causal inference in environmental epidemiology. the role of implicit values. The Science of the Total Environment, 184, 97-101.
- Weed, DL (2002). Environmental epidemiology. Basics and proof of cause effect. Toxicology, 181-182, 399-403.
- Franco, EL, Correa, P, Santella, RM, Wu, X, Goodman, SN, and Petersen, GM (2004). Role and limitations of epidemiology in establishing a causal association. Seminars in Cancer Biology, 14, 413–426.
Finally, this review offers a larger perspective on causal modeling, [Causal inference in statistics: An overview](http://www.i-journals.org/ss/viewarticle.php?id=57) (J Pearl, SS 2009 (3)).
| null | CC BY-SA 2.5 | null | 2010-09-10T12:33:09.340 | 2010-10-02T17:02:16.690 | 2010-10-02T17:02:16.690 | 930 | 930 | null |
2537 | 1 | 2538 | null | 4 | 3566 | Say we have 5 items, and people are asked which item they like. Multiple answers are possible, but also no answer is possible. The people are categorized according to factors like gender, age, and so on. One possible approach to analyze the differences between genders, age groups and so on is making use of the [Generalized Estimating Equations](http://en.wikipedia.org/wiki/Generalized_estimating_equation).
I thus construct a dataset with a binary variable indicating whether the item was liked or not, and as predictor variables I have the items, the person id, the age,... i.e. :
```
Liked Item Person ...
0 1 1
1 2 1
0 3 1
0 4 1
1 5 1
1 1 2
...
```
Then I apply a model with following form :
$$Liked = Item + Gender + Item*Gender + Age + Item*Age + ... $$
with Person as random factor (called id in some applications) and a logit link function.
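For concreteness, this is roughly the kind of call I have in mind with geepack (just a sketch; `dat` stands for the long-format data above, and the exchangeable working correlation is only one possible choice):
```
library(geepack)
## one row per person x item, as in the table above
fit <- geeglm(Liked ~ Item * Gender + Item * Age,
              id     = Person,
              family = binomial(link = "logit"),
              corstr = "exchangeable",
              data   = dat)
summary(fit)   # coefficients with robust (sandwich) standard errors
```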
Now I would like to give confidence intervals on the conditional fractions, i.e. the confidence interval of the fractions of males and females that liked a particular item, corrected for age differences and the like. I know I could use the estimated coefficients to get the results I want, but I'm a bit lost in how to do that. I can calculate the estimated odds, but I'm not sure about the standard error (SE) of those odds based on the SE of the coefficients. I'm not sure how to deal with the random component of the variance, for example.
So:
1) Any pointers on how to calculate that SE from the SE of the coefficients?
2) Any alternatives for an approach? I've been thinking about mixed models, but a colleague directed me to GEE as more appropriate for these data. Your ideas?
---
Edit : for practical pointers, I'm using [geepack](http://cran.r-project.org/web/packages/geepack/index.html) in R for this. I tried `effect()`, but the option `se.fit=T` is not implemented. In any case, that would give the SE for every observation, which is not what I'm interested in.
| Confidence intervals on differences in choices in a GEE framework: methods and alternatives? | CC BY-SA 2.5 | null | 2010-09-10T12:51:35.787 | 2010-11-01T11:02:10.263 | 2010-11-01T11:02:10.263 | 930 | 1124 | [
"confidence-interval",
"estimation",
"mixed-model",
"psychometrics",
"random-effects-model"
] |
2538 | 2 | null | 2537 | 5 | null | Well, the [gee](http://cran.r-project.org/web/packages/gee/index.html) package includes facilities for fitting GEE, and `gee()` returns asymptotic and robust SEs. I never used the [geepack](http://cran.r-project.org/web/packages/geepack/index.html) package. From what I saw in the online example, the output seems to resemble more or less that of `gee`. To compute $100(1-\alpha)\%$ CIs for your main effects (e.g. gender), why not use the robust SE (in the following I will assume it is extracted from, say `summary(gee.fit)`, and stored in a variable `rob.se`)? I suppose that
```
exp(coef(gee.fit)["gender"]+c(-1,1)*rob.se*qnorm(0.975))
```
should yield 95% CIs expressed on the odds scale.
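Since you mention you are actually using geepack, here is a hedged sketch of the same idea with a `geeglm` fit (I am assuming the robust SE shows up in the summary under the column `Std.err`; check your version):
```
## gee.fit assumed fitted with geepack::geeglm(..., family = binomial)
cf  <- summary(gee.fit)$coefficients      # Estimate, Std.err, Wald, Pr(>|W|)
est <- cf["gender", "Estimate"]
se  <- cf["gender", "Std.err"]
exp(est + c(-1, 1) * qnorm(0.975) * se)   # 95% CI on the odds(-ratio) scale
```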
Now, in fact I rarely use GEE except when I am working with binary endpoints in longitudinal studies, because it's easy to pass or estimate a given working correlation matrix. In the case you summarize here, I would rather rely on an IRT model for dichotomous items (see the [psychometrics](http://cran.r-project.org/web/views/Psychometrics.html) task view), or (it is quite the same in fact) a mixed-effects GLM such as the one proposed in the [lme4](http://cran.r-project.org/web/packages/lme4/index.html) package, from Doug Bates. For a study like yours, as you said, subjects will be considered as random effects, and your other covariates enter the model as fixed effects; the response is the 0/1 rating on each item (the items enter the model as well). Then you will get 95% CIs for the fixed effects, either from the SEs computed as `sqrt(diag(vcov(glmm.fit)))` or as read in `summary(glmm.fit)`, or using `confint()` together with an `lmList` object. Doug Bates gave nice illustrations in the following two papers/handouts:
- Estimating the Multilevel Rasch Model: With the lme4 Package (JSS, 2007 20(2))
- Item Response Models as GLMMs (from a workshop at UseR 2008)
There is also a discussion about profiling `lmer` fits (based on the profile deviance) to investigate variability in fixed effects, but I didn't investigate that point. I think it is still in section 1.5 of Doug's [draft on mixed models](http://lme4.r-forge.r-project.org/book/). There is a lot of discussion about computing SEs and CIs for GLMMs as implemented in the `lme4` package (whose interface differs from the previous `nlme` package), so you will easily find other interesting threads after googling about that.
It's not clear to me why GEE would have to be preferred in this particular case. Maybe look at the R translation of Agresti's book by Laura Thompson, [R (and S-PLUS) Manual to Accompany Agresti's Categorical Data](https://home.comcast.net/~lthompson221/Splusdiscrete2.pdf).
---
Update:
I just realized that the above solution would only work if you're interested in getting a confidence interval for the gender effect alone. If it is the interaction item*gender that is of concern, you have to model it explicitly in the GLMM (my second reference from Bates has an example of how to do it with `lmer`).
Another solution is to use an explanatory IRT model, where you explicitly acknowledge the potential effect of person covariates, like gender or age, and consider fitting them within a Rasch model, for example. This is called a Latent Regression Rasch Model, and is fully described in de Boeck and Wilson's book, Explanatory item response models: a generalized linear and nonlinear approach (Springer, 2004), which you can read online on [Google books](http://books.google.fr/books?id=pDeLy5L14mAC&pg=PA66&lpg=PA66&dq=latent+regression+Rasch+model&source=bl&ots=Ivct9snqQO&sig=okSBDj9TAe5gZHIcMDHOQxvAbZU&hl=fr&ei=q0yKTPlCgYDgBq-CrdoK&sa=X&oi=book_result&ct=result&resnum=7&ved=0CEMQ6AEwBg#v=onepage&q=latent%20regression%20Rasch%20model&f=false) (section 2.4). There are some facilities to fit this kind of model in Stata (see [there](http://www.gllamm.org/aggression.html)). In R, we can mimic such model with a mixed-effects approach; a toy example would look something like
```
library(lme4)
glmer(response ~ 0 + Age + Sex + item + (Sex | id), data = df, family = binomial)  # glmer() in current lme4; older versions used lmer(..., family = binomial)
```
if I remember correctly. I'm not sure whether the [eRm](http://cran.r-project.org/web/packages/eRm/index.html) package allows one to easily incorporate person covariates (because we need to construct a specific design matrix), but it may be worth checking out since it provides 95% CIs too.
| null | CC BY-SA 2.5 | null | 2010-09-10T14:46:40.440 | 2010-09-10T15:37:00.850 | 2010-09-10T15:37:00.850 | 930 | 930 | null |
2539 | 2 | null | 2483 | 3 | null | I am going to give a simple answer in response to your edit. If my answer does not meet your real needs, please say so in the comments and I will change/extend it a bit.
Let:
$S$ be the severity of the event of interest with higher numbers representing more severity
$W$ be the warning generated by the system.
The range of values that these variables can take is assumed to be from 1 to $n$.
Either via an experiment or via an observational study you have a set of observations about actual events and the warnings generated by the system. Thus, you can calculate the following empirical probabilities:
$P(W=w | S=s)$ which is the probability that the system generates warning $w$ when the event is $s$.
From a decision making perspective, you really want to know:
$P(S=s | W=w)$
In other words, you want to know the probability of different events occurring in the near future given that the system has given you a specific warning.
You can compute the required probabilities using [Bayes theorem](http://en.wikipedia.org/wiki/Bayes_theorem). Briefly, the calculation is:
$P(S=s | W=w) = \frac{P(W=w | S=s) P(S=s)}{\sum_{s'} P(W=w | S=s') P(S=s')}$
From your study you know:
$P(W=w | S=s)$ = Proportion of time the system generated warning $w$ when the event was $s$,
In order to compute the required probability, $P(S=s | W=w)$ you need to have some sense of $P(S=s)$ which you can estimate or guess based on previous experience.
You can then assess the accuracy of the device by looking at the following probabilities: $P(S=s|W=s)$.
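A minimal numerical sketch of the whole calculation in R (all probabilities below are made up purely for illustration):
```
## P(W = w | S = s): rows are true severities s, columns are warnings w
p_w_given_s <- matrix(c(0.7, 0.2, 0.1,
                        0.2, 0.6, 0.2,
                        0.1, 0.2, 0.7), nrow = 3, byrow = TRUE)
p_s <- c(0.5, 0.3, 0.2)                        # guessed prior P(S = s)

joint       <- p_w_given_s * p_s               # P(S = s, W = w): row i scaled by p_s[i]
p_s_given_w <- sweep(joint, 2, colSums(joint), "/")   # Bayes: divide each column by P(W = w)
p_s_given_w          # column w holds P(S = s | W = w)
diag(p_s_given_w)    # P(S = s | W = s), one accuracy figure per severity level
```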
| null | CC BY-SA 2.5 | null | 2010-09-10T16:30:12.310 | 2010-09-10T18:25:12.273 | 2010-09-10T18:25:12.273 | null | null | null |
2540 | 2 | null | 1520 | 1 | null | Let $S_k$ be the wealth after $k$ plays of this game, where we assume $S_0 = 1.$ The temptation here is to take $X_k = \log{S_k}$, and study $X_k$ as a symmetric random walk, with innovations of size $\pm \log{2}$. This, as it turns out, will be fine for the second question, but not the first. A bit of work will show that, asymptotically we have $X_k \sim \mathcal{N}(0,k(\log{2})^2)$. From this you cannot conclude that $S_k$ is asymptotically log normally distributed with $\mu = 0, \sigma = \log{2}\sqrt{k}.$ The log operation does not commute with the limit. If it did, you would get the expected value of $S_k$ as $\exp{(k \log{2}\log{2}/2)}$, which is nearly correct, but not quite.
However, this method is just fine for finding quantiles of $S_k$, and other questions of probability, like question (2). We have $S_k \ge (\frac{5}{4})^k \Leftrightarrow X_k \ge k \log{(5/4)} \Leftrightarrow X_k / \sqrt{k}\log{2} \ge \sqrt{k}\log{(5/4})/\log{2}.$ The quantity on the lefthand side of the last inequality is, asymptotically, a standard normal, and so the probability that $S_k$ exceeds its mean approaches $1 - \Phi{(\sqrt{k}\log{(5/4)}/\log{2})},$ where $\Phi$ is the CDF of the standard normal. This approaches zero fairly quickly.
Matlab code to check this:
```
top_k = 512;
nsamps = 8192;
innovs = log(2) * cumsum(sign(randn(top_k,nsamps)),1);
s_k = exp(innovs);
k_vals = (1:top_k)';
mean_v = (5/4) .^ k_vals;
exceed = bsxfun(@ge,s_k,mean_v);
prob_g = mean(double(exceed),2);
%theoretical value
%(can you believe matlab doesn't come with normal cdf function!?)
nrmcdf = @(x)((1 + erf(x / sqrt(2)))/2);
p_thry = 1 - nrmcdf(sqrt(k_vals) * log(5/4) / log(2));
loglog(k_vals,prob_g,'b-',k_vals,p_thry,'r-');
legend('empirical probability','theoretical probability');
```
the graph produced:

| null | CC BY-SA 2.5 | null | 2010-09-10T16:32:41.367 | 2010-09-11T21:21:57.313 | 2010-09-11T21:21:57.313 | 795 | 795 | null |
2541 | 1 | 2542 | null | 51 | 107639 | I have read/heard many times that a sample size of at least 30 units is considered a "large sample" (normality assumptions of means usually approximately hold due to the CLT, ...). Therefore, in my experiments, I usually generate samples of 30 units. Can you please give me some reference that should be cited when using a sample size of 30?
| What references should be cited to support using 30 as a large enough sample size? | CC BY-SA 3.0 | null | 2010-09-10T17:07:39.130 | 2022-03-15T23:55:20.267 | 2017-12-04T09:46:55.683 | 28666 | 1215 | [
"references",
"sample-size",
"normality-assumption",
"central-limit-theorem",
"rule-of-thumb"
] |
2542 | 2 | null | 2541 | 45 | null | The choice of n = 30 for a boundary between small and large samples is a rule of thumb, only. There is a large number of books that quote (around) this value, for example, Hogg and Tanis' Probability and Statistical Inference (7e) says "greater than 25 or 30".
That said, the story told to me was that the only reason 30 was regarded as a good boundary was that it allowed the Student's t tables in the back of textbooks to fit nicely on one page. That, and the critical values of Student's t and the Normal differ by only about 0.25 at most, anyway, over the range from df = 30 to df = infinity. For hand computation the difference didn't really matter.
Nowadays it is easy to compute critical values for all sorts of things to 15 decimal places. On top of that we have resampling and permutation methods for which we aren't even restricted to parametric population distributions.
In practice I never rely on n = 30. Plot the data. Superimpose a normal distribution, if you like. Visually assess whether a normal approximation is appropriate (and ask whether an approximation is even really needed). If generating samples for research and an approximation is obligatory, generate enough of a sample size to make the approximation as close as desired (or as close as computationally feasible).
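To make the two points above concrete, a small sketch (the size of the t-vs-normal discrepancy depends on how far out in the tail you look):
```
## gap between Student-t (df = 30) and normal critical values
alpha <- c(0.05, 0.01, 0.001)
qt(1 - alpha / 2, df = 30) - qnorm(1 - alpha / 2)   # roughly 0.08, 0.17, 0.36

## visual check instead of trusting n = 30
y <- rexp(30)                                       # stand-in for your data
hist(y, freq = FALSE, col = "grey")
curve(dnorm(x, mean = mean(y), sd = sd(y)), add = TRUE, lwd = 2)
qqnorm(y); qqline(y)                                # or a normal Q-Q plot
```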
| null | CC BY-SA 2.5 | null | 2010-09-10T17:44:58.943 | 2010-09-10T17:44:58.943 | null | null | null | null |
2543 | 2 | null | 146 | 3 | null | If your data is high dimensional and noisy, and you don't have a large number of sample, you run into the danger of overfitting. In such cases, it does make sense to use PCA (which can capture a dominant part of data variance; orthogonality isn't an issue) or factor analysis (which can find the true explanatory variables underlying the data) to reduce data dimensionality and then train a regression model with them.
For factor analysis based approaches, see this paper [Bayesian Factor Regression Model](http://ftp.isds.duke.edu/WorkingPapers/02-12.pdf), and a [nonparametric Bayesian version of this model](http://www.umiacs.umd.edu/~hal/docs/daume08ihfrm.pdf) that does not assume that you a priori know the "true" number of relevant factors (or principal components in case of PCA).
I'd add that in many cases, supervised dimensionality reduction (e.g., [Fisher Discriminant Analysis](http://en.wikipedia.org/wiki/Linear_discriminant_analysis)) can give improvements over simple PCA or FA based approaches, because you can make use of the label information while doing dimensionality reduction.
| null | CC BY-SA 2.5 | null | 2010-09-10T18:14:48.727 | 2010-09-10T18:14:48.727 | null | null | 881 | null |
2544 | 2 | null | 2541 | 9 | null | IMO, it all depends on what you want to use your sample for. Two "silly" examples to illustrate what I mean: If you need to estimate a mean, 30 observations is more than enough. If you need to estimate a linear regression with 100 predictors, 30 observations will not be close to enough.
| null | CC BY-SA 2.5 | null | 2010-09-10T18:41:42.803 | 2010-09-10T18:41:42.803 | null | null | 1204 | null |
2545 | 2 | null | 2541 | 10 | null | Mostly an arbitrary rule of thumb. This statement depends on a number of factors to be true, for example on the distribution of the data. If the data come from a Cauchy distribution, for example, even 30^30 observations are not enough to estimate the mean (in that case even an infinite number of observations would not be enough to make $\bar{\mu}^{(n)}$ converge). This number (30) also fails if the values you draw are not independent from one another (again, you may have no convergence at all, regardless of sample size).
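A hedged illustration of the Cauchy remark: plot the running mean of a Cauchy sample and it never settles down, no matter how long you keep sampling.
```
set.seed(42)
x <- rcauchy(1e5)
plot(cumsum(x) / seq_along(x), type = "l", log = "x",
     xlab = "n", ylab = "running mean",
     main = "Cauchy data: the sample mean does not converge")
abline(h = 0, lty = 2)   # compare with the same plot for rnorm(1e5)
```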
More generally, the CLT needs essentially two pillars to hold:
- That the random variables are independent: that you can re-order your observations without losing any information.
- That the r.v. come from a distribution with finite second moments: meaning that the classical estimators of mean and s.d. tend to converge as sample size increases.
(Both these conditions can be somewhat weakened, but the differences are largely of a theoretical nature.)
| null | CC BY-SA 3.0 | null | 2010-09-10T19:05:05.227 | 2017-12-04T09:48:55.993 | 2017-12-04T09:48:55.993 | 603 | 603 | null |
2546 | 2 | null | 2541 | 52 | null | Actually, the "magic number" 30 is a fallacy. See Jacob Cohen's delightful paper, [Things I Have Learned (So Far) (Am. Psych. December 1990 45 #12, pp 1304-1312)](https://pdfs.semanticscholar.org/fa77/0a7fb7c45a59abbc4c2bc7d174fa51e5d946.pdf). This myth is his first example of how "some things you learn aren't so".
>
[O]ne of my fellow doctoral candidates undertook a dissertation [with] a sample of only 20 cases per group. ... [L]ater I discovered ... that for a two-independent-group-mean comparison with $n = 30$ per group at the sanctified two-tailed $.05$ level, the probability that a medium-sized effect would be labeled as significant by ... a t test was only $.47$. Thus, it was approximately a coin flip whether one would get a significant result, even though, in reality, the effect size was meaningful. ... [My friend] ended up with nonsignificant results–with which he proceeded to demolish an important branch of psychoanalytic theory.
| null | CC BY-SA 4.0 | null | 2010-09-10T19:42:16.590 | 2019-05-16T11:14:23.453 | 2020-06-11T14:32:37.003 | -1 | 666 | null |
2547 | 1 | 2550 | null | 50 | 96987 |
- If you look at Wolfram Alpha

- Or this Wikipedia page List of countries by median age

Clearly [median](http://en.wikipedia.org/wiki/Median) seems to be the statistic of choice when it comes to ages.
I am not able to explain to myself why [arithmetic mean](http://en.wikipedia.org/wiki/Arithmetic_mean) would be a worse statistic. Why is it so?
Originally posted [here](https://math.stackexchange.com/questions/4380/why-is-median-age-a-better-statistic-than-mean-age) because I did not know this site existed.
| Why is median age a better statistic than mean age? | CC BY-SA 2.5 | null | 2010-09-10T20:26:56.997 | 2020-07-25T12:06:36.090 | 2020-07-25T12:06:36.090 | 7290 | 285 | [
"mean",
"median",
"types-of-averages"
] |
2548 | 2 | null | 2547 | 18 | null | John gave you a good answer on the sister site.
One aspect he didn't mention explicitly is robustness: the median does better than the mean as a measure of central location because it has a high breakdown point (50%), whereas the mean has the lowest possible one, 0% (see Wikipedia for details).
Intuitively, it means that individual bad observations do not skew the median whereas they do for the mean.
| null | CC BY-SA 2.5 | null | 2010-09-10T20:48:49.707 | 2010-09-10T20:48:49.707 | null | null | 334 | null |
2549 | 2 | null | 2547 | 1 | null | John's answer on math.stackexchange can be viewed as the following:
>
When you have a skewed distribution the median may be a better summary statistic than the mean.
Note that when he says that there are more infants than adults he essentially is suggesting that the age distribution is a skewed distribution.
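A two-line illustration of that point with a right-skewed "age-like" distribution (simulated, purely for illustration):
```
set.seed(1)
ages <- rgamma(1e5, shape = 2, scale = 15)     # right-skewed, true mean = 30
c(mean = mean(ages), median = median(ages))    # mean ~ 30, median ~ 25
```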
| null | CC BY-SA 2.5 | null | 2010-09-10T21:33:58.017 | 2010-09-10T21:33:58.017 | null | null | null | null |
2550 | 2 | null | 2547 | 53 | null | Statistics does not provide a good answer to this question, in my opinion. A mean can be relevant in mortality studies for example, but ages are not as easy to measure as you might think. Older people, illiterate people, and people in some third-world countries tend to round their ages to a multiple of 5 or 10, for instance.
The median is more resistant to such errors than the mean. Moreover, median ages are typically 20 – 40, but people can live to 100 and more (an increasing and noticeable proportion of the population of modern countries now lives beyond 100). People of such age have 1.5 to 4 times the influence on the mean than they do on the median compared to very young people. Thus, the median is a bit more up-to-date statistic concerning a country's age distribution and is a little more independent of mortality rates and life expectancy than the mean is.
Finally, the median gives us a slightly better picture of what the age distribution itself looks like: when you see a median of 35, for example, you know that half the population is older than 35 and you can infer some things about birth rates, ages of parents, and so on; but if the mean is 35, you can't say as much, because that 35 could be influenced by a large population bulge at age 70, for example, or perhaps a population gap in some age range due to an old war or epidemic.
Thus, for demographic, not statistical, reasons, a median appears more worthy of the role of an omnibus value for summarizing the ages of relatively large populations of people.
| null | CC BY-SA 3.0 | null | 2010-09-10T22:17:24.443 | 2016-10-22T23:06:11.547 | 2016-10-22T23:06:11.547 | 92454 | 919 | null |
2551 | 2 | null | 726 | 11 | null | >
There is no free hunch.
-- Robert Abelson
| null | CC BY-SA 2.5 | null | 2010-09-10T23:05:50.640 | 2010-09-10T23:05:50.640 | null | null | 666 | null |
2552 | 2 | null | 1908 | 1 | null | [European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD)](http://www.ecmlpkdd2010.org/)
To see the type of papers presented at the conference, see the videos of the last conference on [videolectures.net](http://videolectures.net/ecmlpkdd09_bled/)
| null | CC BY-SA 2.5 | null | 2010-09-10T23:50:38.213 | 2010-09-10T23:50:38.213 | null | null | 1215 | null |
2553 | 2 | null | 2547 | 7 | null | Why is an axe better than a hatchet?
That's similar to your question. They just mean and do different things. If one is talking about medians then the story they are trying to convey, the model they are trying to apply to the data, is different than one with means.
| null | CC BY-SA 2.5 | null | 2010-09-11T00:30:30.430 | 2010-09-11T00:30:30.430 | null | null | 601 | null |
2555 | 1 | 2558 | null | 3 | 618 | In "On Gaussian-like Densities of Order Greater than Two" (Willett, P. and Thomas, J. B., 1987), section II, the authors state:
$\mathcal{N}(x,y,\rho)=\phi(x)\phi(y)\sum_{n=0}^{\infty}\rho^nH_n(x)H_n(y)$
where
- $\phi(.)$ is the unit normal density,
- $H_n(.)$ is the $n^{th}$ Hermite polynomial,
- $\mathcal{N}(x,y,\rho)$ is the bivariate Gaussian density with correlation $\rho$.
I tried to repeat this (approximately, that is up to the 10th Hermite polynomial) in R:
```
library(PolynomF)
x<-polynom()
H<-polylist(1,x);for(n in 2:10) H[[n+1]]<-x*H[[n]]-(n-1)*H[[n-1]]
Hp<-as.function(H)
#The Hermite polynomial of order 10
rho<-0.7
R<-c();for(n in 0:10) R[[n+1]]<-rho^n
HPA<-function(z) prod(dnorm(z))*sum(Hp(z[1])*Hp(z[2])*R)
z<-runif(2)
HPA(z)
```
Which is completely off the mark (i.e. for $z=(0.65,0.63)$ I get 1194). What am I doing wrong?
| Issues using Hermite approximation to bivariate Gaussian in R | CC BY-SA 2.5 | null | 2010-09-11T01:13:44.640 | 2010-09-11T04:44:51.973 | 2010-09-11T01:39:26.823 | 603 | 603 | [
"approximation"
] |
2558 | 2 | null | 2555 | 5 | null | It would appear that you neglected to normalize your Hermite polynomials. Try this:
```
library(PolynomF)
x <- polynom()
H <- polylist(1,x); for(n in 2:10) H[[n+1]] <- x*H[[n]] - (n-1)*H[[n-1]]
for(n in 1:11) H[[n]] <- H[[n]]*exp(-lgamma(n)/2)
Hp <- as.function(H)
#The (normalized) Hermite polynomial of order 10
rho <- 0.7
R<-c(); for(n in 0:10) R[[n+1]] <- rho^n
HPA <- function(z) prod(dnorm(z))*sum(Hp(z[1])*Hp(z[2])*R)
set.seed(1)
z <- runif(2)
HPA(z)
```
Now have a look at what it should be:
```
library(mvtnorm)
sigma <- matrix(c(1, rho, rho, 1), ncol = 2)
dmvnorm(z, sigma = sigma)
```
which is only off in the thousandths place on my machine.
Also, you may be interested in the orthopolynom package for another way to generate normalized Hermite polynomials.
| null | CC BY-SA 2.5 | null | 2010-09-11T04:44:51.973 | 2010-09-11T04:44:51.973 | null | null | null | null |
2559 | 1 | 2570 | null | 7 | 912 | The following paper describes an implementation of R in parallel on a graphics processing unit (GPU).
- Buckner et al., The gputools package enables GPU computing in R, BIOINFORMATICS, Vol. 26 no. 1 2010, pages 134–135
In the experimental section, on a 4-core computer, they compare the performance of the program running on the GPU with the performance without the GPU. They write the following:
>
We chose to use a single thread of the R environment in our test, since this is the way that most users interact with R.
So, the authors find the baseline by running their experiments using a single core (running in serial).
But the experimental conditions for the GPU side are unclear (to me). When using a GPU, for efficiency we should simultaneously make use of the CPU cores. If the authors used the remaining CPU cores in the computer (which would be sensible to do in an optimised algorithm), then the speedup would be based on the additional CPU cores as well as the GPU over the baseline (and thus be artificially inflated by a factor slightly less than 4).
>
How should this experiment be interpreted?
In particular, I would like to know if my above interpretation is the correct one, and if so, what does this experiment actually tell us.
| Gputools for R: how to interpret the experimental procedure? | CC BY-SA 2.5 | null | 2010-09-11T07:52:28.570 | 2010-09-16T12:27:58.123 | 2010-09-11T08:45:18.317 | 8 | 386 | [
"r",
"gpu",
"parallel-computing"
] |
2560 | 2 | null | 2559 | 7 | null | There is a fundamental difference between parallel computing on CPUs and on GPUs. Essentially, the [CPU](http://en.wikipedia.org/wiki/Central_processing_unit) has been designed to do clever things on behalf of the programmer, for example [Instruction level parallelism](http://en.wikipedia.org/wiki/Central_processing_unit#Instruction_level_parallelism). The [GPU](http://en.wikipedia.org/wiki/Graphics_processing_unit), on the other hand, strips away this useful stuff and instead contains many more cores. It's a trade-off between the processor helping you out and giving you more cores. Therefore, to use the GPU effectively, you need to submit as many threads as memory allows. The reason is that the GPU doesn't do any clever scheduling, so when it requests data for one thread, you want another one in the thread queue waiting to take over.
Example
Suppose you have a `for` loop that you want to make parallel:
```
#f(i) does not depend on f(j)
#for any j != i
for(i in 1:100000)
w[i] = f(i)
```
You can submit N = 100000 threads (spread over the available cores) to the GPU. Now you may think that you could let n threads be done on the multi-core CPU, but:
- There's quite a lot of extra programming baggage for, at best, little gain. GPU programming is hard (at least I think so), so combining it with multi-core CPUs is something you want to avoid.
- The f(i) that you submit to the GPU tends to be a very simple function, say multiplying two elements of a matrix together.
- You will get a time penalty if you use the GPU and CPU together since they both have to ask each other if they are finished.
- By reducing the number of threads used on the GPU, you could easily be reducing efficiency, i.e. it takes the same amount of time to do N-n operations as it does for N operations!
Of course there are situations when you may want to use both the GPU and CPU, but typically you don't use them for the same operation.
---
Unfortunately, I don't have access to the paper at the moment. It's on my (long) list of things to read. So the above is more a general discussion on CPU and GPUs. I'll try and read it in the next day or two.
| null | CC BY-SA 2.5 | null | 2010-09-11T09:10:29.047 | 2010-09-11T09:10:29.047 | null | null | 8 | null |
2561 | 2 | null | 726 | 12 | null | >
At their best, graphics are instruments for reasoning.
Edward Tufte, www.edwardtufte.com
| null | CC BY-SA 2.5 | null | 2010-09-11T10:26:16.900 | 2010-09-11T21:12:24.913 | 2010-09-11T21:12:24.913 | 930 | 930 | null |
2562 | 2 | null | 726 | 17 | null | May I add this one, because I like Jan's contributions to psychometrics and statistics...
>
Causal interpretation of the results
of regression analysis of
observational data is a risky
business. The responsibility rests
entirely on the shoulders of the
researcher, because the shoulders of
the statistical technique cannot carry
such strong inferences.
Jan de Leeuw, homepage
| null | CC BY-SA 2.5 | null | 2010-09-11T10:29:14.257 | 2010-09-11T10:29:14.257 | null | null | 930 | null |
2563 | 1 | 2571 | null | 4 | 5892 | What are some of the best ranking algorithms with inputs as up and down votes?
| What are some of the best ranking algorithms with inputs as up and down votes? | CC BY-SA 2.5 | null | 2010-09-11T13:15:03.890 | 2012-07-07T23:14:55.627 | null | null | 1248 | [
"algorithms"
] |
2564 | 2 | null | 726 | 26 | null | >
We are drowning in information and starving for knowledge.
Rutherford D. Roger
| null | CC BY-SA 2.5 | null | 2010-09-11T13:50:33.270 | 2010-09-11T13:50:33.270 | null | null | 961 | null |
2565 | 2 | null | 2547 | 14 | null | Here's my answer first posted on math.stackexchange:
Median is what many people actually have in mind when they say "mean." It's easier to interpret the median: half the population is above this age and half are below. Mean is a little more subtle.
People look for symmetry and sometimes impose symmetry when it isn't there. The age distribution in a population is far from symmetric, so the mean could be misleading. Age distributions are something like a pyramid. Lots of children, not many elderly. (Or at least that's how it is in a sort of steady state. In the US, the post-WWII baby boom generation has distorted this distribution as they age. Some people have called this "squaring the pyramid" because the boomers have made the top of the pyramid wider than it has been in the past.)
With an asymmetrical distribution, it may be better to report the median because it is a symmetrical statistic. The median is symmetrical even if the sampling distribution isn't.
| null | CC BY-SA 2.5 | null | 2010-09-11T14:20:43.827 | 2010-09-11T14:20:43.827 | null | null | 319 | null |
2566 | 2 | null | 726 | 32 | null | >
…the statistician knows…that in nature
there never was a normal distribution,
there never was a straight line, yet
with normal and linear assumptions,
known to be false, he can often derive
results which match, to a useful
approximation, those found in the real
world.
George Box (JASA, 1976, Vol. 71, 791-799)
| null | CC BY-SA 2.5 | null | 2010-09-11T15:18:26.880 | 2010-09-11T15:18:26.880 | null | null | 1107 | null |
2567 | 2 | null | 726 | 9 | null | >
The business of the statistician is to
catalyze the scientific learning
process.
George Box
| null | CC BY-SA 2.5 | null | 2010-09-11T15:27:50.037 | 2010-09-11T15:27:50.037 | null | null | 1107 | null |
2569 | 2 | null | 726 | 6 | null | >
We statisticians, as a police of
science (a label some dislike but I am
proud of...), have
the fundamental duty of helping others
to engage in statistical thinking as a
necessary step of scientific inquiry
and evidence-based policy formulation.
In order to truly fulfill this task,
we must constantly firm up and deepen
our own foundation, and resist the
temptation of competing for “methods
and results” without pondering deeply
whether we are helping others or
actually harming them by effectively
encouraging more false discoveries or
misguided policies. Otherwise, we
indeed can lose our identity, no
matter how much we are desired or
feared now.
Xiao-Li Meng
| null | CC BY-SA 2.5 | null | 2010-09-11T15:37:55.277 | 2010-09-11T15:37:55.277 | null | null | 1107 | null |
2570 | 2 | null | 2559 | 10 | null | >
But, the experimental conditions for the GPU side is unclear (to me). When using a GPU, for efficiency we should simultaneously make use of the CPUs.
That is not generally true, and in particular is not true for the [gputools](http://cran.r-project.org/package=gputools) R package which offers an 'everything to the GPU' mode with new functions `gpuMatMult()`, `gpuQr()`, `gpuCor()` etc. In other words, it offers you new functions that shift the computations completely to the GPU.
But your intuition is good. There should be a mixed mode with hybrid operations between the GPU and the CPU -- and the [Magma](http://icl.cs.utk.edu/magma/) library aims to offer just that. Better still, the [magma](http://cran.r-project.org/package=magma) R package brings this to R.
Moreover, I have a benchmarking paper / vignette / small package almost completed that compares these as well as several BLAS such as Atlas, Goto and the MKL. I'll update this entry with a URL in a couple of days.
Edit on 16 Sep: The paper I mentioned is now out and on CRAN with its own package [gcbd](http://cran.r-project.org/package=gcbd), and I wrote a [brief blog entry](http://dirk.eddelbuettel.com/blog/2010/09/15#gcbd_0.2.2) about it too.
| null | CC BY-SA 2.5 | null | 2010-09-11T16:05:23.110 | 2010-09-16T12:27:58.123 | 2010-09-16T12:27:58.123 | 334 | 334 | null |
2571 | 2 | null | 2563 | 6 | null | David MacIver had a nice post on the algorithm he used for ranking programming languages based on relative rankings from different users for various factors:
- Rank Aggregation in The Right Tool
It's based on the [Rank Aggregation Methods for the Web](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.28.8702) paper by Dwork et al.
If you want something similar to this site, then [Reddit](http://www.reddit.com/)'s code is open source. There was some discussion of their ranking algorithm on Hacker News a while back:
- Reddit's Ranking Algorithm
For a quick and simple take on "hotness", consider Evan Miller's algorithm with time decay:
- Rank Hotness With Newton's Law of Cooling
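Not covered by the links above, but since the question is specifically about up/down votes: a common simple approach is to rank items by the lower bound of the Wilson score interval for the proportion of upvotes, so that items with only a handful of votes are not ranked above well-established ones. A sketch:
```
wilson_lower <- function(up, down, conf = 0.95) {
  n <- up + down
  if (n == 0) return(0)
  z    <- qnorm(1 - (1 - conf) / 2)
  phat <- up / n
  (phat + z^2 / (2 * n) -
     z * sqrt((phat * (1 - phat) + z^2 / (4 * n)) / n)) / (1 + z^2 / n)
}
wilson_lower(60, 40)   # ~0.50
wilson_lower(6, 4)     # ~0.31 -- same ratio, fewer votes, ranked lower
```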
| null | CC BY-SA 2.5 | null | 2010-09-11T18:51:18.540 | 2010-09-11T18:51:18.540 | null | null | 251 | null |
2572 | 1 | 2605 | null | 4 | 1779 | Could you suggest good articles / books / online materials on correlation stress testing?
Correlation stress testing: say we have a positive semidefinite correlation matrix. We might want to explore the impact on the results if some or all elements of the matrix are increased or decreased by a given value (or values). Of course, there are limits to such stress testing because the correlation matrix must remain positive semidefinite.
Thank you in advance for your suggestions!
| Correlation stress testing | CC BY-SA 2.5 | null | 2010-09-11T19:05:38.680 | 2010-09-13T10:36:05.453 | null | null | 1250 | [
"correlation"
] |
2573 | 1 | 2621 | null | 20 | 65318 | If we want to visualize the distribution of continuous data, which should be used: a histogram or a pdf?
What are the differences, aside from the formulas, between a histogram and a pdf?
| Difference between histogram and pdf? | CC BY-SA 2.5 | null | 2010-09-11T19:36:38.390 | 2015-07-14T13:18:08.613 | 2010-10-06T14:33:18.633 | 8 | null | [
"distributions",
"density-function",
"histogram"
] |
2574 | 2 | null | 2573 | 13 | null | A histogram is a pre-computer-age estimate of a density. A density estimate is an alternative.
These days we use both, and there is a rich literature about which defaults one should use.
A pdf, on the other hand, is a closed-form expression for a given distribution. That is different from describing your dataset with an estimated density or histogram.
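A small sketch contrasting the three objects on simulated data (where, unlike with real data, the true pdf is known):
```
set.seed(1)
x <- rnorm(200, mean = 10, sd = 2)
hist(x, freq = FALSE, col = "grey", border = "white")    # histogram estimate
lines(density(x), lwd = 2)                               # kernel density estimate
curve(dnorm(x, mean = 10, sd = 2), add = TRUE, lty = 2)  # the true pdf
```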
| null | CC BY-SA 2.5 | null | 2010-09-11T19:40:11.540 | 2010-09-11T19:40:11.540 | null | null | 334 | null |
2575 | 2 | null | 2573 | 6 | null | There's no hard and fast rule here. If you know the density of your population, then a PDF is better. On the other hand, often we deal with samples and a histogram might convey some information that an estimated density covers up. For example, Andrew Gelman makes this point:
>
Variations on the histogram
A key benefit of a histogram is that, as a plot of raw data, it contains the seeds of its own error assessment. Or, to put it another way, the jaggedness of a slightly undersmoothed histogram performs a useful service by visually indicating sampling variability. That's why, if you look at the histograms in my books and published articles, I just about always use lots of bins. I also almost never like those kernel density estimates that people sometimes use to display one-dimensional distributions. I'd rather see the histogram and know where the data are.
| null | CC BY-SA 2.5 | null | 2010-09-11T20:00:52.923 | 2010-09-11T20:00:52.923 | 2020-06-11T14:32:37.003 | -1 | 251 | null |
2576 | 1 | null | null | 7 | 2691 | I'm working with the results of a survey which has multiple questions. All answers (in this case) are categorical and ordinal (such as very unhappy, unhappy, neutral, happy, very happy).
I'm looking for a way to sort the questions from those with "worst results" to those with "best results". Getting the extremes is somewhat easy visually. If I plot the distribution of answers for each question, I can identify which questions have lots of 'good' answers (distribution is negatively skewed) or those with lots of 'bad' answers (positively skewed histogram). So picking the extremes is easy but this is also dependent on the data.
Quantitatively however, I don't know what to do. Since the answers are on an ordinal, but not an interval scale, I don't know how to calculate an aggregate number for each question. Perhaps giving a numerical value to each category (such as -2, -1, 0, 1 or 2) and summing up the results might work if there's nothing better, but I do realize that mathematically this is not accurate as this is not an interval scale.
Oh, I'm not a statistician, just a programmer. I hope there is a reasonable option to this, I can imagine it's a fairly common question with categorical data.
Thanks in advance.
PS I use R in case there is something built-in.
| How to rank the results of questions with categorical answers? | CC BY-SA 2.5 | 0 | 2010-09-11T20:20:34.950 | 2010-11-04T20:10:51.363 | 2010-11-04T15:49:14.910 | 930 | 840 | [
"categorical-data",
"scales",
"survey"
] |
2577 | 2 | null | 2576 | 2 | null | >
If I plot the distribution of answers for each question, I can identify which questions have lots of 'good' answers (distribution is negatively skewed) or those with lots of 'bad' answers (positively skewed histogram). So picking the extremes is easy but this is also dependent on the data.
Is an absolute ranking necessary? Like you point out, things may be fuzzier in the middle, so is it relevant to your investigation to distinguish between rank 8 and 9 (or whatever) based on some scoring method?
One approach would be to continue with what you stated above -- look at the distributions and categorize questions based on proportions of good/ok/bad based on the data. You might start with a mosaic plot (with questions as factors) to explore your data. This may help reveal criteria for collapsing questions into groups. Instead of piecemeal rankings, they get classified into categories (e.g. what might have been ranks 1-5 become category 1, etc).
| null | CC BY-SA 2.5 | null | 2010-09-11T20:38:07.827 | 2010-09-11T20:38:07.827 | null | null | 251 | null |
2578 | 2 | null | 2576 | 3 | null | Recoding your data with numerical values seems OK, provided the assumption of an ordinal scale holds. This is often the case for Likert-type items, but see these related questions:
- Is it appropriate to treat n-point Likert scale data as n trials from a binomial process?
- Under what conditions should Likert scales be used as ordinal or interval data?
When validating a questionnaire, we often provide the usual numerical summaries (mean $\pm$ sd, range, quartiles) to highlight ceiling/floor effects, that is, higher response rates in the extreme ranges of the scale. Dotplots are also a great tool for summarizing such data.
This is just for visualization/summary purposes. If you want to get into more statistical stuff, you can use a proportional odds model or ordinal logistic regression for ordinal items, and multinomial regression for discrete ones.
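For the ordinal regression route, a minimal sketch with `polr()` from the MASS package (variable names are made up):
```
library(MASS)
## resp is an ordered factor: "very unhappy" < ... < "very happy"
fit <- polr(resp ~ gender + age, data = dat, Hess = TRUE)
summary(fit)
confint(fit)   # profile CIs for the coefficients (on the log-odds scale)
```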
| null | CC BY-SA 2.5 | null | 2010-09-11T21:08:54.290 | 2010-11-04T15:51:20.853 | 2017-04-13T12:44:36.923 | -1 | 930 | null |
2579 | 1 | null | null | 0 | 1171 | I have a question about time fixed effects when using the Swamy random coefficient model. As you may know, in the fixed- and random-effects context we use a set of time dummies to estimate the time fixed effects, and I would like to know whether we can use the same approach with the Swamy model.
| Swamy Random Coefficient Model and time fixed effects | CC BY-SA 4.0 | null | 2010-09-11T23:08:48.143 | 2022-05-08T13:22:59.407 | 2022-05-08T13:22:59.407 | 94889 | 1251 | [
"econometrics",
"panel-data"
] |
2580 | 1 | 2606 | null | 11 | 1499 | I have data with a double peak that I'm trying to model, and there's enough overlap between the peaks that I can't treat them independently. A histogram of the data might look something like this:

I've created two models for this: one uses two Poisson distributions, and the other uses two negative binomial distributions (to account for overdispersion). What's the appropriate way to tell which model fits the data more accurately?
My initial thought is that I could use a Kolmogorov-Smirnov test to compare each model to the data, then do a likelihood ratio test to see if one is a significantly better fit. Does this make sense? If so, I'm not exactly sure how to perform the likelihood ratio test. Is chi-squared appropriate, and how many degrees of freedom do I have?
If it helps, some (very simplified) R code for the models might look something like this:
```
## initial data points
a <- read.table("data")
#create model data
model.pois = c(rpois(1000000,200),rpois(500000,250))
model.nb = c(rnbinom(1000000,200,0.5),rnbinom(500000,275,0.5))
#Kolmogorov-Smirnov test
#use ks.boot (from the Matching package), since it's count data that may contain duplicate values
library(Matching)
kpois = ks.boot(model.pois,a)
knb = ks.boot(model.nb,a)
#here's where I'd do some sort of likelihood ratio test
# . . .
```
---
Edit: Here's an image that may explain the data and the distributions I'm fitting better. It's totally clear from the visualization that the second model (using the negative binomial dist to account for overdispersion) is a better fit. I'd like to show this quantitatively, though.

(red - data, green - model)
| Measuring goodness-of-fit in a model that combines two distributions | CC BY-SA 2.5 | null | 2010-09-11T23:12:23.770 | 2013-06-18T08:07:02.577 | 2013-06-18T08:07:02.577 | 805 | 54 | [
"modeling",
"histogram",
"kolmogorov-smirnov-test",
"negative-binomial-distribution",
"likelihood-ratio"
] |
2581 | 2 | null | 2572 | 0 | null | The answer may depend on why you want to do stress testing but here is one partial idea:
You can do an [eigendecomposition](http://en.wikipedia.org/wiki/Eigendecomposition_of_a_matrix) of your current matrix. The eigenvectors are aligned with the axes of the ellipsoid corresponding to the correlation matrix, and the eigenvalues are a measure of how "long" the ellipsoid is along the corresponding vectors. So one way to perform stress testing is to vary the eigenvalues so that the shape of the ellipsoid changes from the original one.
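A rough sketch of that idea in R: scale the eigenvalues, rebuild the matrix, and rescale the diagonal back to 1 (the stress factors below are arbitrary; as long as they are non-negative the result stays positive semidefinite):
```
R <- matrix(c(1.0, 0.6, 0.3,
              0.6, 1.0, 0.5,
              0.3, 0.5, 1.0), nrow = 3)
e <- eigen(R)

lambda_stressed <- e$values * c(1.3, 0.8, 0.9)   # stretch/shrink the ellipsoid axes
R_stressed <- e$vectors %*% diag(lambda_stressed) %*% t(e$vectors)
R_stressed <- cov2cor(R_stressed)                # put 1s back on the diagonal
R_stressed
min(eigen(R_stressed)$values)                    # sanity check: still >= 0
```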
| null | CC BY-SA 2.5 | null | 2010-09-11T23:57:36.683 | 2010-09-12T02:11:56.117 | 2010-09-12T02:11:56.117 | 183 | null | null |
2582 | 2 | null | 2576 | 4 | null | If all your questions have the same response scale and they are standard Likert items, scaling the item 1,2,3,4,5 and taking the mean is generally fine.
You can investigate the robustness of the rank ordering by experimenting with different scaling procedures (e.g., 0, 0, 0, 1, 1 is common where you want to assess the percentage happy or very happy; or agreeing or strongly agreeing). From my experience, such variants in scaling will give you almost identical question orderings. You could also explore optimal scaling principal components or some form of polytomous IRT approach if you wanted to be sophisticated.
A table with three columns would be fine: rank, item text, mean. You could also do the same thing with question on the x axis and mean on the y axis.
| null | CC BY-SA 2.5 | null | 2010-09-12T02:48:34.503 | 2010-09-12T02:48:34.503 | null | null | 183 | null |
2583 | 2 | null | 2352 | 7 | null | >
This paper is somewhat controversial, and somewhat ignored
Not really, it's well regarded where the theory of model selection is concerned, though it's certainly misinterpreted. The real issue is how relevant it is to the practice of modeling in the wild. Suppose you perform the simulations for the cases you propose to investigate and determine that LOOCV is indeed inconsistent. The only reason you'd get that is because you already knew the "true" model and could hence determine that the probability of recovering the "true" model does not converge to 1. For modeling in the wild, how often is this true (that the phenomena are described by linear models and the "true" model is a subset of those in consideration)?
Shao's paper is certainly interesting for advancing the theoretical framework. It even provides some clarity: if the "true" model is indeed under consideration, then we have the consistency results to hang our hats on. But I'm not sure how interesting actual simulations for the cases you describe would be. This is largely why most books like EOSL don't focus as much on Shao's result, but instead on prediction/generalization error as a criterion for model selection.
EDIT: The very short answer to your question is: Shao's results are applicable when you're performing least squares estimation, quadratic loss function. Not any wider. (I think there was an interesting paper by Yang (2005?) which investigated whether you could have consistency and efficiency, with a negative answer.)
| null | CC BY-SA 2.5 | null | 2010-09-12T03:25:19.597 | 2010-09-12T04:19:37.793 | 2010-09-12T04:19:37.793 | 251 | 251 | null |
2584 | 2 | null | 2547 | 4 | null | For a concrete example, consider the mean ages for the Congo (DRC) and Japan. One is devastated by civil war, the other is well developed with an ageing population. The mean isn't terribly interesting for an apples to apples comparison. On the other hand, the median can be informative as a measure of central tendency since by definition we have half above, half below. The wikipedia article on [Population Pyramid](http://en.wikipedia.org/wiki/Population_pyramid) might be enlightening (see the sections on youth bulge, ageing populations).
| null | CC BY-SA 2.5 | null | 2010-09-12T05:01:58.287 | 2010-09-12T05:01:58.287 | null | null | 251 | null |
2585 | 1 | null | null | 2 | 227 | I have a large sample of market data - that is, the prices and amounts of goods being sold by vendors at specific, but inconsistent, points in time. Assuming I am a buyer and the quality of the goods is constant across the entire market (consumable goods by one manufacturer, let's say), how would you graph this data? Keep in mind that outliers may exist (fire sales or inflated asking prices).
Seeing as I'm only concerned with the lowest-priced goods at a specific moment in time, a market average wouldn't do much for me unless I was buying most of the market's supply. I considered a box plot, but that also tends to be mean/median-biased. A simple line/area/bar graph of the lowest prices over time doesn't seem to give enough information, especially if there isn't much volume available at the lowest price.
*I'm not a stats guy, so go easy on me!
| Graphing low-end prices of goods in a market over time | CC BY-SA 2.5 | null | 2010-09-12T05:04:04.770 | 2023-03-09T10:11:48.967 | 2010-09-16T06:31:51.467 | null | 1253 | [
"time-series",
"data-visualization"
] |
2586 | 2 | null | 2585 | 3 | null | You could create a scatterplot with time on the x-axis and price on the y-axis.
You could make the size of the points proportional to the amount sold (or some function of amount sold such as log of amount sold). You could add a line that passed through the minimum values or some estimate of the minimum, where the data was unavailable.
If you are trying to model the minimum price at any given point in time, it seems like you would need to make some assumptions about the duration of any particular price. If you knew the minimum price for each time period you could graph just that data. If there are time points with missing data, you could use some form of missing data imputation.
Local quantile regression might also be of interest.
You might also need to consider issues of invalid data: e.g., very low prices that only last a short period of time or have a minimal quantity of stock that for what ever reason should not be interpolated to future time points.
| null | CC BY-SA 2.5 | null | 2010-09-12T08:12:21.863 | 2010-09-12T08:12:21.863 | null | null | 183 | null |
2587 | 1 | 2595 | null | 5 | 719 | Are the two synonymous? The reason I am asking is that I have a paper that calculates a certain p-value as the upper tail of the hypergeometric distribution:
$\sum_{i=k}^{m} \frac{\binom{m}{i}\binom{N-m}{n-i}}{\binom{N}{n}}$
I want to implement their proposal using Python. [Scipy offers a hypergeometric distribution](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.hypergeom.html#scipy.stats.hypergeom) and it can calculate the survival function of that distribution from a given point. So I am wondering, are the two the same? Can I just use that function?
| Is the survival function the same as the upper tail? | CC BY-SA 2.5 | null | 2010-09-12T11:22:28.790 | 2010-09-12T17:00:58.643 | null | null | 977 | [
"distributions"
] |
2588 | 2 | null | 2587 | 0 | null | Yes, it is the same as the upper tail. See: [http://en.wikipedia.org/wiki/Survival_function](http://en.wikipedia.org/wiki/Survival_function)
| null | CC BY-SA 2.5 | null | 2010-09-12T11:29:59.453 | 2010-09-12T11:29:59.453 | null | null | null | null |
2589 | 2 | null | 2576 | 2 | null | A small additional point to the answers already given:
Without assuming your ordinal data is interval, you can compare any convenient quantiles - e.g. medians.
Or, when comparing X vs Y which are both ordered categorical, you can estimate something like P(Y>X) - P(X>Y) or P(Y>X) + 0.5 * P(Y=X) (etc.), where you estimate probabilities by proportions of course.
| null | CC BY-SA 2.5 | null | 2010-09-12T13:42:31.060 | 2010-09-12T13:42:31.060 | null | null | 805 | null |
2591 | 1 | 2593 | null | 10 | 7569 | If one is testing the assumption of homoscedasticity, parametric (Bartlett test of homogeneity of variances, `bartlett.test`) and non-parametric (Fligner-Killeen test of homogeneity of variances, `fligner.test`) tests are available. How to tell which kind to use? Should this depend on, e.g., normality of the data?
| When to use (non)parametric test of homoscedasticity assumption? | CC BY-SA 2.5 | null | 2010-09-12T15:12:02.687 | 2017-04-23T21:59:03.103 | 2017-04-23T21:59:03.103 | 11887 | 144 | [
"r",
"variance",
"heteroscedasticity",
"misspecification"
] |
2592 | 1 | 2596 | null | 25 | 41915 | After performing principal component analysis (PCA), I want to project a new vector onto PCA space (i.e. find its coordinates in the PCA coordinate system).
I have calculated PCA in R language using `prcomp`. Now I should be able to multiply my vector by the PCA rotation matrix. Should principal components in this matrix be arranged in rows or columns?
| How to project a new vector onto PCA space? | CC BY-SA 3.0 | null | 2010-09-12T16:11:22.923 | 2020-02-03T22:42:25.733 | 2015-07-18T07:47:56.437 | 1739 | 1260 | [
"r",
"pca"
] |
2593 | 2 | null | 2591 | 12 | null | It seems that the FK test is to be prefered in case of strong departure from the normality (to which the Bartlett test is sensible). Quoting the on-line help,
>
The Fligner-Killeen (median) test has
been determined in a simulation study
as one of the many tests for
homogeneity of variances which is most
robust against departures from
normality, see Conover, Johnson &
Johnson (1981).
Generally speaking, the Levene test works well in the ANOVA framework, provided there are small to moderate deviations from normality. In this case, it outperforms the Bartlett test. If the distributions are nearly normal, however, the Bartlett test is better. I've also heard of the Brown–Forsythe test as a non-parametric alternative to the Levene test. Basically, it relies on either the median or the trimmed mean (as compared to the mean in the Levene test). According to Brown and Forsythe (1974), a test based on the mean provided the best power for symmetric distributions with moderate tails.
In conclusion, I would say that if there is strong evidence of departure from normality (as seen, e.g., with the help of a Q-Q plot), then use a non-parametric test (FK or BF test); otherwise, use the Levene or Bartlett test.
There was also a small discussion about this test for small and large samples in the R Journal, last year, [asympTest: A Simple R Package for Classical Parametric Statistical Tests and Confidence Intervals in Large Samples](http://journal.r-project.org/archive/2009-2/RJournal_2009-2_Coeurjolly~et~al.pdf). It seems that the FK test is also available through the `coin` interface for permutation tests, see the [vignette](http://cran.r-project.org/web/packages/coin/vignettes/coin.pdf).
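For reference, a minimal sketch of running these tests in R on simulated data (`leveneTest()` is in the car package; its default `center = median` is the Brown–Forsythe variant):
```
set.seed(3)
g <- gl(3, 30)                                   # three groups of 30
y <- rnorm(90, sd = rep(c(1, 1.5, 2), each = 30))

bartlett.test(y ~ g)        # parametric, sensitive to non-normality
fligner.test(y ~ g)         # rank-based Fligner-Killeen
library(car)
leveneTest(y ~ g)           # Levene / Brown-Forsythe (median-centered by default)
```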
References
Brown, M. B. and Forsythe, A. B. (1974). Robust Tests for Equality of Variances. JASA, 69, 364-367.
| null | CC BY-SA 2.5 | null | 2010-09-12T16:18:40.850 | 2010-09-12T16:18:40.850 | null | null | 930 | null |
2594 | 2 | null | 2592 | 3 | null | I believe that the eigenvectors (i.e., the principal components) should be arranged as columns.
| null | CC BY-SA 2.5 | null | 2010-09-12T16:24:34.503 | 2010-09-12T16:24:34.503 | null | null | null | null |
2595 | 2 | null | 2587 | 4 | null | Yes. By definition, the survival function is $S(X = x) = Pr(X > x) = 1 - F(X = x)$ where $F$ is the CDF.
| null | CC BY-SA 2.5 | null | 2010-09-12T17:00:58.643 | 2010-09-12T17:00:58.643 | null | null | 251 | null |
2596 | 2 | null | 2592 | 31 | null | Well, @Srikant already gave you the right answer since the rotation (or loadings) matrix contains eigenvectors arranged column-wise, so that you just have to multiply (using `%*%`) your vector or matrix of new data with e.g. `prcomp(X)$rotation`. Be careful, however, with any extra centering or scaling parameters that were applied when computing PCA EVs.
In R, you may also find useful the `predict()` function, see `?predict.prcomp`. BTW, you can check how projection of new data is implemented by simply entering:
```
getS3method("predict", "prcomp")
```
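A small sketch showing that the manual projection (center, scale, then multiply by the rotation matrix) matches `predict()`:
```
fit <- prcomp(USArrests, scale. = TRUE)
new <- USArrests[1:3, ]                          # pretend these rows are new data

manual <- scale(new, center = fit$center, scale = fit$scale) %*% fit$rotation
auto   <- predict(fit, newdata = new)
all.equal(unname(manual), unname(auto))          # should be TRUE
```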
| null | CC BY-SA 2.5 | null | 2010-09-12T17:03:16.110 | 2010-09-12T17:03:16.110 | null | null | 930 | null |
2597 | 1 | 2599 | null | 34 | 23939 | I have found extensive literature proposing all sorts of criteria (e.g. [Glenn et al. 1985](http://vohweb.chem.ucla.edu/voh/classes%5Cwinter08%5C160BID48%5CNumClusters.pdf)(pdf) and [Jung et al. 2002](http://www.cc.gatech.edu/~hpark/papers/cluster_JOGO.pdf)(pdf)). However, most of these are not that easy to implement (at least from my perspective). I am using [scipy.cluster.hierarchy](http://docs.scipy.org/doc/scipy/reference/cluster.hierarchy.html#function-reference) to obtain a cluster hierarchy, and I am now trying to decide how to form flat clusters from that. My goal is to discover common patterns in my observations, so I have no reference to compare the obtained clustering to. Can anyone suggest a pragmatic solution?
| What stop-criteria for agglomerative hierarchical clustering are used in practice? | CC BY-SA 2.5 | null | 2010-09-12T19:49:25.443 | 2010-09-13T08:26:14.417 | null | null | 977 | [
"clustering"
] |
2598 | 1 | null | null | 2 | 143 | Let's pretend there are 10 doors in front of me. Behind one of them there is a treasure, and nothing behind all the others. Intuitively, I can easily determine that I have 1 chance out of 10 of picking the right door (the one with the treasure). I repeat this 5 times (i.e. all doors closed, open one door). What are my chances of finding the treasure all five times?
In brief, I am wondering how to determine the probability that I find the treasure 5 times in a row.
Thanks
| Probability to find the treasure five times in a row | CC BY-SA 2.5 | null | 2010-09-12T20:18:16.740 | 2010-09-13T12:38:17.643 | null | null | 64 | [
"probability"
] |