Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1308 | 1 | null | null | 31 | 11703 | In the traditional Birthday Paradox the question is "what are the chances that two or more people in a group of $n$ people share a birthday". I'm stuck on a problem which is an extension of this.
Instead of knowing the probability that two people share a birthday, I need to extend the question to know what is the probability that $x$ or more people share a birthday. With $x=2$ you can do this by calculating the probability that no two people share a birthday and subtract that from $1$, but I don't think I can extend this logic to larger numbers of $x$.
To further complicate this I also need a solution which will work for very large numbers for $n$ (millions) and $x$ (thousands).
| Extending the birthday paradox to more than 2 people | CC BY-SA 3.0 | null | 2010-08-05T14:00:07.723 | 2016-10-14T21:17:41.107 | 2016-10-14T21:17:41.107 | 919 | 765 | [
"probability",
"combinatorics",
"birthday-paradox"
] |
1309 | 2 | null | 1308 | 2 | null | It is always possible to solve this problem with a Monte Carlo solution, although that's far from the most efficient. Here's a simple example of the 2-person problem in R (from [a presentation I gave last year](http://www.meetup.com/nyhackr/calendar/10251302/?from=list&offset=0); I used this as an example of inefficient code), which could be easily adjusted to account for more than 2:
```
birthday.paradox <- function(n.people, n.trials) {
    matches <- 0
    for (trial in 1:n.trials) {
        # column 1: day of the year; column 2: has that day been drawn yet?
        birthdays <- cbind(as.matrix(1:365), rep(0, 365))
        for (person in 1:n.people) {
            day <- sample(1:365, 1, replace = TRUE)
            if (birthdays[birthdays[, 1] == day, 2] == 1) {
                matches <- matches + 1
                break
            }
            birthdays[birthdays[, 1] == day, 2] <- 1
        }
        birthdays <- NULL
    }
    print(paste("Probability of birthday matches = ", matches/n.trials))
}
```
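For the extended question (the chance that $x$ or more people share a birthday), the same Monte Carlo idea carries over. A rough vectorised sketch of that adjustment might look like the following; the function name and the defaults are just placeholders:
```
shared.birthday.sim <- function(n.people, x, n.trials = 10000, n.days = 365) {
  # estimate P(at least x people share a birthday) among n.people, by simulation
  hits <- replicate(n.trials, {
    counts <- tabulate(sample.int(n.days, n.people, replace = TRUE), nbins = n.days)
    max(counts) >= x
  })
  mean(hits)
}

shared.birthday.sim(n.people = 30, x = 2)   # classic case, roughly 0.71
```
For the millions of people and thousands of sharers in the question this brute force will be far too slow, but it is handy as a sanity check for any analytic approximation.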
| null | CC BY-SA 2.5 | null | 2010-08-05T14:10:26.907 | 2010-08-05T14:54:11.593 | 2010-08-05T14:54:11.593 | 5 | 5 | null |
1311 | 2 | null | 1308 | 1 | null | This is an attempt at a general solution. There may be some mistakes so use with caution!
First some notation: let
$P(x,n)$ be the probability that $x$ or more people share a birthday among $n$ people, and let
$P(y|n)$ be the probability that exactly $y$ people share a birthday among $n$ people.
Notes:
- Abuse of notation as $P(.)$ is being used in two different ways.
- By definition $y$ cannot take the value 1, as that does not make any sense, and $y = 0$ can be interpreted to mean that no one shares a common birthday.
Then the required probability is given by:
$P(x,n) = 1 - P(0|n) - P(2|n) - P(3|n) - \cdots - P(x-1|n)$
Now,
$P(y|n) = {n \choose y} (\frac{365}{365})^y \ \prod_{k=1}^{k=n-y}(1 -\frac{k}{365})$
Here is the logic: You need the probability that exactly $y$ people share a birthday.
Step 1: You can pick $y$ people in ${n \choose y}$ ways.
Step 2: Since they share a birthday it can be any of the 365 days in a year. So, we basically have 365 choices which gives us $(\frac{365}{365})^y$.
Step 3: The remaining $n-y$ people should not share a birthday with the first $y$ people or with each other. This reasoning gives us $\prod_{k=1}^{k=n-y}(1 -\frac{k}{365})$.
You can check that for $x$ = 2 the above collapses to the standard birthday paradox solution.
| null | CC BY-SA 2.5 | null | 2010-08-05T15:27:49.650 | 2010-08-05T15:34:26.243 | 2010-08-05T15:34:26.243 | null | null | null |
1312 | 2 | null | 1257 | 2 | null | It depends on exactly what you mean - your notation is a bit ambiguous. If I understand your notation correctly, you don't actually have 4 independent random variables - $X_3$ is a deterministic function of $Y_1$ and $Y_2$, and so it shouldn't occur explicitly in the likelihood. $Y_3$ is a function of the deterministic node $X_3$, and hence is a function of $Y_1$ and $Y_2$ when you drop out the deterministic node to form the likelihood, which is:
$f(Y_1)f(Y_2)f(Y_3|Y_1,Y_2)$
provided that $Y_1Y_2>0$ (for consistency with data on $X_3$) and is zero otherwise. Is that what you meant?
| null | CC BY-SA 2.5 | null | 2010-08-05T16:04:34.100 | 2010-08-05T16:04:34.100 | null | null | 643 | null |
1313 | 2 | null | 1278 | 4 | null | If you cannot collect data on a different ward where you don't do the intervention, your conclusions will be weak, because you cannot rule out other causes that act simultaneously (change in weather, season, epidemic of something, etc, etc). However if you observe a large effect, your study would still contribute an interesting piece of evidence.
The rest of your questions are a bit confused. If your outcome is binary: infection yes/no in a bunch of patients (probably adjusted for length of stay so it becomes a rate?), then you could not even do a t-test, so there is no point in discussing its appropriateness. But in the sense that it looks at differences it is similar to a t-test when you have continuous outcomes.
There is a test loosely called "ratio t-test", which is a t-test conducted on log-transformed data that concentrates on ratios instead of differences. So it is in some sense the counterpart of IRR, however I don't think you could actually perform it, because you don't have a continuous outcome variable.
So pick either the IRD or IRR. The difference is usually more important from a public health point of view, while ratios tend to be more impressive especially for rare events.
| null | CC BY-SA 2.5 | null | 2010-08-05T16:54:35.073 | 2010-08-05T16:54:35.073 | null | null | 279 | null |
1315 | 1 | null | null | 22 | 6491 | I've sampled a real world process, network ping times. The "round-trip-time" is measured in milliseconds. Results are plotted in a histogram:
[](https://i.stack.imgur.com/9fL76.png)
[](https://i.stack.imgur.com/Jy5No.png)
Latency has a minimum value, but a long upper tail.
I want to know what statistical distribution this is, and how to estimate its parameters.
Even though the distribution is not a normal distribution, I can still show what I am trying to achieve.
The normal distribution uses the function:

with the two parameters
- μ (mean)
- σ² (variance)
## Parameter estimation
The formulas for estimating the two parameters are:

Applying these formulas against the data I have in Excel, I get:
- μ = 10.9558 (mean)
- σ² = 67.4578 (variance)
With these parameters I can plot the "normal" distribution over top my sampled data:

Obviously it's not a normal distribution. A normal distribution has an infinite top and bottom tail, and is symmetrical. This distribution is not symmetrical.
---
>
What principles would I apply; what
flowchart would I apply to determine
what kind of distribution this is?
Given that the distribution has no negative tail, and long positive tail: what distributions match that?
Is there a reference that matches distributions to the observations you're taking?
And cutting to the chase, what is the formula for this distribution, and what are the formulas to estimate its parameters?
---
I want to get the distribution so I can get the "average" value, as well as the "spread":

I am actually plotting the histogram in software, and I want to overlay the theoretical distribution:

Note: Cross-posted from [math.stackexchange.com](https://math.stackexchange.com/questions/1648/how-do-i-figure-out-what-kind-of-distribution-this-is)
---
Update: 160,000 samples:

Months and months, and countless sampling sessions, all give the same distribution. There must be a mathematical representation.
---
Harvey suggested putting the data on a log scale. Here's the probability density on a log scale:

Tags: sampling, statistics, parameter-estimation, normal-distribution
---
It's not an answer, but an addendum to the question. Here are the distribution buckets. I think the more adventurous person might like to paste them into Excel (or whatever program you know) and try to discover the distribution.
The values are normalized.
```
Time Value
53.5 1.86885613545469E-5
54.5 0.00396197500716395
55.5 0.0299702228922418
56.5 0.0506460012708222
57.5 0.0625879919763777
58.5 0.069683415770654
59.5 0.0729476844872482
60.5 0.0508017392821101
61.5 0.032667605247748
62.5 0.025080049337802
63.5 0.0224138145845533
64.5 0.019703973188144
65.5 0.0183895443728742
66.5 0.0172059354870862
67.5 0.0162839664602619
68.5 0.0151688822994406
69.5 0.0142780608748739
70.5 0.0136924859524314
71.5 0.0132751080821798
72.5 0.0121849420031646
73.5 0.0119419907055555
74.5 0.0117114984488494
75.5 0.0105528076448675
76.5 0.0104219877153857
77.5 0.00964952717939773
78.5 0.00879608287754009
79.5 0.00836624596638551
80.5 0.00813575370967943
81.5 0.00760001495084908
82.5 0.00766853967581576
83.5 0.00722624372375815
84.5 0.00692099722163388
85.5 0.00679017729215205
86.5 0.00672788208763689
87.5 0.00667804592402477
88.5 0.00670919352628235
89.5 0.00683378393531266
90.5 0.00612361860383988
91.5 0.00630427469693383
92.5 0.00621706141061261
93.5 0.00596788059255199
94.5 0.00573115881539439
95.5 0.0052950923837883
96.5 0.00490886211579433
97.5 0.00505214108617919
98.5 0.0045413204091549
99.5 0.00467214033863673
100.5 0.00439181191831853
101.5 0.00439804143877004
102.5 0.00432951671380337
103.5 0.00419869678432154
104.5 0.00410525397754881
105.5 0.00440427095922156
106.5 0.00439804143877004
107.5 0.00408656541619426
108.5 0.0040616473343882
109.5 0.00389345028219728
110.5 0.00392459788445485
111.5 0.0038249255572306
112.5 0.00405541781393668
113.5 0.00393705692535789
114.5 0.00391213884355182
115.5 0.00401804069122759
116.5 0.0039432864458094
117.5 0.00365672850503968
118.5 0.00381869603677909
119.5 0.00365672850503968
120.5 0.00340131816652754
121.5 0.00328918679840026
122.5 0.00317082590982146
123.5 0.00344492480968815
124.5 0.00315213734846692
125.5 0.00324558015523965
126.5 0.00277213660092446
127.5 0.00298394029627599
128.5 0.00315213734846692
129.5 0.0030649240621457
130.5 0.00299639933717902
131.5 0.00308984214395176
132.5 0.00300885837808206
133.5 0.00301508789853357
134.5 0.00287803844860023
135.5 0.00277836612137598
136.5 0.00287803844860023
137.5 0.00265377571234566
138.5 0.00267246427370021
139.5 0.0027472185191184
140.5 0.0029465631735669
141.5 0.00247311961925171
142.5 0.00259148050783051
143.5 0.00258525098737899
144.5 0.00259148050783051
145.5 0.0023485292102214
146.5 0.00253541482376687
147.5 0.00226131592390018
148.5 0.00239213585338201
149.5 0.00250426722150929
150.5 0.0026288576305396
151.5 0.00248557866015474
152.5 0.00267869379415173
153.5 0.00247311961925171
154.5 0.00232984064886685
155.5 0.00243574249654262
156.5 0.00242328345563958
157.5 0.00231738160796382
158.5 0.00256656242602444
159.5 0.00221770928073957
160.5 0.00241705393518807
161.5 0.00228000448525473
162.5 0.00236098825112443
163.5 0.00216787311712744
164.5 0.00197475798313046
165.5 0.00203705318764562
166.5 0.00209311887170926
167.5 0.00193115133996985
168.5 0.00177541332868196
169.5 0.00165705244010316
170.5 0.00160098675603952
171.5 0.00154492107197588
172.5 0.0011150841608213
173.5 0.00115869080398191
174.5 0.00107770703811221
175.5 0.000946887108630378
176.5 0.000853444301857643
177.5 0.000822296699600065
178.5 0.00072885389282733
179.5 0.000753771974633393
180.5 0.000766231015536424
181.5 0.000566886361087923
```
# Bonus Reading
- What Is the Expected Distribution of Website Response Times?
- What Do You Mean? - Revisiting Statistics for Web Response Time Measurements
- Modeling Network Latency
| How do I figure out what kind of distribution represents this data on ping response times? | CC BY-SA 4.0 | null | 2010-08-05T19:26:34.060 | 2023-04-06T17:42:33.837 | 2022-03-08T04:42:19.087 | 775 | 775 | [
"distributions",
"sample-size",
"sample",
"normality-assumption",
"distribution-identification"
] |
1316 | 2 | null | 1315 | 4 | null | Try the [gamma](http://en.wikipedia.org/wiki/Gamma_distribution) distribution which is parametrized as $x \sim Gamma(k,\theta)$. If you see these [pdf plots](http://en.wikipedia.org/wiki/File%3aGamma_distribution_pdf.svg) of the gamma from the wiki you will see that there are some plots that look similar to what you have.
Update - Estimation process
Estimation via [maximum likelihood](http://en.wikipedia.org/wiki/Gamma_distribution#Maximum_likelihood_estimation) is tricky but possible. You can start with the approximate solutions given by the wiki for [$\hat{\theta}$](http://upload.wikimedia.org/math/3/2/0/3203c979914ef4e2f354617a7723df81.png) and [$\hat{k}$](http://upload.wikimedia.org/math/8/8/1/881fb8c7ad01bb1b1505dcacf092b4e4.png); if the plots look OK you can stop there, and if needed you can estimate $\hat{k}$ more accurately using the details in the wiki.
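To make the fitting step concrete, here is a minimal sketch using `MASS::fitdistr`; the vector `rtt` is only a simulated stand-in for the measured round-trip times, and the subtract-the-minimum step follows the advice in the other answers:
```
library(MASS)

set.seed(1)
rtt <- 53 + rgamma(1000, shape = 2, scale = 10)  # simulated stand-in for the ping data

offset <- min(rtt) - 0.01            # shift so the support starts just above zero
fit <- fitdistr(rtt - offset, densfun = "gamma")
fit$estimate                         # ML estimates of shape and rate (scale = 1/rate)
```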
| null | CC BY-SA 2.5 | null | 2010-08-05T19:33:56.103 | 2010-08-05T19:51:03.543 | 2010-08-05T19:51:03.543 | null | null | null |
1317 | 2 | null | 726 | 9 | null | 9 out of ten dentists think the 10th dentist is an idiot.
- No idea who said it.
| null | CC BY-SA 2.5 | null | 2010-08-05T19:41:53.000 | 2010-08-05T19:41:53.000 | null | null | 776 | null |
1318 | 2 | null | 1315 | 8 | null | There is no reason to expect that any real world data set will fit a known distributional form...especially from such a known messy data source.
What you want to do with the answers will largely indicate an approach. For example, if you want to know when the ping times have changed significantly, then trending the empirical distribution may be a way to go. If you want to identify outliers, other techniques may be more appropriate.
| null | CC BY-SA 2.5 | null | 2010-08-05T19:51:02.640 | 2010-08-05T19:51:02.640 | null | null | 247 | null |
1319 | 2 | null | 1315 | 11 | null | Let me ask a more basic question: what do you want to do with this distributional information?
The reason I ask is because it may well make more sense to approximate the distribution with some sort of kernel density estimator, rather than insist that it fit into one of the (possibly shifted) exponential family distributions. You can answer almost all of the same sorts of questions that a standard distribution will let you answer, and you don't have to worry (as much) about whether you've selected the correct model.
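For what it's worth, a kernel density estimate of this kind is a one-liner in base R; the vector `rtt` below is only a simulated stand-in for the measured times:
```
set.seed(1)
rtt <- 53 + rgamma(1000, shape = 2, scale = 10)  # stand-in for the round-trip times

d <- density(rtt, from = min(rtt))   # kernel density estimate, no parametric form assumed
plot(d, main = "Kernel density estimate of round-trip times")
```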
But if there's a fixed minimum time, and you must have some sort of compactly parameterized distribution to go with it, then just eyeballing it I'd subtract off the minimum and fit a gamma, like others have suggested.
| null | CC BY-SA 2.5 | null | 2010-08-05T20:13:15.683 | 2010-08-05T20:13:15.683 | null | null | 61 | null |
1320 | 2 | null | 726 | 5 | null | Not a statistician, but useful for the profession:
>
The perfect is the enemy of the good
- Voltaire
| null | CC BY-SA 2.5 | null | 2010-08-05T20:38:41.187 | 2010-08-05T20:38:41.187 | null | null | 778 | null |
1321 | 1 | 1322 | null | 7 | 2806 | What would be the best way to display changes in two scalar variables (x,y) over time (z), in one visualization?
One idea that I had was to plot x and y both on the vertical axis, with z as the horizontal.
Note: I'll be using R and likely ggplot2
| Visualizing two scalar variables over time | CC BY-SA 2.5 | null | 2010-08-05T21:12:50.063 | 2011-08-18T20:30:22.123 | 2010-11-30T16:43:14.843 | 8 | 776 | [
"r",
"time-series",
"data-visualization",
"ggplot2"
] |
1322 | 2 | null | 1321 | 7 | null | The other idea is to plot one series as x and the second as y -- the time dependency will be hidden, but this plot shows correlations pretty well. (Yet time can be shown to some extent by connecting points chronologically; if the series are quite short and continuous it should be readable.)
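A toy sketch of that idea: plot one series against the other and join the points in time order, so some of the time information survives.
```
set.seed(1)
x <- cumsum(rnorm(100))                       # first series
y <- 0.6 * x + cumsum(rnorm(100, sd = 0.5))   # second, correlated series
plot(x, y, type = "o", pch = 16, cex = 0.6)   # points joined chronologically
```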
| null | CC BY-SA 2.5 | null | 2010-08-05T22:01:03.533 | 2010-08-05T22:01:03.533 | null | null | null | null |
1324 | 1 | 1326 | null | 5 | 443 | The title is quite self-explanatory - I'd like to know if there is any other parametric technique, apart from repeated-measures ANOVA, that can be utilized in order to compare several (more than 2) repeated measures?
| Parametric techniques for n-related samples | CC BY-SA 2.5 | null | 2010-08-05T22:16:20.007 | 2010-09-16T07:04:59.787 | 2010-09-16T07:04:59.787 | null | 1356 | [
"repeated-measures"
] |
1325 | 2 | null | 1315 | 13 | null | Weibull is sometimes used for modelling ping time. Try a Weibull distribution. To fit one in R:
```
x <- rweibull(n = 1000, shape = 2, scale = 100)  # generate a Weibull sample (this should be your data)
hist(x)                                          # a histogram of your data
library(survival)
a1 <- survreg(Surv(x, rep(1, 1000)) ~ 1, dist = 'weibull')
exp(a1$coef)  # this is the ML estimate of the scale parameter
1/a1$scale    # this is the ML estimate of the shape parameter
```
If you're wondering about the goofy names (i.e. $scale to get the inverse of the shape),
it's because "survreg" uses another parametrization (i.e. it is parametrized in terms of the "inverse Weibull", which is more common in actuarial sciences).
| null | CC BY-SA 2.5 | null | 2010-08-05T22:17:29.820 | 2010-08-06T16:20:50.017 | 2010-08-06T16:20:50.017 | 603 | 603 | null |
1326 | 2 | null | 1324 | 8 | null | Multilevel/hierarchical linear models can be used for this. Essentially, each repetition of the measure is clustered within the individual; individuals can then be clustered within other hierarchies. For me, at least, it's more intuitive than repeated-measures ANOVA.
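For concreteness, a minimal sketch of such a model in R's `lme4` (the data frame is simulated and the variable names are placeholders; the answer itself does not prescribe any particular software):
```
library(lme4)

set.seed(1)
d <- data.frame(subject = factor(rep(1:50, each = 4)),  # 50 individuals, 4 occasions each
                time    = rep(0:3, times = 50))
subj.eff <- rnorm(50, sd = 1)                           # between-subject variation
d$score  <- 2 + 0.5 * d$time + subj.eff[as.integer(d$subject)] + rnorm(200, sd = 0.5)

# repeated measures clustered within individuals: random intercept per subject
m <- lmer(score ~ time + (1 | subject), data = d)
summary(m)
```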
The canonical text is [Raudenbush and Bryk](http://rads.stackoverflow.com/amzn/click/076191904X); I'm also really fond of [Gelman and Hill](http://rads.stackoverflow.com/amzn/click/052168689X). [Here's a tutorial I read some time ago](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.110.5130&rep=rep1&type=pdf) - you may or may not find the tutorial itself useful (that's so often a matter of personal taste, training and experience), but the bibliography at the end is good.
I should note that Gelman and Hill doesn't have a ton on multilevel models specifically for repeated measures; I can't remember if that's the case or not for Raudenbush and Bryk.
Edit: Found a book I was looking for - [Applied Longitudinal Data Analysis by Singer and Willett](http://rads.stackoverflow.com/amzn/click/0195152964) has (I believe) an explicit focus on multilevel models for repeated measures. I haven't had a chance to read very far into it, but it might be worth looking into.
| null | CC BY-SA 2.5 | null | 2010-08-05T22:35:46.630 | 2010-08-05T23:06:58.227 | 2010-08-05T23:06:58.227 | 71 | 71 | null |
1327 | 2 | null | 1315 | 3 | null | Looking at it I would say a skew-normal or possibly a binormal distribution may fit it well.
In R you could use the `sn` library to deal with the skew-normal distribution, and use `nls` or `mle` to do a non-linear least squares or maximum likelihood estimation fit of your data.
---
EDIT: rereading your question/comments, I would add something more.
If what you're interested in is just drawing a pretty graph over the bars, forget about distributions; who cares, in the end, if you're not doing anything with it. Just draw a B-spline over your data points and you're good.
Also, with this approach you avoid having to implement an MLE fit algorithm (or similar), and you're covered in the case of a distribution that is not skew-normal (or whatever you choose to draw)
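To show how little is needed for the spline option, here is a sketch using `smooth.spline`, with `mids` and `dens` standing in for bin centres and normalised counts like those in the question's table:
```
mids <- seq(53.5, 181.5, by = 1)                   # hypothetical bin centres
dens <- dgamma(mids - 53, shape = 2, scale = 12)   # hypothetical normalised heights

plot(mids, dens, type = "h")                            # the histogram-like bars
lines(smooth.spline(mids, dens), col = "red", lwd = 2)  # smooth curve over the top
```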
| null | CC BY-SA 2.5 | null | 2010-08-05T23:22:01.933 | 2010-08-06T06:19:32.700 | 2010-08-06T06:19:32.700 | 582 | 582 | null |
1328 | 2 | null | 726 | 12 | null | "If you think that statistics has nothing to say about what you do or how you could do it better, then you are either wrong or in need of a more interesting job." - Stephen Senn (Dicing with Death: Chance, Risk and Health, Cambridge University Press, 2003)
| null | CC BY-SA 2.5 | null | 2010-08-06T00:29:25.567 | 2010-08-06T00:29:25.567 | null | null | 781 | null |
1330 | 2 | null | 726 | 77 | null | >
The best thing about being a statistician is that you get to play in everyone's backyard.
-- John Tukey
(This is MY favourite Tukey quote)
| null | CC BY-SA 2.5 | null | 2010-08-06T01:08:05.617 | 2010-12-03T04:02:12.130 | 2010-12-03T04:02:12.130 | 795 | 521 | null |
1331 | 2 | null | 485 | 5 | null | There is a new resource forming these days for talks about R:
[https://www.r-bloggers.com/RUG/](https://www.r-bloggers.com/RUG/)
Compiled by the organizers of "R Users Groups" around the world (right now, mainly around the States).
It is a new project (just a few weeks old), but it already has good content, and good people wanting to take part in it.
[](https://i.stack.imgur.com/THdFz.jpg)
(source: [r-bloggers.com](https://www.r-bloggers.com/RUG/wp-content/uploads/2010/07/banner.jpg))
| null | CC BY-SA 4.0 | null | 2010-08-06T01:15:52.747 | 2022-12-31T07:32:40.610 | 2022-12-31T07:32:40.610 | 79696 | 253 | null |
1332 | 2 | null | 726 | 30 | null | "It is easy to lie with statistics. It is hard to tell the truth without statistics." - Andrejs Dunkels
| null | CC BY-SA 2.5 | null | 2010-08-06T01:20:36.700 | 2010-08-06T01:20:36.700 | null | null | 521 | null |
1333 | 2 | null | 726 | 137 | null | >
Statisticians, like artists, have the bad habit of falling in love with their models.
-- George Box
| null | CC BY-SA 2.5 | null | 2010-08-06T01:26:54.020 | 2010-10-02T17:10:08.093 | 2010-10-02T17:10:08.093 | 795 | 521 | null |
1334 | 2 | null | 726 | 12 | null | >
"New methods always look better than old ones. Neural nets are better
than logistic regression, support vector machines are better than
neural nets, etc." - Brad Efron
| null | CC BY-SA 3.0 | null | 2010-08-06T01:29:56.867 | 2018-02-11T16:09:54.783 | 2018-02-11T16:09:54.783 | 22387 | 521 | null |
1335 | 2 | null | 1286 | 6 | null | If you wish to trade processing speed for memory (which I think you do), I would suggest the following algorithm:
- Set up a loop from 1 to N Choose K, indexed by i
- Each i can be considered an index to a combinadic, decode as such
- Use the combination to perform your test statistic, store the result, discard the combination
- Repeat
This will give you all N Choose K possible combinations without having to create them explicitly. I have code to do this in R if you'd like it (you can email me at mark dot m period fredrickson at-symbol gmail dot com).
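A rough R sketch of the decoding step (an illustration of the combinadic idea, not the code offered above):
```
decode.combinadic <- function(i, n, k) {
  # return the i-th (0-based, lexicographic) k-subset of 1..n
  comb <- integer(k)
  x <- i
  v <- 1
  for (j in 1:k) {
    # skip over candidate values whose block of combinations lies before index x
    while (choose(n - v, k - j) <= x) {
      x <- x - choose(n - v, k - j)
      v <- v + 1
    }
    comb[j] <- v
    v <- v + 1
  }
  comb
}

t(sapply(0:5, decode.combinadic, n = 4, k = 2))  # all 6 combinations of 4 choose 2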
| null | CC BY-SA 2.5 | null | 2010-08-06T01:40:54.227 | 2010-08-06T01:40:54.227 | null | null | 729 | null |
1336 | 2 | null | 726 | 12 | null | >
In the long run, we're all dead.
-- John Maynard Keynes.
A reference to survival analysis?!
| null | CC BY-SA 2.5 | null | 2010-08-06T01:43:46.623 | 2010-12-03T04:05:41.940 | 2010-12-03T04:05:41.940 | 795 | 521 | null |
1337 | 1 | null | null | 186 | 254626 | Well, we've got favourite statistics quotes. What about statistics jokes?
| Statistics Jokes | CC BY-SA 3.0 | null | 2010-08-06T01:53:47.023 | 2021-10-23T10:39:14.333 | 2018-03-08T17:43:38.810 | 2669 | 521 | [
"references",
"humor"
] |
1338 | 2 | null | 1337 | 45 | null | I thought I'd start the ball rolling with my favourite.
"Being a statistician means never having to say you are certain."
| null | CC BY-SA 2.5 | null | 2010-08-06T01:54:55.680 | 2010-08-06T01:54:55.680 | null | null | 521 | null |
1339 | 2 | null | 1207 | 4 | null | You may want to define what you want more clearly (to yourself, if not here). If what you're looking for is the most statistically significant stationary period contained in your noisy data, there are essentially two routes to take:
1) compute a robust autocorrelation estimate, and take the maximum coefficient
2) compute a robust power spectral density estimate, and take the maximum of the spectrum
The problem with #2 is that for any noisy time series, you will get a large amount of power in low frequencies, making it difficult to distinguish. There are some techniques for resolving this problem (i.e. pre-whiten, then estimate the PSD), but if the true period from your data is long enough, automatic detection will be iffy.
Your best bet is probably to implement a robust autocorrelation routine such as can be found in chapter 8.6, 8.7 in Robust Statistics - Theory and Methods by Maronna, Martin and Yohai. Searching Google for "robust durbin-levinson" will also yield some results.
If you're just looking for a simple answer, I'm not sure that one exists. Period detection in time series can be complicated, and asking for an automated routine that can perform magic may be too much.
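As a non-robust starting point for the two routes listed above, base R provides both estimates; a toy sketch:
```
set.seed(1)
z <- sin(2 * pi * (1:500) / 12) + rnorm(500, sd = 0.5)  # toy series, true period 12

# route 1: autocorrelation -- pick the lag (> 0) with the largest coefficient
a <- acf(z, lag.max = 100, plot = FALSE)
which.max(a$acf[-1])                        # estimated period, in lags

# route 2: periodogram -- pick the frequency with the most power
s <- spec.pgram(z, taper = 0, plot = FALSE)
1 / s$freq[which.max(s$spec)]               # implied period
```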
| null | CC BY-SA 2.5 | null | 2010-08-06T02:48:09.630 | 2010-08-06T02:48:09.630 | null | null | 781 | null |
1340 | 2 | null | 1164 | 23 | null | Anyone trained in statistical data analysis at a reasonable level uses the concepts of robust statistics on a regular basis. Most researchers know enough to look for serious outliers and data recording errors; the policy of removing suspect data points goes back well into the 19th century with Lord Rayleigh, G.G. Stokes, and others of their age. If the question is:
Why don't researchers use the more modern methods for computing location, scale, regression, etc. estimates?
then the answer is given above -- the methods have largely been developed in the last 25 years, say 1985 - 2010. The lag for learning new methods factors in, as well as inertia compounded by the 'myth' that there is nothing wrong with blindly using classical methods. John Tukey comments that just which
robust/resistant methods you use is not important—what is important is that you
use some. It is perfectly proper to use both classical and robust/resistant methods
routinely, and only worry when they differ enough to matter. But when they differ,
you should think hard.
If instead, the question is:
Why don't researchers stop and ask questions about their data, instead of blindly applying highly unstable estimates?
then the answer really comes down to training. There are far too many researchers who were never trained in statistics properly, summed up by the general reliance on p-values as the be-all and end-all of 'statistical significance'.
@Kwak: Huber's estimates from the 1970s are robust, in the classical sense of the word: they resist outliers. And redescending estimators actually date well before the 1980s: the Princeton robustness study (of 1971) included the bisquare estimate of location, a redescending estimate.
| null | CC BY-SA 2.5 | null | 2010-08-06T03:06:41.747 | 2010-08-06T15:49:38.253 | 2010-08-06T15:49:38.253 | 781 | 781 | null |
1341 | 2 | null | 652 | 4 | null | The classic "orange horror" remains an excellent introduction: Exploratory Data Analysis by John Tukey.
[http://www.amazon.com/Exploratory-Data-Analysis-John-Tukey/dp/0201076160](http://rads.stackoverflow.com/amzn/click/0201076160)
| null | CC BY-SA 2.5 | null | 2010-08-06T03:14:53.140 | 2010-08-06T03:14:53.140 | null | null | 781 | null |
1342 | 2 | null | 726 | 4 | null | >
Do not make things easy for yourself
by speaking or thinking of data as if
they were different from what they
are; and do not go off from facing
data as they are, to amuse your
imagination by wishing they were
different from what they are. Such
wishing is pure waste of nerve force,
weakens your intellectual power, and
gets you into habits of mental
confusion.
--Mary Everest Boole
| null | CC BY-SA 2.5 | null | 2010-08-06T03:17:17.273 | 2010-08-06T03:17:17.273 | null | null | null | null |
1343 | 2 | null | 1321 | 6 | null | I sometimes make the x-axis time and plot both scalar variables on the y-axis.
When the two scalar variables are on a different metric, I rescale one or both of the scalar variables so they can be displayed on the same plot.
I use things like colour and shape to discriminate the two scalar variables.
I've often used `xyplot` from `lattice` for this purpose.
Here's an example:
```
require(lattice)
xyplot(dv1 + dv2 ~ iv, data = x, col = c("black", "red"))
```
| null | CC BY-SA 2.5 | null | 2010-08-06T03:22:13.263 | 2010-08-06T03:22:13.263 | null | null | 183 | null |
1344 | 2 | null | 1315 | 6 | null | A simpler approach might be to transform the data. After transforming, it might be close to Gaussian.
One common way to do so is by taking the logarithm of all values.
My guess is that in this case the distribution of the reciprocal of the round-trip times will be more symmetrical and perhaps close to Gaussian. By taking the reciprocal, you are essentially tabulating velocities instead of times, so it still is easy to interpret the results (unlike logarithms or many transforms).
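A quick way to eyeball that suggestion, with `rtt` as a simulated stand-in for the measured times:
```
set.seed(1)
rtt <- 53 + rgamma(1000, shape = 2, scale = 10)  # stand-in for the round-trip times

par(mfrow = c(1, 2))
hist(rtt, breaks = 40, main = "round-trip times")
hist(1 / rtt, breaks = 40, main = "reciprocals (velocities)")
```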
| null | CC BY-SA 2.5 | null | 2010-08-06T03:47:26.320 | 2010-08-06T03:47:26.320 | null | null | 25 | null |
1345 | 2 | null | 1315 | 2 | null | Based on your comment "Really i want to draw the mathematical curve that follows the distribution. Granted it might not be a known distribution; but i can't imagine that this hasn't been investigated before." I am providing a function that sort of fits.
Take a look at [ExtremeValueDistribution](http://reference.wolfram.com/mathematica/ref/ExtremeValueDistribution.html?q=ExtremeValueDistribution&lang=en)
I added an amplitude and made the two betas different. I figure your function's center is closer to 9.5 than 10.
New function:
a E^(-E^(((-x + alpha)/b1)) + (-x + alpha)/b2)/((b1 + b2)/2)
{alpha->9.5, b2 -> 0.899093, a -> 5822.2, b1 -> 0.381825}
[Wolfram alpha](http://www.wolframalpha.com/input/?i=plot+11193.8+E%5E%28-E%5E%281.66667+%2810+-+x%29%29+%2B+1.66667+%2810+-+x%29%29+%2Cx+0..16%2C+y+from+0+to+4500):
plot 11193.8 E^(-E^(1.66667 (10 - x)) + 1.66667 (10 - x)) ,x 0..16, y from 0 to 4500
Some points around 10ms:
{{9, 390.254}, {10, 3979.59}, {11, 1680.73}, {12, 562.838}}
Tail does not fit perfectly though. The tail can be fit better if b2 is lower and the peak is chosen to be closer to 9.
| null | CC BY-SA 2.5 | null | 2010-08-06T04:11:29.997 | 2010-08-06T04:11:29.997 | null | null | 782 | null |
1346 | 2 | null | 1337 | 140 | null | I saw this posted as a comment on here somewhere:
[http://xkcd.com/552/](http://xkcd.com/552/)

A: I used to think correlation implied causation. Then I took a statistics class. Now I don't.
B: Sounds like the class helped.
A: Well, maybe.
Title text: Correlation doesn't imply causation, but it does waggle its eyebrows suggestively and gesture furtively while mouthing 'look over there'.
| null | CC BY-SA 3.0 | null | 2010-08-06T04:50:59.280 | 2014-08-29T04:17:00.007 | 2020-06-11T14:32:37.003 | -1 | 287 | null |
1347 | 2 | null | 726 | 19 | null | "Extraordinary claims demand extraordinary evidence."
Often attributed to Carl Sagan, but he was paraphrasing sceptic Marcello Truzzi. Doubtless the concept is even more ancient.
David Hume said, "A wise man, therefore, proportions his belief to the evidence".
One could argue this is not a quote about statistics. However, applied statistics is ultimately in the business of evaluating the quality of evidence for or against some proposition.
| null | CC BY-SA 2.5 | null | 2010-08-06T05:15:22.583 | 2010-08-06T05:15:22.583 | null | null | 521 | null |
1348 | 2 | null | 652 | 1 | null | An old favourite of mine as an introduction to biostatistics is Armitage & Berry's (& now Matthew's):
Statistical Methods in Medical Research
| null | CC BY-SA 2.5 | null | 2010-08-06T05:27:57.743 | 2010-08-06T05:27:57.743 | null | null | 521 | null |
1349 | 2 | null | 1252 | 22 | null | I have previously found UCLA's "Choosing the Correct Statistical Test" to be helpful:
[https://stats.idre.ucla.edu/other/mult-pkg/whatstat/](https://stats.idre.ucla.edu/other/mult-pkg/whatstat/)
It also gives examples of how to do the analysis in SAS, Stata, SPSS and R.
| null | CC BY-SA 4.0 | null | 2010-08-06T05:33:06.860 | 2020-03-04T23:41:10.463 | 2020-03-04T23:41:10.463 | 113546 | 521 | null |
1350 | 1 | 1407 | null | 2 | 4184 | I am working with a large data set (approximately 50K observations) and trying to run a maximum likelihood estimation on 5 unknowns in Stata.
I encountered the error message "Numerical Overflow". How can I overcome this?
I am trying to run a stochastic frontier analysis using the built-in Stata command "frontier". The dependent variable is the log of output and the independent variables are the logs of intermediate inputs, capital, labour, and utilities.
| How to get around Numerical Overflow in Stata? | CC BY-SA 2.5 | null | 2010-08-06T06:32:06.513 | 2010-10-08T16:07:49.240 | 2010-10-08T16:07:49.240 | 8 | 189 | [
"large-data",
"stata",
"computational-statistics"
] |
1351 | 2 | null | 1296 | 5 | null | Assuming you want to pick a distribution for n, p(n), you can apply Bayes' law.
You know that the probability of k events occurring, given that n have actually occurred, is governed by a binomial distribution
$p(k|n) = {n \choose k} p^k (1-p)^{(n-k)}$
The thing you really want to know is the probability of n events having actually occurred, given that you observed k. By Bayes' law:
$p(n|k) = \frac{p(k|n)p(n)}{p(k)}$
By applying the theorem of total probability, we can write:
$p(n|k) = \frac{p(k|n)p(n)}{\sum_{n'} p(k|n')p(n')}$
So without further information, about the distribution of $p(n)$ you can't really go any further.
However, if you want to pick a distribution for $p(n)$ for which there is a value $n$ greater than which $p(n) = 0$, or sufficiently close to zero, then you can do a bit better. For example, assume that the distribution of $n$ is uniform in the range $[0,n_{max}]$. In this case:
$p(n) = \frac{1}{n_{max}}$
The Bayesian formulation simplifies to:
$p(n|k) = \frac{p(k|n)}{\sum_{n'} p(k|n')}$
As for the final part of the problem, I agree that the best approach is to perform a cumulative summation over $p(n|k)$ to generate the cumulative probability distribution function, and iterate until the 0.95 limit is reached.
Given that this question migrated from SO, toy sample code in Python is attached below
```
import numpy.random

p = 0.8
nmax = 200

def factorial(n):
    if n == 0:
        return 1
    return reduce( lambda a,b : a*b, xrange(1,n+1), 1 )

def ncr(n,r):
    return factorial(n) / (factorial(r) * factorial(n-r))

def binomProbability(n, k, p):
    p1 = ncr(n,k)
    p2 = p**k
    p3 = (1-p)**(n-k)
    return p1*p2*p3

def posterior( n, k, p ):
    def p_k_given_n( n, k ):
        return binomProbability(n, k, p)
    def p_n( n ):
        return 1./nmax
    def p_k( k ):
        return sum( [ p_n(nd)*p_k_given_n(nd,k) for nd in range(k,nmax) ] )
    return (p_k_given_n(n,k) * p_n(n)) / p_k(k)

observed_k = 80
p_n_given_k = [ posterior( n, observed_k, p ) for n in range(0,nmax) ]
cp_n_given_k = numpy.cumsum(p_n_given_k)

for n in xrange(0,nmax):
    print n, p_n_given_k[n], cp_n_given_k[n]
```
| null | CC BY-SA 2.5 | null | 2010-08-06T07:44:34.903 | 2010-08-06T07:44:34.903 | null | null | 789 | null |
1352 | 1 | 1384 | null | 11 | 3570 | In an average (median?) conversation about statistics you will often find yourself discussing this or that method of analyzing this or that type of data. In my experience, careful study design, with special thought given to the statistical analysis, is often neglected (working in biology/ecology, this seems to be a prevailing occurrence). Statisticians often find themselves in a gridlock with insufficient (or outright wrong) collected data. To paraphrase Ronald Fisher, they are forced to do a post-mortem on the data, which often leads to weaker conclusions, if any.
I would like to know which references you use to construct a successful study design, preferably for a wide range of methods (e.g. t-test, GLM, GAM, ordination techniques...), that help you avoid the pitfalls mentioned above.
| References for how to plan a study | CC BY-SA 2.5 | null | 2010-08-06T08:06:13.193 | 2017-03-06T08:11:55.837 | 2010-09-16T06:58:05.970 | null | 144 | [
"experiment-design"
] |
1353 | 2 | null | 1296 | 12 | null | I would choose to use the [negative binomial distribution](http://www.math.ntu.edu.tw/~hchen/teaching/StatInference/notes/lecture16.pdf), which returns the probability that there will be X failures before the $k$-th success, when the constant probability of a success is p.
Using an example
```
k=17 # number of successes
p=.6 # constant probability of success
```
the mean and sd for the failures are given by
```
mean.X <- k*(1-p)/p
sd.X <- sqrt(k*(1-p)/p^2)
```
The distribution of the failures X will have approximately the following shape
```
plot(dnbinom(0:(mean.X + 3 * sd.X),k,p),type='l')
```
So, the number of failures will be (with 95% confidence) approximately between
```
qnbinom(.025,k,p)
[1] 4
```
and
```
qnbinom(.975,k,p)
[1] 21
```
So your interval would be [k+qnbinom(.025,k,p), k+qnbinom(.975,k,p)] (using the example's numbers, [21,38])
| null | CC BY-SA 2.5 | null | 2010-08-06T09:01:29.013 | 2010-08-06T10:55:02.630 | 2010-08-06T10:55:02.630 | 339 | 339 | null |
1354 | 2 | null | 1350 | 1 | null | How many variables do you have? Are you using a built-in command?
Five years ago I used mlogit (multinomial logit) in Stata 8 to model 60,000 observations and about 40 variables with no difficulty.
I'd recommend searching the Stata websites (http://www.stata.com/) and even asking technical support.
For example, here are two answers:
[http://www.stata.com/statalist/archive/2007-04/msg00390.html](http://www.stata.com/statalist/archive/2007-04/msg00390.html)
http://statalist.1588530.n2.nabble.com/st-Reproducing-xtlogit-with-xtmelogit-td3418488.html
Stata provides the best technical support of any stats software I know.
| null | CC BY-SA 2.5 | null | 2010-08-06T09:34:37.720 | 2010-08-06T09:34:37.720 | null | null | 521 | null |
1355 | 1 | 1364 | null | 8 | 3283 | I want to predict the results of a simple card game, to judge, on average, how long a game will last.
My 'simple' game is:
- Cards are dealt from a randomised deck to n players (typically 2-4)
- Each player gets five cards
- The top card from the deck is turned over
- Each player takes it in turns to either place a card of the same face value (i.e. 1-10, J, Q, K, A), the same suit (i.e. Hearts, Diamonds, Spades, Clubs) or any suit of magic card (a jack)
- If the player can place a card they do, otherwise they must take a card from the deck
- Play continues in turn until all but one player has no cards left
I'm guessing that I could write code to play a mythical game and report the result, then run that code thousands of times.
Has anyone done this? Can they suggest code that does a similar job (my favoured language is R, but anything would do)? Is there a better way?
| How could I predict the results of a simple card game? | CC BY-SA 2.5 | null | 2010-08-06T09:48:14.730 | 2016-06-03T07:21:02.127 | 2010-08-06T10:01:52.910 | 8 | 114 | [
"r",
"probability",
"games"
] |
1356 | 2 | null | 1337 | 113 | null | Two statisticians were traveling in an airplane from LA to New York.
About an hour into the flight, the pilot announced that they had lost
an engine, but don’t worry, there are three left. However, instead of 5
hours it would take 7 hours to get to New York.
A little later, he
announced that a second engine failed, and they still had two left,
but it would take 10 hours to get to New York.
Somewhat later, the
pilot again came on the intercom and announced that a third engine
had died. Never fear, he announced, because the plane could fly on a
single engine. However, it would now take 18 hours to get to New
York.
At this point, one statistician turned to the other and said, “Gee, I
hope we don’t lose that last engine, or we’ll be up here forever!”
| null | CC BY-SA 2.5 | null | 2010-08-06T09:52:51.527 | 2010-08-06T09:52:51.527 | null | null | 114 | null |
1357 | 1 | null | null | 4 | 2877 | I am trying to compare it to Euclidean distance and Pearson correlation
| Is mutual information invariant to scaling, i.e. multiplying all elements by a nonzero constant? | CC BY-SA 2.5 | null | 2010-08-06T10:48:52.543 | 2011-04-29T00:26:49.170 | 2011-04-29T00:26:49.170 | 3911 | null | [
"correlation",
"mutual-information"
] |
1358 | 1 | 1365 | null | 14 | 989 | In circular statistics, the expectation value of a random variable $Z$ with values on the circle $S$ is defined as
$$
m_1(Z)=\int_S z P^Z(\theta)\textrm{d}\theta
$$
(see [wikipedia](http://en.wikipedia.org/wiki/Circular_statistics#Moments)).
This is a very natural definition, as is the definition of the variance
$$
\mathrm{Var}(Z)=1-|m_1(Z)|.
$$
So we didn't need a second moment in order to define the variance!
Nonetheless, we define the higher moments
$$
m_n(Z)=\int_S z^n P^Z(\theta)\textrm{d}\theta.
$$
I admit that this looks rather natural as well at first sight, and very similar to the definition in linear statistics. But still I feel a little bit uncomfortable, and have the following
Questions:
1. What is measured by the higher moments defined above (intuitively)? Which properties of the distribution can be characterized by their moments?
2. In the computation of the higher moments we use multiplication of complex numbers, although we think of the values of our random variables merely as vectors in the plane or as angles. I know that complex multiplication is essentially addition of angles in this case, but still: why is complex multiplication a meaningful operation for circular data?
| Intuition for higher moments in circular statistics | CC BY-SA 2.5 | null | 2010-08-06T10:57:18.820 | 2011-04-29T00:27:53.567 | 2011-04-29T00:27:53.567 | 3911 | 650 | [
"mathematical-statistics",
"moments",
"intuition",
"circular-statistics"
] |
1359 | 2 | null | 1352 | 3 | null | In general, I would say any book that has DOE (design of experiments) in the title would fit the bill (and there are MANY).
My rule of thumb for such a resource would be to start with the [wiki page](http://en.wikipedia.org/wiki/Design_of_experiments); in particular for your question, notice the [Principles of experimental design, following Ronald A. Fisher](http://en.wikipedia.org/wiki/Design_of_experiments#Principles_of_experimental_design.2C_following_Ronald_A._Fisher)
But a more serious answer would be domain specific (clinical trials have huge manuals, but for a study on mice you'd probably go with some other field-related book)
| null | CC BY-SA 2.5 | null | 2010-08-06T11:09:39.877 | 2010-08-06T11:09:39.877 | null | null | 253 | null |
1360 | 2 | null | 726 | 36 | null | >
Those who ignore Statistics are condemned to reinvent it.
-- Brad Efron
| null | CC BY-SA 2.5 | null | 2010-08-06T11:11:00.890 | 2010-12-03T04:03:04.030 | 2010-12-03T04:03:04.030 | 795 | 778 | null |
1361 | 2 | null | 726 | 20 | null | >
My thesis is simply this: probability does not exist.
- Bruno de Finetti
| null | CC BY-SA 2.5 | null | 2010-08-06T11:15:17.060 | 2010-08-06T11:15:17.060 | null | null | 778 | null |
1362 | 2 | null | 1352 | 3 | null | My rule of thumb is "repeat more than you think it's sufficient".
| null | CC BY-SA 2.5 | null | 2010-08-06T11:17:09.273 | 2010-08-06T15:23:24.753 | 2010-08-06T15:23:24.753 | null | null | null |
1363 | 2 | null | 726 | 2 | null | >
...Statistics used as a catalyst to engineering creation will, I believe, always result in the fastest and most economical progress.
--George Box 1992
| null | CC BY-SA 2.5 | null | 2010-08-06T11:23:15.363 | 2010-12-03T04:03:46.730 | 2010-12-03T04:03:46.730 | 795 | 114 | null |
1364 | 2 | null | 1355 | 11 | null | The easiest way is just to simulate the game lots of times. The R code below simulates a single game.
```
nplayers = 4

#Create an empty data frame to keep track
#of card number, suit and if it's magic
empty.hand = data.frame(number = numeric(52),
                        suit = numeric(52),
                        magic = numeric(52))

#A list of players who are in the game
players = list()
for(i in 1:nplayers)
    players[[i]] = empty.hand

#Simulate shuffling the deck
deck = empty.hand
deck$number = rep(1:13, 4)
deck$suit = as.character(rep(c("H", "C", "S", "D"), each=13))
deck$magic = rep(c(0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0), each=4)
deck = deck[sample(1:52, 52),]

#Deal out five cards per person
for(i in 1:length(players)){
    r = (5*i-4):(5*i)
    players[[i]][r,] = deck[r,]
}

#Play the game
i = 5*length(players) + 1
current = deck[i,]
while(i < 53){
    for(j in 1:length(players)){
        playersdeck = players[[j]]
        #Need to test for magic and suit also - left as an exercise!
        if(is.element(current$number, playersdeck$number)){
            #Update current card
            current = playersdeck[match(current$number, playersdeck$number),]
            #Remove card from player's deck
            playersdeck[match(current$number, playersdeck$number),] = c(0, 0, 0)
        } else {
            #Add card to player's deck
            playersdeck[i,] = deck[i,]
            i = i + 1
        }
        players[[j]] = playersdeck
        #Has someone won or have we run out of cards?
        if(sum(playersdeck$number) == 0 | i > 52){
            i = 53
            break
        }
    }
}

#How many cards are left for each player
for(i in 1:length(players)){
    cat(sum(players[[i]]$number != 0), "\n")
}
```
Some comments
- You will need to add a couple of lines for magic cards and suits, but data structure is already there. I presume you didn't want a complete solution? ;)
- To estimate the average game length, just place the above code in a function and call lots of times.
- Rather than dynamically increasing a vector when a player gets a card, I find it easier just to create a sparse data frame that is more than sufficient. In this case, each player has a data frame with 52 rows, which they will never fill (unless it's a 1 player game).
- There is a small element of strategy in this game: what should you do if you can play more than one card? For example, if 7H comes up and you have 7S, 8H and the JC in your hand, all three of these cards are "playable".
| null | CC BY-SA 2.5 | null | 2010-08-06T12:13:05.263 | 2010-08-09T16:31:07.027 | 2010-08-09T16:31:07.027 | 8 | 8 | null |
1365 | 2 | null | 1358 | 9 | null | The moments are the Fourier coefficients of the probability measure $P^Z$. Suppose (for the sake of intuition) that $Z$ has a density. Then the argument (angle from $1$ in the complex plane) of $Z$ has a density on $[0,2\pi)$, and the moments are the coefficients when that density is expanded in a Fourier series. Thus the usual intuition about Fourier series applies -- these measure the strengths of frequencies in that density.
As for your second question, I think you already gave the answer: "complex multiplication is essentially addition of angles in this case".
| null | CC BY-SA 2.5 | null | 2010-08-06T12:38:35.230 | 2010-08-06T12:38:35.230 | null | null | 89 | null |
1366 | 2 | null | 1315 | 4 | null | Another approach, more justified by network considerations, is to try to fit a sum of independent exponentials with different parameters. A reasonable assumption would be that at each node in the path of the ping, the delay is an independent exponential with a different parameter. A reference to the distributional form of the sum of independent exponentials with differing parameters is [http://www.math.bme.hu/~balazs/sumexp.pdf](http://www.math.bme.hu/~balazs/sumexp.pdf).
You should probably also look at the ping times vs the number of hops.
| null | CC BY-SA 2.5 | null | 2010-08-06T12:46:57.243 | 2010-08-06T12:46:57.243 | null | null | 247 | null |
1367 | 2 | null | 1337 | 96 | null | One passed by Gary Ramseyer:
Statistics play an important role in genetics. For instance, statistics prove that numbers of offspring is an inherited trait. If your parent didn't have any kids, odds are you won't either.
| null | CC BY-SA 2.5 | null | 2010-08-06T13:52:50.747 | 2010-08-06T13:52:50.747 | null | null | 634 | null |
1368 | 2 | null | 1337 | 21 | null | A statistic professor plans to travel to a conference by plane. When he passes the security check, they discover a bomb in his carry-on-baggage. Of course, he is hauled off immediately for interrogation.
"I don't understand it!" the interrogating officer exclaims. "You're an accomplished professional, a caring family man, a pillar of your parish - and now you want to destroy that all by blowing up an airplane!"
"Sorry", the professor interrupts him. "I had never intended to blow up the plane."
"So, for what reason else did you try to bring a bomb on board?!"
"Let me explain. Statistics shows that the probability of a bomb being on an airplane is 1/1000. That's quite high if you think about it - so high that I wouldn't have any peace of mind on a flight."
"And what does this have to do with you bringing a bomb on board of a plane?"
"You see, since the probability of one bomb being on my plane is 1/1000, the chance that there are two bombs is 1/1000000. If I already bring one, the chance of another bomb being around is actually 1/1000000, and I am much safer..."
| null | CC BY-SA 2.5 | null | 2010-08-06T14:00:40.623 | 2010-08-06T14:00:40.623 | null | null | null | null |
1369 | 1 | 1372 | null | 0 | 208 | I have a given distance with a standard deviation. I have now simulated a few hundred distances and would like to draw from these distances a sample of 10-20 resembling the original distribution. Is there any standardized way of doing so?
| Sampling according to a normal distribution | CC BY-SA 2.5 | null | 2010-08-06T14:01:57.110 | 2010-08-06T15:14:46.647 | 2010-08-06T14:14:37.210 | 791 | 791 | [
"sample"
] |
1370 | 2 | null | 1357 | 7 | null | I think the answer is yes to your question. I will show this for the discrete case only and I think the basic idea carries over to the continuous case. MI is defined as:
$I(X;Y) = \sum_{y\in Y}\sum_{x\in X}\Bigg(p(x,y) \log\Big(\frac{p(x,y)}{p(x)p(y)}\Big)\Bigg)$
Define:
$Z_x = \alpha X$
and
$Z_y = \alpha Y$.
So, the question is: Does $I(Z_x;Z_y)$ equal $I(X;Y)$?
Since scaling is a one-to-one transformation it must be that:
$p(z_x) = p(x)$,
$p(z_y) = p(y)$ and
$p(z_x,z_y) = p(x,y)$
Therefore, the mutual information remains the same and hence the answer to your question is yes.
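A quick numerical check, using a hand-rolled discrete MI rather than any particular package:
```
mi <- function(x, y) {
  pxy <- table(x, y) / length(x)            # joint distribution
  px <- rowSums(pxy); py <- colSums(pxy)    # marginals
  sum(ifelse(pxy > 0, pxy * log(pxy / outer(px, py)), 0))
}

set.seed(1)
x <- sample(1:3, 1000, replace = TRUE)
y <- ifelse(runif(1000) < 0.7, x, sample(1:3, 1000, replace = TRUE))

mi(x, y)
mi(5 * x, 5 * y)   # identical: scaling by a nonzero constant is one-to-one
```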
| null | CC BY-SA 2.5 | null | 2010-08-06T14:10:01.230 | 2010-08-06T14:10:01.230 | null | null | null | null |
1371 | 2 | null | 1337 | 137 | null | George Burns said that "If you live to be one hundred, you've got it made. Very few people die past that age."
| null | CC BY-SA 2.5 | null | 2010-08-06T14:12:21.147 | 2010-08-06T14:12:21.147 | null | null | 666 | null |
1372 | 2 | null | 1369 | 2 | null | You mean you want to draw 10-20 numbers from a normal distribution? In R, use `rnorm` function; for a generic solution, see [Wikipedia](http://en.wikipedia.org/wiki/Normal_distribution#Generating_values_from_normal_distribution).
| null | CC BY-SA 2.5 | null | 2010-08-06T15:14:46.647 | 2010-08-06T15:14:46.647 | null | null | null | null |
1373 | 2 | null | 1357 | 1 | null | The intuitive explanation is this: multiplying by a constant does not change the information content of X and Y, and so it does not change their mutual information either -- thus it is invariant to scaling. Still, Srikant gave you a strict proof of this fact.
| null | CC BY-SA 2.5 | null | 2010-08-06T15:22:43.663 | 2010-08-06T15:22:43.663 | null | null | null | null |
1374 | 2 | null | 1337 | 11 | null | How many statisticians does it take to change a light bulb?
5–7, with p-value 0.01
| null | CC BY-SA 2.5 | null | 2010-08-06T16:10:48.470 | 2010-08-06T17:54:34.830 | 2010-08-06T17:54:34.830 | null | null | null |
1375 | 2 | null | 1337 | 34 | null | Here is a list of many fun statistics jokes ([link](http://www.se16.info/hgb/statjoke.htm))
Here are just a few:
---
Did you hear the one about the statistician? Probably....
---
It is proven that the celebration of birthdays is healthy. Statistics show that those people who celebrate the most birthdays become the oldest. -- S. den Hartog, PhD Thesis, University of Groningen.
---
A statistician is a person who draws a mathematically precise line from an unwarranted assumption to a foregone conclusion.
---
The average statistician is just plain mean.
---
And there is also the one from [a TED talk](http://www.ted.com/talks/peter_donnelly_shows_how_stats_fool_juries.html):
"A friend asked my wife what I do. She answered that I model. Model what, she was asked - he models genes, she answered."
| null | CC BY-SA 3.0 | null | 2010-08-06T18:42:26.487 | 2013-06-27T19:56:07.393 | 2013-06-27T19:56:07.393 | 6981 | 253 | null |
1376 | 1 | 1397 | null | 6 | 1036 | I am looking for a robust version of Hotelling's $T^2$ test for the mean of a vector. As data, I have an $m \times n$ matrix, $X$, each row an i.i.d. sample of an $n$-dimensional RV, $x$. The null hypothesis I wish to test is $E[x] = \mu$, where $\mu$ is a fixed $n$-dimensional vector. The classical Hotelling test appears to be susceptible to non-normality in the distribution of $x$ (just as its 1-d analogue, the Student t-test, is susceptible to skew and kurtosis).
What is the state-of-the-art robust version of this test? I am looking for something relatively fast and conceptually simple. There was a paper in COMPSTAT 2008 on the topic, but I do not have access to the proceedings. Any help?
| Robust version of Hotelling $T^2$ test | CC BY-SA 2.5 | null | 2010-08-06T19:02:08.477 | 2022-12-11T10:13:47.333 | 2010-09-16T06:57:55.643 | null | 795 | [
"robust"
] |
1377 | 2 | null | 1337 | 68 | null | "If you torture data enough it will confess" one of my professors
| null | CC BY-SA 2.5 | null | 2010-08-06T20:07:17.070 | 2010-08-06T20:07:17.070 | null | null | 236 | null |
1378 | 1 | null | null | 10 | 1157 | I have a dataset that contains ~7,500 blood tests from ~2,500 individuals. I'm trying to find out if variability in the blood tests increases or decreases with the time between two tests. For example - I draw your blood for the baseline test, then immediately draw a second sample. Six months later, I draw another sample. One might expect the difference between the baseline and the immediate repeat tests to be smaller than the difference between the baseline and the six-month test.
Each point on the plot below reflects the difference between two tests. X is the number of days between two tests; Y is the size of the difference between the two tests. As you can see, tests aren't evenly distributed along X - the study wasn't designed to address this question, really. Because the points are so heavily stacked at the mean, I've included 95% (blue) and 99% (red) quantile lines, based on 28-day windows. These are obviously pulled around by the more extreme points, but you get the idea.
[alt text http://a.imageshack.us/img175/6595/diffsbydays.png](http://a.imageshack.us/img175/6595/diffsbydays.png)
It looks to me like the variability is fairly stable. If anything, it's higher when the test is repeated within a short period - that's terribly counterintuitive. How can I address this in a systematic way, accounting for varying n at each time point (and some periods with no tests at all)? Your ideas are greatly appreciated.
Just for reference, this is the distribution of the number of days between test and retest:
[alt text http://a.imageshack.us/img697/6572/testsateachtimepoint.png](http://a.imageshack.us/img697/6572/testsateachtimepoint.png)
| Estimating variability over time | CC BY-SA 2.5 | null | 2010-08-06T21:54:11.230 | 2010-08-17T06:29:42.563 | null | null | 71 | [
"repeated-measures",
"variability"
] |
1379 | 2 | null | 1202 | 1 | null | You could compute a [Kolmogorov-Smirnov](http://en.wikipedia.org/wiki/Kolmogorov_Smirnov_Test) statistic based on your binned data. This would work by first computing an empirical CDF based on your bins (just a cumulative sum with rescaling), then computing the $\infty$-norm of the differences.
I don't know R well enough to give you code in R, but can quote the Matlab very simply:
```
%let base be the 1 x 1024 vector of binned observed data
%let dists be the 1000 x 1024 matrix of binned distributions to be checked
emp_base = cumsum(base,2);emp_base = emp_base ./ emp_base(end); %cumsum, then normalize
emp_dists = cumsum(dists,2);
emp_dists = bsxfun(@rdivide,emp_dists,emp_dists(:,end)); %normalize for the top sum.
emp_diff = bsxfun(@minus,emp_base,emp_dists); %subtract the cdfs; R does this transparently, IIRC
KS_stat = max(abs(emp_diff),[],2); %take the maximum absolute difference along dim 2
%KS_stat is now a 1000 x 1 vector of the KS statistic. you can convert to a p-value as well.
%but you might as well just rank them.
[dum,sort_idx] = sort(KS_stat); %Matlab does not have a builtin ranking; PITA!
dist_ranks = nan(numel(sort_idx),1);
dist_ranks(sort_idx) = (1:numel(sort_idx))';
%dist_ranks are now the ranks of each of the distributions (ignoring ties!)
%if you want the best match, it is indexed by sort_idx(1);
```
The `bsxfun` nonsense here is Matlab's way of doing proper vector recycling, which R (and numpy, IIRC) does transparently.
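For an R flavour of the same steps, a rough sketch with simulated stand-ins for `base` and `dists`:
```
set.seed(1)
base  <- tabulate(rpois(10000, 300), nbins = 1024)          # fake binned observed data
dists <- t(replicate(1000, tabulate(rpois(10000, sample(200:400, 1)), nbins = 1024)))

emp.base  <- cumsum(base) / sum(base)                        # empirical CDF of the data
emp.dists <- t(apply(dists, 1, cumsum))
emp.dists <- emp.dists / emp.dists[, ncol(emp.dists)]        # normalise each candidate CDF
ks.stat   <- apply(abs(sweep(emp.dists, 2, emp.base)), 1, max)

which.min(ks.stat)   # index of the closest candidate distribution
rank(ks.stat)[1:5]   # ranks of the first few candidates
```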
| null | CC BY-SA 2.5 | null | 2010-08-06T22:10:44.290 | 2010-08-07T16:00:25.753 | 2010-08-07T16:00:25.753 | 795 | 795 | null |
1380 | 1 | 1636 | null | 3 | 2734 | (migrating from math overflow, where no answers were posted)
Suppose I have $K$ different methods for forecasting a binary random variable, which I test on independent sets of data, resulting in $K$ contingency tables of values $n_{ijk}$ for $i,j=1,2$ and $k=1,2,...,K$. How can I compare these methods based on the contingency tables? The general case would be nice, but $K=2$ is also very interesting.
I can think of a few approaches:
- compute some statistic on each of the tables, and compare those random variables (I'm not sure if this is a standard problem or not),
- something like Goodman's improvement of Stouffer's method, but I cannot access this paper, and was hoping for something a little more recent (more likely to have the latest-greatest, plus computer simulations).
Any ideas?
| Comparing multiple contingency tables, independent data | CC BY-SA 2.5 | null | 2010-08-06T22:16:01.897 | 2010-10-01T01:34:16.313 | 2010-09-30T21:20:58.353 | 930 | 795 | [
"forecasting",
"contingency-tables"
] |
1381 | 2 | null | 1001 | 5 | null | The Baumgartner-Weiss-Schindler statistic is a modern alternative to the K-S test, and appears to be more powerful in certain situations. A few links:
- A Nonparametric Test for the General Two-Sample Problem (the original B.W.S. paper)
- M. Neuhauser, 'Exact Tests Based on the Baumgartner-Weiss-Schindler Statistic--A Survey', Statistical Papers, Vol 46 (2005), pp. 1-30. (perhaps not relevant to your large sample case...)
- H. Murakami, 'K-Sample Rank Test Based on Modified Baumgartner Statistic and its Power Comparison', J. Jpn. Comp. Statist. Vol 19 (2006), pp. 1-13.
- M. Neuhauser, 'One-Sided Two-Sample and Trend Tests Based on a Modified Baumgartner-Weiss-Schindler Statistic', J. Nonparametric Statistics, Vol 13 (2001) pp 729-739.
edit: in the years since I posted this answer, I have implemented the BWS test in R in the [BWStest package](https://cran.r-project.org/web/packages/BWStest/index.html). Use is as simple as:
```
require(BWStest)
set.seed(12345)
# under the null:
x <- rnorm(200)
y <- rnorm(200)
hval <- bws_test(x, y)
```
| null | CC BY-SA 4.0 | null | 2010-08-06T22:34:27.163 | 2022-04-17T17:42:24.263 | 2022-04-17T17:42:24.263 | 79696 | 795 | null |
1382 | 2 | null | 1337 | 78 | null | A statistics major was completely hung over the day of his final exam. It was a true/false test, so he decided to flip a coin for the answers. The statistics professor watched the student the entire two hours as he was flipping the coin … writing the answer … flipping the coin … writing the answer. At the end of the two hours, everyone else had left the final except for the one student. The professor walks up to his desk and interrupts the student, saying, “Listen, I have seen that you did not study for this statistics test, you didn’t even open the exam. If you are just flipping a coin for your answer, what is taking you so long?”
The student replies bitterly (as he is still flipping the coin), “Shhh! I am checking my answers!”
I've posted a few others on [my blog](http://robjhyndman.com/researchtips/statistical-jokes/).
| null | CC BY-SA 2.5 | null | 2010-08-07T02:33:35.130 | 2010-08-07T02:33:35.130 | null | null | 159 | null |
1383 | 1 | 1594 | null | 0 | 1196 | There's a lot of work done in statistics,
while the state of the art in lossless data compression is apparently this:
[http://mattmahoney.net/dc/dce.html#Section_4](http://mattmahoney.net/dc/dce.html#Section_4)
Please suggest good methods/models applicable for data compression.
To be specific:
1) How to estimate the probability of the next bit in a bit string?
2) How to integrate predictions of different models?
Update:
>
you should include a better description of what data you
want to compress...
Why, I'm talking about universal compression, obviously.
For data with known structure it's really not a mathematical problem,
so there's no sense in discussing it here.
In other words, the first question is: given a string of bits,
what do we do to determine the probability of the next bit, as
precisely as possible?
>
otherwise we will have 10 different answers trying to
summarize different part of the huge theory of compression
I'd written quite a few statistical compressors, and I'm
not interested in that.
I'm asking how a statistician would approach this task:
detect correlations in the given data and compute a probability
estimate for the next bit.
>
In addition, the two point you give to be more specific
are not detailed enough to be understood.
What's not detailed in there? I'm even talking about bits,
not some vague "symbols". I'd note, though, that I'm talking
about the "probability of a bit" because whether to compute the probability
of bit==0 or bit==1 is a matter of convenience.
Also, I'm obviously not talking about some "random data compression",
or methods with infinite complexity, like "Kolmogorov compression".
Again, I want to know how a good statistician would approach this
problem, given a string of bits.
Here's an example, if you need one: hxxp://encode.ru/threads/482-Bit-guessing-game
| Suggest a method for statistical data compression | CC BY-SA 2.5 | null | 2010-08-07T03:47:26.330 | 2010-08-14T12:10:58.203 | 2010-08-12T13:39:10.303 | 8 | 799 | [
"modeling",
"compression"
] |
1384 | 2 | null | 1352 | 5 | null |
- I agree with the point that statistics consultants are often brought in later on a project when it's too late to remedy design flaws. It's also true that many statistics books give scant attention to study design issues.
- You say you want designs "preferably for a wide range of methods (e.g. t-test, GLM, GAM, ordination techniques...". I see designs as relatively independent of statistical method: e.g., experiments (between subjects and within subjects factors) versus observational studies; longitudinal versus cross-sectional; etc. There are also a lot of issues related to measurement, domain specific theoretical knowledge, and domain specific study design principles that need to be understood in order to design a good study.
- In terms of books, I'd be inclined to look at domain specific books. In psychology (where I'm from) this means books on psychometrics for measurement, a book on research methods, and a book on statistics, as well as a range of even more domain specific research method books. You might want to check out Research Methods Knowledge Base for a free online resource for the social sciences.
- Published journal articles are also a good guide to what is best practice in a particular domain.
| null | CC BY-SA 2.5 | null | 2010-08-07T03:55:36.433 | 2010-08-08T11:31:31.680 | 2010-08-08T11:31:31.680 | 183 | 183 | null |
1385 | 1 | 1404 | null | 13 | 3179 | My question concerns techniques for dealing with incomplete data during classifier/model training/fitting.
For instance, in a dataset with a few hundred rows, each row having, let's say, five dimensions and a class label as the last item, most data points will look like this:
[0.74, 0.39, 0.14, 0.33, 0.34, 0]
A few might look something like this:
[0.21, 0.68, ?, 0.82, 0.58, 1]
So it's those types of data points that are the focus of this Question.
My initial reason for asking this question was a problem directly in front of me; however, before posting my question, I thought it might be more useful if I re-phrased it so the answers would be useful to a larger portion of the community.
As a simple heuristic, let's divide these data-handling techniques based on when during the processing flow they are employed--before input to the classifier or during (i.e., the technique is inside the classifier).
The best example I can think of for the latter is the clever 'three-way branching' technique used in Decision Trees.
No doubt, the former category is far larger. The techniques I am aware of all fall into one of the groups below.
While recently reviewing my personal notes on "missing data handling" I noticed that I had quite an impressive list of techniques. I just maintain these notes for general peace of mind and in case a junior colleague asks me how to deal with missing data. In actual practice, I don't use any of them, except for the last one.
- Imputation: a broad rubric for a set of techniques whose common denominator (I believe) is that the missing data are supplied directly by the same data set--substitution rather than estimation/prediction (a bare-bones sketch of this and of the 'denial' option appears right after this list).
- Reconstruction: estimate the missing data points using an auto-associative network (just a neural network in which the sizes of the input and output layers are equal--in other words, the output has the same dimension as the input); the idea here is to train this network on complete data, then feed it incomplete patterns and read the missing values from the output nodes.
- Bootstrapping: (no summary necessary, I shouldn't think, given its use elsewhere in statistical analysis).
- Denial: quietly remove the data points with missing/corrupt elements from your training set and pretend they never existed.
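A bare-bones R sketch of the simplest versions of the first and last options; the data frame, column names, and missingness pattern below are invented purely for illustration (proper multiple imputation is a different, larger topic):
```
set.seed(1)
d <- data.frame(x1 = runif(10), x2 = runif(10), x3 = runif(10),
                class = rbinom(10, 1, 0.5))
d$x3[c(3, 7)] <- NA                      # fabricate a couple of missing values

# "denial": drop the incomplete rows entirely
d_complete <- d[complete.cases(d), ]

# single mean imputation: fill each NA with the column mean of the observed values
d_imputed <- d
for (j in c("x1", "x2", "x3")) {
  miss <- is.na(d_imputed[[j]])
  d_imputed[[j]][miss] <- mean(d_imputed[[j]], na.rm = TRUE)
}
```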
| Techniques for Handling Incomplete/Missing Data | CC BY-SA 2.5 | null | 2010-08-07T05:07:27.083 | 2010-09-16T06:47:49.967 | 2010-09-16T06:47:49.967 | null | 438 | [
"missing-data"
] |
1386 | 1 | 1390 | null | 19 | 10736 | I am trying to test the null $E[X] = 0$, against the local alternative $E[X] > 0$, for a random variable $X$, subject to mild to medium skew and kurtosis of the random variable. Following suggestions by Wilcox in 'Introduction to Robust Estimation and Hypothesis Testing', I have looked at tests based on the trimmed mean, the median, as well as the M-estimator of location (Wilcox' "one-step" procedure). These robust tests do outperform the standard t-test, in terms of power, when testing with a distribution that is non-skewed, but leptokurtotic.
However, when testing with a distribution that is skewed, these one-sided tests are either far too liberal or far too conservative under the null hypothesis, depending on whether the distribution is left- or right-skewed, respectively. For example, with 1000 observations, the test based on the median will actually reject ~40% of the time, at the nominal 5% level. The reason for this is obvious: for skewed distributions, the median and the mean are rather different. However, in my application, I really need to test the mean, not the median, not the trimmed mean.
Is there a more robust version of the t-test that actually tests for the mean, but is impervious to skew and kurtosis?
Ideally the procedure would work well in the no-skew, high-kurtosis case as well. The 'one-step' test is almost good enough, with the 'bend' parameter set relatively high, but it is less powerful than the trimmed mean tests when there is no skew, and has some troubles maintaining the nominal level of rejects under skew.
background: the reason I really care about the mean, and not the median, is that the test would be used in a financial application. For example, if you wanted to test whether a portfolio had positive expected log returns, the mean is actually appropriate because if you invest in the portfolio, you will experience all the returns (which is the mean times the number of samples), instead of $n$ duplicates of the median. That is, I really care about the sum of $n$ draws from the R.V. $X$.
| Robust t-test for mean | CC BY-SA 3.0 | null | 2010-08-07T05:18:58.967 | 2020-10-12T20:08:19.260 | 2012-06-29T05:51:10.347 | 183 | 795 | [
"hypothesis-testing",
"t-test",
"finance",
"robust"
] |
1387 | 2 | null | 1337 | 7 | null | there was the one about the two statisticians who tried to use grant money to pay for their bill at a strip club. They were vindicated when it was explained they were performing a 'posterior analysis'. (groan)
| null | CC BY-SA 2.5 | null | 2010-08-07T06:03:39.740 | 2010-08-07T06:03:39.740 | null | null | 795 | null |
1388 | 2 | null | 1337 | 222 | null | >
A statistician's wife had twins. He
was delighted. He rang the minister
who was also delighted. "Bring them to
church on Sunday and we'll baptize
them," said the minister. "No,"
replied the statistician. "Baptize
one. We'll keep the other as a
control."
STATS: The Magazine For Students of Statistics, Winter 1996, Number 15
| null | CC BY-SA 2.5 | null | 2010-08-07T07:15:20.137 | 2010-08-07T07:15:20.137 | null | null | null | null |
1389 | 1 | 1392 | null | 5 | 9720 | I came across an error of numerical overflow when running a maximum likelihood estimation on a log-linear specification.
What does numerical overflow mean?
| What is numerical overflow? | CC BY-SA 2.5 | null | 2010-08-07T07:23:07.937 | 2010-08-07T22:38:33.683 | null | null | 189 | [
"estimation",
"maximum-likelihood"
] |
1390 | 2 | null | 1386 | 5 | null | Why are you looking at non-parametric tests? Are the assumptions of the t-test violated? Namely, ordinal or non-normal data and inconstant variances? Of course, if your sample is large enough you can justify the parametric t-test with its greater power despite the lack of normality in the sample. Likewise if your concern is unequal variances, there are corrections to the parametric test that yield accurate p-values (the Welch correction).
Otherwise, comparing your results to the t-test is not a good way to go about this, because the t-test results are biased when the assumptions are not met. The Mann-Whitney U is an appropriate non-parametric alternative, if that's what you really need. You only lose power if you are using the non-parametric test when you could justifiably use the t-test (because the assumptions are met).
And, just for some more background, go here: [Student's t Test for Independent Samples](http://www.jerrydallal.com/LHSP/STUDENT.HTM).
| null | CC BY-SA 4.0 | null | 2010-08-07T07:23:55.127 | 2020-10-12T20:08:19.260 | 2020-10-12T20:08:19.260 | 236645 | 485 | null |
1391 | 2 | null | 1386 | 13 | null | I agree that if you want to actually test whether the group means are different (as opposed to testing differences between group medians or trimmed means, etc.), then you don't want to use a nonparametric test that tests a different hypothesis.
- In general, p-values from a t-test tend to be fairly accurate given moderate departures from the assumption of normality of residuals.
Check out this applet to get an intuition on this robustness: http://onlinestatbook.com/stat_sim/robustness/index.html
- If you're still concerned about the violation of the normality assumption, you might want to bootstrap (see the rough sketch after this list); e.g., http://biostat.mc.vanderbilt.edu/wiki/pub/Main/JenniferThompson/ms_mtg_18oct07.pdf
- You could also transform the skewed dependent variable to resolve issues with departures from normality.
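A rough sketch of what a percentile-bootstrap version of the one-sided mean test might look like (this is my own illustration, not taken from the slides linked above, and the skewed example data are invented):
```
set.seed(1)
x <- rexp(1000) - 1                      # skewed sample whose true mean is 0
B <- 5000
boot_means <- replicate(B, mean(sample(x, length(x), replace = TRUE)))
# one-sided percentile-bootstrap p-value for H0: E[X] = 0 vs E[X] > 0
p_value <- mean(boot_means <= 0)
p_value
```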
| null | CC BY-SA 2.5 | null | 2010-08-07T07:34:44.433 | 2010-08-07T09:39:58.333 | 2010-08-07T09:39:58.333 | 183 | 183 | null |
1392 | 2 | null | 1389 | 9 | null | It means that the algorithm generated a value that is greater than the maximum allowed for that type of variable. This happens because computers use a finite number of bits to represent numbers, so they cannot represent arbitrary numbers, only a limited subset of them.
The actual limit depends on the type of variable and the architecture of the system.
Why that happens during an MLE I'm not sure; my best guess would be that you should change the starting parameters.
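As a tiny, generic illustration in R (not tied to whatever software produced the error): exponentiating a large value overflows to `Inf`, while multiplying many small likelihood contributions underflows to zero, and either can derail an optimizer started from bad values.
```
exp(1000)              # Inf: overflow of a double-precision number
exp(-1000)             # 0: underflow
prod(rep(1e-10, 40))   # 0: a product of many small likelihood terms underflows
```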
| null | CC BY-SA 2.5 | null | 2010-08-07T07:35:58.240 | 2010-08-07T07:35:58.240 | null | null | 582 | null |
1393 | 2 | null | 1268 | 6 | null | There is a reasonably new area of research called Matrix Completion, that probably does what you want. A really nice introduction is given in this [lecture](http://videolectures.net/mlss09us_candes_mccota/) by Emmanuel Candes
| null | CC BY-SA 2.5 | null | 2010-08-07T08:12:09.650 | 2010-08-07T09:29:45.423 | 2010-08-07T09:29:45.423 | 352 | 352 | null |
1395 | 1 | 1402 | null | 5 | 1392 | Can anyone recommend me an open source graphic library to create forest and funnel plots?
I was aiming at using it on a Java desktop application.
| Libraries for forest and funnel plots | CC BY-SA 2.5 | 0 | 2010-08-07T13:53:16.873 | 2010-08-11T10:53:09.990 | 2010-08-11T10:53:09.990 | 8 | 807 | [
"data-visualization",
"funnel-plot",
"java"
] |
1396 | 2 | null | 1389 | 3 | null | You can probably avoid your overflow problems by working with the log of the likelihood function rather than the likelihood function itself. Both have the same maximum.
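A small toy illustration of why this works (my own example: a normal mean with known unit variance, maximized by a generic optimizer):
```
set.seed(1)
x <- rnorm(2000, mean = 3)
lik    <- function(m) prod(dnorm(x, mean = m, sd = 1))            # underflows to 0 everywhere
loglik <- function(m) sum(dnorm(x, mean = m, sd = 1, log = TRUE))
optimize(lik,    c(-10, 10), maximum = TRUE)$maximum  # useless: the objective is flat at 0
optimize(loglik, c(-10, 10), maximum = TRUE)$maximum  # close to the true mean of 3
```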
| null | CC BY-SA 2.5 | null | 2010-08-07T14:36:40.593 | 2010-08-07T14:36:40.593 | null | null | 319 | null |
1397 | 2 | null | 1376 | 5 | null | Sure: two answers
a) If by robustness you mean robust to outliers, then run Hotelling's T-test using a robust estimate of scale/scatter: you will find all the explanations and R code here:
[http://www.statsravingmad.com/blog/statistics/a-robust-hotelling-test/](http://www.statsravingmad.com/blog/statistics/a-robust-hotelling-test/)
b) If by robustness you mean optimal under a large group of distributions, then you should go for a sign-based T2 (ask if this is what you want; from the tone of your question I think not).
PS: this is the paper you want:
Roelant, E., Van Aelst, S., and Willems, G. (2008), “Fast Bootstrap for Robust Hotelling Tests,” COMPSTAT 2008: Proceedings in Computational Statistics (P. Brito, Ed.) Heidelberg: Physika-Verlag, to appear.
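To make option (a) a little more concrete, here is a bare-bones sketch of my own (not taken from the linked post or the paper above) of a one-sample Hotelling-type statistic built on a robust MCD estimate of location and scatter via `MASS::cov.rob`; for honest p-values you would bootstrap it, which is exactly what the fast bootstrap in the reference is for:
```
library(MASS)
set.seed(1)
X   <- matrix(rnorm(50 * 4), ncol = 4)    # toy data: n = 50, p = 4
mu0 <- rep(0, 4)                          # hypothesized mean vector
rob <- cov.rob(X, method = "mcd")         # robust location/scatter (MCD)
d   <- rob$center - mu0
T2  <- nrow(X) * as.numeric(t(d) %*% solve(rob$cov) %*% d)
T2
```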
| null | CC BY-SA 2.5 | null | 2010-08-07T14:40:50.007 | 2010-08-07T15:39:38.530 | 2010-08-07T15:39:38.530 | 603 | 603 | null |
1399 | 1 | 1401 | null | 13 | 2303 | I'm interested in obtaining a bootstrapped confidence interval on quantity X, when this quantity is measured 10 times in each of 10 individuals.
One approach is to obtain the mean per individual, then bootstrap the means (e.g., resample the means with replacement).
Another approach is to do the following on each iteration of the bootstrapping procedure: within each individual, resample that individual's 10 observations with replacement, then compute a new mean for that individual, and finally compute a new group mean. In this approach, each individual observed in the original data set always contribute to the group mean on each iteration of the bootstrap procedure.
Finally, a third approach is to combine the above two approaches: resample individuals, then resample within those individuals. This approach differs from the preceding approach in that it permits the same individual to contribute multiple times to the group mean on each iteration, though because each contribution is generated via an independent resampling procedure, these contributions may be expected to vary slightly from each other.
In practice, I find that these approaches yield different estimates for the confidence interval (e.g., with one data set, I find that the third approach yields much larger confidence intervals than the first two approaches), so I'm curious what each might be interpreted to represent.
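To make the three schemes concrete, here is a rough R sketch; it assumes a balanced layout (a 10 x 10 matrix `dat`, one row per individual), which is an idealization of the data described above:
```
set.seed(42)
dat <- matrix(rnorm(100), nrow = 10)   # rows = individuals, columns = repeated measurements
B <- 2000

boot_means <- function(scheme) {
  replicate(B, {
    if (scheme == 1) {            # 1: resample the individual means
      mean(sample(rowMeans(dat), 10, replace = TRUE))
    } else if (scheme == 2) {     # 2: keep all individuals, resample within each
      mean(apply(dat, 1, function(x) mean(sample(x, 10, replace = TRUE))))
    } else {                      # 3: resample individuals, then resample within them
      ids <- sample(1:10, 10, replace = TRUE)
      mean(sapply(ids, function(i) mean(sample(dat[i, ], 10, replace = TRUE))))
    }
  })
}

ci <- lapply(1:3, function(s) quantile(boot_means(s), c(0.025, 0.975)))
ci
```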
| Obtaining and interpreting bootstrapped confidence intervals from hierarchical data | CC BY-SA 2.5 | null | 2010-08-07T18:10:02.220 | 2010-08-23T20:39:21.760 | 2010-08-13T00:57:22.773 | 364 | 364 | [
"confidence-interval",
"bootstrap"
] |
1401 | 2 | null | 1399 | 7 | null | Your first approach gives a between-S (between-subjects) CI. If you wanted to measure within-S variability, then that's the wrong approach.
The second approach would generate a within-S CI that would only apply to those 10 individuals.
The last approach is the correct one for the within-S CI. Any increase in the CI is because your CI is more representative of a CI that could be applied to the population instead of just those 10 S's.
| null | CC BY-SA 2.5 | null | 2010-08-07T18:31:32.663 | 2010-08-07T18:40:21.040 | 2010-08-07T18:40:21.040 | 601 | 601 | null |
1402 | 2 | null | 1395 | 5 | null | Well, I use [graphviz](http://www.graphviz.org), which has Java bindings ([Grappa](http://www2.research.att.com/~john/Grappa/)).
Although the dot language (graphviz's syntax) is simple, I prefer to use graphviz as a library through the excellent and production-stable Python bindings, [pygraphviz](http://networkx.lanl.gov/pygraphviz/) and [networkx](http://networkx.lanl.gov/).
Here's the code for a simple 'funnel diagram' using those tools; it's not the most elaborate diagram, but it is complete--it initializes the graph object, creates all of the necessary components, styles them, renders the graph, and writes it to file.
```
import networkx as NX
import pygraphviz as PV
G = PV.AGraph(strict=False, directed=True) # initialize graph object
# create graph components:
node_list = ["Step1", "Step2", "Step3", "Step4"]
edge_list = [("Step1", "Step2"), ("Step2", "Step3"), ("Step3", "Step4")]
G.add_nodes_from(node_list)
G.add_edge("Step1", "Step2")
G.add_edge("Step2", "Step3")
G.add_edge("Step3", "Step4")
# style them:
nak = "fontname fontsize fontcolor shape style fill color size".split()
nav = "Arial 11 white invtrapezium filled cornflowerblue cornflowerblue 1.4".split()
nas = dict(zip(nak, nav))
for k, v in nas.items():
    G.node_attr[k] = v
eak = "fontname fontsize fontcolor dir arrowhead arrowsize arrowtail".split()
eav = "Arial 10 red4 forward normal 0.8 inv".split()
eas = dict(zip(eak, eav))
for k, v in eas.items():
    G.edge_attr[k] = v
n1 = G.get_node("Step1")
n1.attr['fontsize'] = '11'
n1.attr['fontcolor'] = 'red4'
n1.attr['label'] = '1411'
n1.attr['shape'] = 'rectangle'
n1.attr['width'] = '1.4'
n1.attr['height'] = '0.05'
n1.attr['color'] = 'firebrick4'
n4 = G.get_node("Step4")
n4.attr['shape'] = 'rectangle'
# it's simple to scale graph features to indicate 'flow' conditions, e.g., scale
# each container size based on how many items each holds in a given time snapshot:
# (instead of setting node attribute ('width') to a static quantity, you would
# just bind 'n1.attr['width']' to a variable such as 'total_from_container_1'
n1 = G.get_node("Step2")
n1.attr['width'] = '2.4'
# likewise, you can do the same with edgewidth (i.e., make the arrow thicker
# to indicate higher 'flow rate')
e1 = G.get_edge("Step1", "Step2")
e1.attr['label'] = ' 1411'
e1.attr['penwidth'] = 2.6
# and you can easily add labels to the nodes and edges to indicate e.g., quantities:
e1 = G.get_edge("Step2", "Step3")
e1.attr['label'] = ' 392'
G.write("conv_fnl.dot") # save the dot file
G.draw("conv_fnl.png") # save the rendered diagram
```
[alt text http://a.imageshack.us/img148/390/convfunnel.png](http://a.imageshack.us/img148/390/convfunnel.png)
| null | CC BY-SA 2.5 | null | 2010-08-07T21:14:56.077 | 2010-08-11T04:32:31.583 | 2010-08-11T04:32:31.583 | 438 | 438 | null |
1403 | 2 | null | 1389 | 1 | null | As stated by nico, numerical overflow is when computation finds a number that is too great for the limited number of bits allocated by software to store the number. For example, if your software uses 32 bits to store signed integers, then computing an integer that is greater than 2,147,483,647 (or smaller than -2,147,483,648) will cause overflow.
One common reason for numerical overflow is trying to divide by a very small (close to zero) number. If the absolute values of your numbers are not too large, look at your data and try to figure out where you might be dividing by a very small number.
| null | CC BY-SA 2.5 | null | 2010-08-07T22:28:44.363 | 2010-08-07T22:38:33.683 | 2010-08-07T22:38:33.683 | 666 | 666 | null |
1404 | 2 | null | 1385 | 4 | null | I gave this answer to [another question](https://stats.stackexchange.com/questions/1268/svd-dimensionality-reduction-for-time-series-of-different-length/1393#1393), but it might apply here too.
"There is a reasonably new area of research called Matrix Completion, that probably does what you want. A really nice introduction is given in this [lecture](http://videolectures.net/mlss09us_candes_mccota/) by Emmanuel Candes"
Essentially, if your dataset has low rank (or approximately low rank), i.e., you have 100 rows but the underlying matrix has some small rank, say 10 (only 10 large singular values), then you can use matrix completion to fill in the missing data.
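A very crude sketch of the flavor of these algorithms, written from scratch in R (iterative SVD imputation; this is a toy stand-in, not the algorithm from the lecture):
```
impute_lowrank <- function(X, rank = 2, iters = 100) {
  miss <- is.na(X)
  Xf <- X
  Xf[miss] <- mean(X, na.rm = TRUE)       # initialize missing cells at the grand mean
  for (i in seq_len(iters)) {
    s   <- svd(Xf)
    low <- s$u[, 1:rank, drop = FALSE] %*% diag(s$d[1:rank], rank) %*%
           t(s$v[, 1:rank, drop = FALSE])
    Xf[miss] <- low[miss]                 # refresh only the missing cells
  }
  Xf
}

# toy usage: a rank-2 matrix with 10% of its entries deleted
set.seed(1)
M   <- tcrossprod(matrix(rnorm(100 * 2), 100, 2), matrix(rnorm(20 * 2), 20, 2))
Mna <- M
Mna[sample(length(M), 200)] <- NA
filled <- impute_lowrank(Mna, rank = 2)
```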
| null | CC BY-SA 2.5 | null | 2010-08-07T22:57:34.860 | 2010-08-07T22:57:34.860 | 2017-04-13T12:44:51.060 | -1 | 352 | null |
1405 | 1 | null | null | 8 | 20253 | I am attempting to compare two diagnostic odds ratios (DORs). I would like to know of a statistical test which will allow me to do this. Please help! Thank you!
| Statistical test for difference between two odds ratios? | CC BY-SA 2.5 | null | 2010-08-08T00:44:57.630 | 2019-09-12T07:26:51.450 | 2011-04-29T00:28:51.273 | 3911 | null | [
"hypothesis-testing"
] |
1406 | 2 | null | 726 | 39 | null | This is unlikely to be a popular quote, but anyway,
>
If your experiment needs statistics, you ought to have done a better experiment.
Ernest Rutherford
| null | CC BY-SA 2.5 | null | 2010-08-08T04:15:18.393 | 2010-08-08T04:15:18.393 | null | null | 352 | null |
1407 | 2 | null | 1350 | 7 | null | After a day of searching, I found out that the issue was due to starting values. Thought I should just post the answer for future reference.
The frontier command in Stata obtains its starting values using method of moments. The initial values might have produced negative infinity for the log likelihood. To get around the problem I needed to specify the starting values myself, which were obtained from a linear regression.
| null | CC BY-SA 2.5 | null | 2010-08-08T04:50:18.187 | 2010-08-08T04:50:18.187 | null | null | 189 | null |
1408 | 2 | null | 942 | 6 | null | Most commonly used and implemented discrete wavelet basis functions (as distinct from the CWT described in Robin's answer) have two nice properties that make them useful for anomaly detection:
- They're compactly supported.
- They act as band-pass filters with the pass-band determined by their support.
What this means in practical terms is that your discrete wavelet decomposition looks at local changes in the signal across a variety of scales and frequency bands. If you have (for instance) large-magnitude, high-frequency noise superimposed across a function that displays a low-magnitude shift over a longer period, the wavelet transform will efficiently separate these two scales and let you see the baseline shift that many other techniques will miss; a shift in this baseline can suggest a disease outbreak or some other change of interest. In a lot of ways, you can treat the decomposition itself as a smoother (and there's been quite a bit of work done on efficient shrinkage for wavelet coefficients in nonparametric estimation, see e.g. pretty much anything on wavelets by Donoho). Unlike pure frequency-based methods, the compact support means that they're capable of handling non-stationary data. Unlike purely time-based methods, they allow for some frequency-based filtering.
In practical terms, to detect anomalies or change points, you would apply a discrete wavelet transform (probably the variant known either as the "Maximum Overlap DWT" or "shift invariant DWT", depending on who you read) to the data, and look at the lower-frequency sets of coefficients to see if you have significant shifts in the baseline. This will show you when a long-term change is occurring underneath any day-to-day noise. Percival and Walden (see references below) derive a few tests for statistically significant coefficients that you could use to see if a shift like this is significant or not.
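As a rough illustration of that recipe (this uses the `waveslim` package; the coefficient names `d1...dJ` and `sJ` and the synthetic level-shift data are my assumptions, so treat it as a sketch to adapt rather than a recipe to copy):
```
library(waveslim)
set.seed(1)
x <- c(rnorm(256, mean = 0), rnorm(256, mean = 0.75))  # noisy series with a baseline shift
w <- modwt(x, wf = "la8", n.levels = 5)                # shift-invariant (MODWT) decomposition
smooth <- w$s5                                         # lowest-frequency "smooth" coefficients
plot(x, type = "l", col = "grey")
lines(smooth, col = "red", lwd = 2)                    # the baseline shift stands out here
which.max(abs(diff(smooth)))                           # crude guess at the change point
```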
An excellent reference work for discrete wavelets is Percival and Walden, "Wavelet Methods for Time Series Analysis". A good introductory work is "Introduction to wavelets and wavelet transforms, a primer" by Burrus, Gopinath, and Guo. If you're coming from an engineering background, then "Elements of wavelets for engineers and scientists" is a good introduction from a signal-processing point of view.
(Edited to include Robin's comments)
| null | CC BY-SA 2.5 | null | 2010-08-08T05:55:39.797 | 2010-08-08T19:22:55.600 | 2010-08-08T19:22:55.600 | 61 | 61 | null |
1409 | 2 | null | 1395 | 5 | null | The `rmeta` package in R can produce forest and funnel plots.
[http://cran.r-project.org/web/packages/rmeta/index.html](http://cran.r-project.org/web/packages/rmeta/index.html)
| null | CC BY-SA 2.5 | null | 2010-08-08T11:30:18.780 | 2010-08-08T11:30:18.780 | null | null | 183 | null |
1410 | 2 | null | 570 | 2 | null | I suppose you could do a multidimensional scaling of the correlation or covariance matrix. It's not exactly structural equation modelling, but it might highlight patterns and structure in the correlation or covariance matrix. This could then be formalised with an appropriate model.
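For instance, a minimal sketch in R (classical MDS on a dissimilarity derived from a correlation matrix; using 1 - |r| as the distance is one common but somewhat arbitrary choice):
```
R <- cor(mtcars)                      # any correlation matrix will do
d <- as.dist(1 - abs(R))              # turn correlations into dissimilarities
fit <- cmdscale(d, k = 2)             # classical (metric) MDS in two dimensions
plot(fit, type = "n", xlab = "Dim 1", ylab = "Dim 2")
text(fit, labels = rownames(fit))     # variables that plot close together are highly correlated
```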
| null | CC BY-SA 2.5 | null | 2010-08-08T14:28:10.037 | 2010-08-08T14:28:10.037 | null | null | 183 | null |
1411 | 2 | null | 3 | 15 | null | [ggobi](http://www.ggobi.org/) "is an open source visualization program for exploring high-dimensional data."
Mat Kelcey has a good [5 minute intro to ggobi](http://matpalm.com/blog/2010/06/04/5-minute-ggobi-demo/).
| null | CC BY-SA 2.5 | null | 2010-08-08T14:33:24.690 | 2010-08-08T14:33:24.690 | null | null | 183 | null |
1412 | 1 | 1414 | null | 4 | 1634 | Background: In some cognitive psychology research areas N-alternative forced choice tasks are common. The most common of these is a two alternative forced choice (2AFC). This usually takes the form of participants being given a stimulus and asked to make one of two judgement, e.g. the target stimuli is present/absent, the stimulus on the left is the same/different than the the one on the right, etc. Designs in which the experimental data is from a 2AFC but there is only one data point per subject are rare, but do exist, e.g. some eye-witness identification research. Since the dependent variable (correct/incorrect) is binary, these experiments are reasonable places to use logistic regression.
My question is this: since chance performance is 50% in a 2AFC trial, is it still reasonable to use the standard logistic link function? Specifically, the logistic function has a minimum value approaching 0% correct, but in practice participants in a 2AFC should be correct at least 50% of the time due to chance. I imagine the following case in which it may present a problem: an independent variable is assessing the difficulty of the discrimination (e.g. difficulty 1, easy - 5, hard; please note this is introduced in ordinal terms only for ease of comprehension - for the sake of this discussion consider this variable as being interval) - participants got 50% correct at 5 and 4, 75% correct at 3, 85% correct at 2, and 99% correct at 1. Would using a standard logistic link function cause us to underestimate the slope? [I think so, but please correct me if I'm wrong, see below]
Edit: Those who have answered my question so far have expressed that the way in which I set up the problem was unclear. I'm providing the sample below to help clear things up.
```
library(psyphy)
make.data <- function(zero,one)
{
return(c(rep(0,zero),rep(1,one)))
}
center <- function(x) {return(scale(x,scale=FALSE))}
logit.data <- data.frame(Score=c(make.data(50,50),make.data(50,50),make.data(25,75),make.data(15,85),make.data(1,99)), Difficulty=rep(5:1,each=100))
logit.data$Difficulty2 <- center(logit.data$Difficulty)^2
standard <- glm(Score~center(Difficulty),data=logit.data,family=binomial) #standard link function
standard.2 <- glm(Score~center(Difficulty)+Difficulty2,data=logit.data,family=binomial) #standard link function, but better with a quadratic
revised.link <- glm(Score~center(Difficulty),data=logit.data,family=binomial(mafc.logit(2)))
diffs <- 5:1   # difficulty levels, for plotting
AIC(standard)
AIC(standard.2)
AIC(revised.link)
coef(standard)
coef(standard.2)
coef(revised.link)
#plot
plot(diffs, plogis(coef(standard)[1] + coef(standard)[2]*center(diffs)),
     xlab="Difficulty", ylab="Pr(Correct)", ylim=c(0,1), col="blue", type="l")
abline(.5, 0, col="Orange")
lines(diffs, plogis(coef(standard.2)[1] + coef(standard.2)[2]*center(diffs) +
                    coef(standard.2)[3]*center(diffs)^2), col="Cyan")
# inverse of the 2AFC link, written out explicitly (0.5 + 0.5*plogis) in place of the p2afc() helper
lines(diffs, 0.5 + 0.5*plogis(coef(revised.link)[1] + coef(revised.link)[2]*center(diffs)),
      col="Green")
lines(5:1, c(.55,.60,.75,.85,.99), col="Black")
```

In the above image the orange horizontal line marks 50% correct responses. The jagged black line represents the data supplied to the estimation equation (note the values for 4 and 5 disappear behind the orange 50% marker). The blue line is the equation produced by a standard logistic link. Note that it estimates below 50% accuracy when discrimination is most difficult (5). The cyan line is the standard logistic link with a quadratic term. The green line is a non-standard link that takes into account that the data comes from a 2AFC experiment where performance is very unlikely to fall below 50%.
Note that the AIC for the model fit using the non-standard link function is better (lower) than for the standard logistic link function. Also note that the slope for the standard equation is less than the slope for the standard equation with the quadratic term (which more accurately reflects the real data). Thus, using a logistic function blindly on 2AFC data does (at least) appear to underestimate the slope.
Is there a problem with my demonstration that means that I am not seeing what I think I am seeing? If I'm correct, then what other consequences (if any) are there of using the generic logistic function with 2AFC data [presumably extensible to NAFC cases]?
| Consequences of an improper link function in N alternative forced choice procedures (e.g. 2AFC)? | CC BY-SA 2.5 | null | 2010-08-08T18:10:40.850 | 2017-12-21T02:09:02.167 | 2010-11-20T07:29:05.783 | 196 | 196 | [
"logistic",
"link-function"
] |
1413 | 1 | 291134 | null | 7 | 3084 | It seems like the current revision of lmer does not allow for custom link functions.
- If one needs to fit a logistic linear mixed-effect model with a custom link function, what options are available in R?
- If none, what options are available in other statistics/programming packages?
- Are there conceptual reasons lmer does not have custom link functions, or are the constraints purely pragmatic/programmatic?
| Mixed regression models and custom link functions in R? | CC BY-SA 2.5 | null | 2010-08-08T18:14:35.537 | 2017-07-12T10:42:27.217 | null | null | 196 | [
"r",
"regression",
"mixed-model",
"link-function"
] |
1414 | 2 | null | 1412 | 1 | null | > My question is this: since chance performance is 50% in a 2AFC trial, is it still reasonable to use the standard logistic link function?
yes.
Think of it this way: suppose you fit a logistic regression where your $y$ variable takes value 1 if subject $i$ has the flu, and 0 otherwise.
So long as neither $y_i=1$ nor $y_i=0$ is a rare event, the flu incidence (i.e. $n^{-1}\sum_{i=1}^n y_i$) is not relevant; it will be absorbed by the intercept of your model.
> but in practice participants in a 2AFC should be correct 50% of the time due to chance

If this statement is true and all your exogenous variables have been de-meaned, then you can expect your estimated constant to be close to $\mathrm{logit}(0.5)=0$, so that $\mathrm{logit}^{-1}(\hat\beta_0)\approx 0.5$.
Best,
| null | CC BY-SA 2.5 | null | 2010-08-08T18:27:45.137 | 2010-08-08T18:35:16.010 | 2010-08-08T18:35:16.010 | 603 | 603 | null |