Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
966 | 2 | null | 964 | 3 | null | This is often used for a price-to-return transformation, based on assuming continuously compounded returns. The Campbell, Lo, and MacKinlay book (Econometrics of Financial Markets, 1997) lays it out quite nicely:
Define r_t as the log of gross returns 1 + R_t:
```
r_t := log(1 + R_t)
```
which is the same as the log of the ratio of the current price to the previous price
```
log(P_t / P_{t-1})
```
which is the same as
```
p_t - p_{t-1}
```
which is what you asked about.
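A quick numerical check of this identity in R, with a few made-up prices:
```
P <- c(100, 102, 101, 105)
R <- diff(P) / head(P, -1)            # simple returns R_t
all.equal(log(1 + R), diff(log(P)))   # r_t = log(1 + R_t) = p_t - p_{t-1}
```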
| null | CC BY-SA 2.5 | null | 2010-07-29T10:22:37.673 | 2010-07-29T10:22:37.673 | null | null | 334 | null |
967 | 2 | null | 726 | 26 | null | >
I keep saying that the sexy job in the next 10 years will be statisticians. And I'm not kidding.
[Hal Varian](http://en.wikipedia.org/wiki/Hal_Varian)
| null | CC BY-SA 3.0 | null | 2010-07-29T10:50:53.227 | 2011-08-15T05:59:45.090 | 2011-08-15T05:59:45.090 | 4479 | 215 | null |
968 | 2 | null | 726 | 108 | null | >
All generalizations are false,
including this one.
Mark Twain
| null | CC BY-SA 2.5 | null | 2010-07-29T10:57:18.647 | 2010-07-29T10:57:18.647 | null | null | 1356 | null |
969 | 2 | null | 726 | 18 | null | >
An ecologist is a statistician who likes to be outside.
-- apparently a good friend of [Murray Cooper](http://r.789695.n4.nabble.com/Why-software-fails-in-scientific-research-td1573062.html#a2275423).
| null | CC BY-SA 2.5 | null | 2010-07-29T11:25:13.733 | 2010-12-03T04:04:09.337 | 2010-12-03T04:04:09.337 | 795 | 144 | null |
970 | 2 | null | 951 | 1 | null | In general there is none. A Poisson process has inter-arrival times that are exponentially distributed, which does not have heavy tails.
| null | CC BY-SA 2.5 | null | 2010-07-29T11:41:41.960 | 2010-07-29T11:41:41.960 | null | null | 247 | null |
971 | 2 | null | 961 | 13 | null | First, I think the Fisher test is being used correctly here.
Count data are better handled using log-linear models (not logit, to ensure that the fitted values are bounded below). In R you can specify `family=poisson` (which sets errors = Poisson and link = log). The log link ensures that all the fitted values are positive, while the Poisson errors take account of the fact that the data are integer and have variances that are equal to their means.
e.g. `glm(y ~ x, family = poisson)` fits the model with a log link and Poisson errors (to account for the non-normality).
In cases where there is overdispersion (if the Poisson errors assumption is appropriate, the residual deviance should be roughly equal to the residual degrees of freedom), instead of using `quasipoisson` as the error family you could fit a negative binomial model.
(This involves the function `glm.nb` from package `MASS`)
In your case you could fit and compare models using commands like the following:
```
observed <- as.vector(data)
Ts<-factor(rep(c("T1","T2","T3"),each=3))
Gs<-factor(rep(c("G1","G2","G3"),3))
model1<-glm(observed~Ts*Gs,poisson)
# or fit a model without the interaction terms
model2<-glm(observed~Ts+Gs,poisson)
#you can compare the two models using anova with a chi-squared test
anova(model1,model2,test="Chi")
summary(model1)
```
Always make sure that your minimal model contains all the nuisance variables.
As for how we can tell which groups differ from which, there are some plots that may help you. The R function `assocplot` produces an association plot indicating deviations from independence of rows and columns in a two-dimensional contingency table.
Here are the same data plotted as a mosaic plot
```
mosaicplot(data, shade = TRUE)
```
| null | CC BY-SA 3.0 | null | 2010-07-29T11:41:49.370 | 2015-01-31T08:17:04.373 | 2015-01-31T08:17:04.373 | 339 | 339 | null |
972 | 2 | null | 602 | 1 | null | There are no general heuristics; you should do a grid search, especially since the value of nu must lie between 0 and 1.
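For example, a rough grid-search sketch using the e1071 package (the data set and the grid values are illustrative assumptions, not recommendations):
```
library(e1071)
data(iris)
# cross-validate over a grid of nu (and gamma) values for a nu-SVM
tuned <- tune.svm(Species ~ ., data = iris,
                  type  = "nu-classification",
                  nu    = seq(0.1, 0.6, by = 0.1),
                  gamma = 10^(-2:1))
summary(tuned)   # cross-validated error for each (nu, gamma) combination
```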
| null | CC BY-SA 2.5 | null | 2010-07-29T11:43:54.703 | 2010-07-29T11:43:54.703 | null | null | 566 | null |
973 | 1 | 1020 | null | 37 | 19205 | What freely available data sets are there for classification with more than 1000 features (or sample points, if the data contain curves)?
There is already a community wiki about free data sets:
[Locating freely available data samples](https://stats.stackexchange.com/questions/7/locating-freely-available-data-samples/)
But here it would be nice to have a more focused list that can be used more conveniently, so I propose the following rules:
- One post per dataset
- No links to collections of datasets
- Each data set must be associated with:
  - a name (to figure out what it is about) and a link to the dataset (R datasets can be named with the package name)
  - the number of features (let's say it is p), the size of the dataset (let's say it is n), and the number of labels/classes (let's say it is k)
  - a typical error rate, either from your experience (state the algorithm used in two words) or from the literature (in that case, link the paper)
| Free data set for very high dimensional classification | CC BY-SA 2.5 | null | 2010-07-29T12:02:28.347 | 2013-07-03T01:34:41.217 | 2017-04-13T12:44:21.160 | -1 | 223 | [
"machine-learning",
"classification",
"dataset",
"large-data"
] |
975 | 2 | null | 927 | 5 | null | Another good podcast is [In our time](http://www.bbc.co.uk/radio4/features/in-our-time/) by the BBC. It's a weekly podcast (off air for the summer) that deals with topics in History, Religion and Science. I would say that about 1 in 12 podcasts deal with Mathematics and Statistics. Take a look at the podcast archive for [Science subjects](http://www.bbc.co.uk/radio4/features/in-our-time/archive/science).
| null | CC BY-SA 2.5 | null | 2010-07-29T12:44:50.820 | 2010-07-29T12:44:50.820 | null | null | 8 | null |
977 | 1 | 978 | null | 4 | 1562 | When we are monitoring movements of structures we normally install monitoring points onto the structure before we do any work which might cause movement. This gives us chance to take a few readings before we start doing the work to 'baseline' the readings.
Quite often the data are quite variable (the variations in the readings can easily be between 10 and 20% of the final movement). The measurements are also often affected by the environment in which they are taken, so one set of measurements taken on one project may not have the same accuracy as measurements on another project.
Is there any statistical method, or rule of thumb, that can be applied to say how many baseline readings need to be taken to give a certain accuracy before the first reading is taken? Are there any rules of thumb that can be applied to this situation?
| How many measurements are needed to 'baseline' a measurement? | CC BY-SA 2.5 | null | 2010-07-29T13:43:53.010 | 2010-07-30T15:37:06.190 | null | null | 210 | [
"variance",
"measurement"
] |
978 | 2 | null | 977 | 5 | null | I think you should look at [power calculations](http://en.wikipedia.org/wiki/Statistical_power). These are often used to decide the sample size of survey or clinical trial. Taken from wikipedia:
>
A priori power analysis is conducted
prior to the research study, and is
typically used to determine an
appropriate sample size to achieve
adequate power.
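As a rough illustration, base R's `power.t.test` does this kind of a priori calculation; the effect size, standard deviation and power level below are purely illustrative assumptions.
```
# how many readings are needed to detect a movement of 2 units with 80% power,
# if the reading-to-reading standard deviation is about 1.5?
power.t.test(delta = 2, sd = 1.5, sig.level = 0.05, power = 0.8)
```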
| null | CC BY-SA 2.5 | null | 2010-07-29T13:51:07.953 | 2010-07-29T13:51:07.953 | null | null | 8 | null |
980 | 1 | null | null | 13 | 38313 | I haven't studied statistics for over 10 years (and then just a basic course), so maybe my question is a bit hard to understand.
Anyway, what I want to do is reduce the number of data points in a series. The x-axis is number of milliseconds since start of measurement and the y-axis is the reading for that point.
Often there are thousands of data points, but I might only need a few hundred. So my question is: how do I accurately reduce the number of data points?
What is the process called? (So I can google it.)
Are there any preferred algorithms? (I will implement it in C#.)
Hope you got some clues. Sorry for my lack of proper terminology.
---
Edit: More details comes here:
The raw data I got is heart rate data, and in the form of number of milliseconds since last beat. Before plotting the data I calculate number of milliseconds from first sample, and the bpm (beats per minute) at each data point (60000/timesincelastbeat).
I want to visualize the data, i.e. plot it in a line graph. I want to reduce the number of points in the graph from thousands to some hundreds.
One option would be to calculate the average bpm for every second in the series, or maybe every 5 seconds or so. That would have been quite easy if I knew I would have at least one sample in each of those periods (seconds or 5-second intervals).
| How do I reduce the number of data points in a series? | CC BY-SA 2.5 | null | 2010-07-29T14:04:20.853 | 2018-11-16T16:36:41.907 | 2010-07-31T00:49:57.463 | 159 | null | [
"data-visualization"
] |
981 | 2 | null | 980 | 11 | null | You have two problems: too many points and how to smooth over the remaining points.
Thinning your sample
If you have too many observations arriving in real time, you could always use [simple random sampling](http://en.wikipedia.org/wiki/Simple_random_sample) to thin your sample. Note, for this to work well, the number of points would have to be very large.
Suppose you have N points and you only want n of them. Then generate n random numbers from a discrete uniform U(0, N-1) distribution. These would be the points you use.
If you want to do this sequentially, i.e. at each point you decide to use it or not, then just accept a point with probability p. So if you set p=0.01 you would accept (on average) 1 point in a hundred.
If your data is unevenly spread and you only want to thin dense regions of points, then just make your thinning function a bit more sophisticated. For example, instead of p, what about:
$$1-p \exp(-\lambda t)$$
where $\lambda$ is a positive number and $t$ is the time since the last observation. If the time between two points is large, i.e. large $t$, the probability of accepting a point will be one. Conversely, if two points are close together, the probability of accepting a point will be $1-p$.
You will need to experiment with values of $\lambda$ and $p$.
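Here is a minimal R sketch of that sequential thinning rule; the values of lambda and p are illustrative assumptions you would need to tune.
```
thin_times <- function(times, p = 0.9, lambda = 0.1) {
  keep <- logical(length(times))
  keep[1] <- TRUE                  # always keep the first point
  last <- times[1]
  for (i in seq_along(times)[-1]) {
    gap <- times[i] - last         # time since the last accepted point
    if (runif(1) < 1 - p * exp(-lambda * gap)) {
      keep[i] <- TRUE
      last <- times[i]
    }
  }
  times[keep]
}
```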
Smoothing
Possibly something like a simple moving average type scheme. Or you could go for something more advanced like a [kernel smoother](http://en.wikipedia.org/wiki/Kernel_smoother) (as others suggested). You will need to be careful that you don't smooth too much, since I assume that a sudden drop should be picked up very quickly in your scenario.
There should be C# libraries available for this sort of stuff.
Conclusion
Thin if necessary, then smooth.
| null | CC BY-SA 4.0 | null | 2010-07-29T14:15:09.800 | 2018-11-16T16:36:41.907 | 2018-11-16T16:36:41.907 | 78863 | 8 | null |
982 | 2 | null | 980 | 10 | null | Well, I think the word you're looking for is "sampling," but I'm not sure why you want to do it. Thousands of data points isn't very many. Or are you looking just to plot a smaller number of equally-spaced points? That's usually called "binning."
Is your goal to generate a visualization? In that case, you might want to keep the raw data, plot it as a scattergraph, then overlay some sort of central tendency (regression line, spline, whatever) to communicate whatever the takehome message ought to be.
Or is your goal to numerically summarize the results in some way? In that case, you might want to explain your problem in more detail!
| null | CC BY-SA 2.5 | null | 2010-07-29T14:18:02.383 | 2010-07-29T14:18:02.383 | null | null | 6 | null |
983 | 2 | null | 977 | 3 | null | It really depends on the amount of variance relative to the size of a measurement that you care about. If you need to be able to tell the difference between a mean of 2 and a mean of 0, and your data look like this:
```
-4.4 3.8 -2.0 -5.1 0.2 7.1 0.9 -5.4 2.8 0.5
```
Then you're going to need a lot more data! But if you only care if the average is less than or more than 10, then that much data is adequate.
@cgillespie gives the technically correct response. You need to have some idea what size of effect you care about, as well as some idea how much variance your measurements have. If the equations of power analysis are more than you can deal with, you can always use random numbers in an Excel spreadsheet! Generate columns of random numbers with a normal distribution and various means and variances, then figure out whether the confidence intervals (2 * standard deviation / sqrt(N)) around the mean of each set of numbers include differences that you might care about. Do that a bunch of times. That'll give you a good idea of how many measurements you need, and it's not too hard to explain to others.
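Here is a rough R version of that spreadsheet simulation; every number in it (effect size, standard deviation, sample size) is an illustrative assumption.
```
set.seed(123)
n.sims <- 1000; n <- 10; true.mean <- 2; true.sd <- 4
covered <- replicate(n.sims, {
  x  <- rnorm(n, mean = true.mean, sd = true.sd)
  ci <- mean(x) + c(-2, 2) * sd(x) / sqrt(n)   # rough 95% interval
  ci[1] < 0 & 0 < ci[2]                        # does it still include 0?
})
mean(covered)  # proportion of runs where you could not tell the mean apart from 0
```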
| null | CC BY-SA 2.5 | null | 2010-07-29T14:26:53.593 | 2010-07-29T14:26:53.593 | null | null | 6 | null |
984 | 2 | null | 980 | 1 | null | You're not providing enough information. Why do you want to reduce the number of data points? A few thousand is nothing these days.
Given that you want the same result each time you view the same data, perhaps you simply want to bin averages. You have variable spacing on your x-axis. Maybe you're trying to make that consistent? In that case you would set a bin width of perhaps 50 msec, or 100, and then average all the points in there. Make the bin width as large as you need to reduce the data points to the size of the set you want.
It's really a hard question to answer without a reason for why you're getting rid of data.
| null | CC BY-SA 2.5 | null | 2010-07-29T14:29:16.240 | 2010-07-29T14:29:16.240 | null | null | 601 | null |
986 | 2 | null | 961 | 4 | null | You can use multinom from nnet package for multinomial regression.
Post-hoc tests you can use linearHypothesis from car package.
You can conduct test of independence using linearHypothesis (Wald test) or anova (LR test).
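A rough sketch of the anova (LR test) route, with made-up counts purely for illustration:
```
library(nnet)
d <- data.frame(Gs = factor(rep(c("G1", "G2", "G3"), times = 3)),
                Ts = factor(rep(c("T1", "T2", "T3"), each = 3)),
                n  = c(20, 15, 12, 8, 11, 9, 14, 7, 10))   # made-up counts
full <- multinom(Gs ~ Ts, weights = n, data = d, trace = FALSE)
null <- multinom(Gs ~ 1,  weights = n, data = d, trace = FALSE)
anova(null, full)   # likelihood-ratio test of independence between Gs and Ts
```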
| null | CC BY-SA 2.5 | null | 2010-07-29T15:20:56.393 | 2010-07-29T15:20:56.393 | null | null | 419 | null |
990 | 2 | null | 726 | 7 | null | >
You may be too vague to be wrong and
that's really bad cause that's just
obscuring the issue.
Bruce Sterling
| null | CC BY-SA 2.5 | null | 2010-07-29T15:43:35.187 | 2010-07-29T15:43:35.187 | null | null | 3807 | null |
991 | 2 | null | 964 | 1 | null | Usually, you plot such a series to check the extent to which it exhibits heteroskedasticity.
Depending on the answer, you may have to model the residuals, even if you are only interested in the mean.
Off the top of my head, this article is a practical example:
[http://www.google.com/url?sa=t&source=web&cd=1&ved=0CBIQFjAA&url=http%3A%2F%2Fdss.ucsd.edu%2F~jhamilto%2FJHamilton_Engle.pdf&ei=CKJRTN-uO4XeOO_LhL4E&usg=AFQjCNEP4dL3_uRf28371SnBS4lhhnYsSw&sig2=tJRjwUetH8XTHuijcXYmbg](http://www.google.com/url?sa=t&source=web&cd=1&ved=0CBIQFjAA&url=http%3A%2F%2Fdss.ucsd.edu%2F~jhamilto%2FJHamilton_Engle.pdf&ei=CKJRTN-uO4XeOO_LhL4E&usg=AFQjCNEP4dL3_uRf28371SnBS4lhhnYsSw&sig2=tJRjwUetH8XTHuijcXYmbg)
| null | CC BY-SA 2.5 | null | 2010-07-29T15:45:39.227 | 2010-07-29T15:45:39.227 | null | null | 603 | null |
992 | 2 | null | 913 | 1 | null | Try a bivariate robust regression
(see [http://cran.r-project.org/web/packages/rrcov/vignettes/rrcov.pdf](http://cran.r-project.org/web/packages/rrcov/vignettes/rrcov.pdf) for an intro).
If your data points are all positive, you might want to try to regress log(y) on log(x).
Note that log() is not a substitute for a robust regression, but it sometimes makes the results more interpretable.
| null | CC BY-SA 2.5 | null | 2010-07-29T15:50:24.717 | 2010-07-29T15:50:24.717 | null | null | 603 | null |
993 | 2 | null | 924 | 6 | null | A (two-sided) Fisher's Exact test gives p-value = 0.092284.
```
function p = fexact(k, x, m, n)
%FEXACT Fisher's Exact test.
% Y = FEXACT(K, X, M, N) calculates the P-value for Fisher's
% Exact Test.
% K, X, M and N must be nonnegative integer vectors of the same
% length. The following must also hold:
% X <= N <= M, X <= K <= M and K + N - M <= X. Here:
% K is the number of items in the group,
% X is the number of items in the group with the feature,
% M is the total number of items,
% N is the total number of items with the feature,
if nargin < 4
help(mfilename);
return;
end
nr = length(k);
if nr ~= length(x) | nr ~= length(m) | nr ~= length(n)
help(mfilename);
return;
end
na = nan;
v = na(ones(nr, 1));
mi = max(0, k + n - m);
ma = min(k, n);
d = hygepdf(x, m, k, n) * (1 + 5.8e-11);
for i = 1:nr
y = hygepdf(mi(i):ma(i), m(i), k(i), n(i));
v(i) = sum(y(y <= d(i)));
end
p = max(min(v, 1), 0);
p(isnan(v)) = nan;
```
For your example, try `fexact(1e6, 3, 2e6, 13)`.
| null | CC BY-SA 2.5 | null | 2010-07-29T15:57:21.187 | 2010-07-29T16:28:09.103 | 2010-07-29T16:28:09.103 | 506 | 506 | null |
994 | 2 | null | 421 | 8 | null | [The Drunkard's Walk: How Randomness Rules Our Lives](http://rads.stackoverflow.com/amzn/click/0713999225) by Leonard Mlodinow is an excellent book for laypeople. Enjoyable and educational.
It might not be a textbook, but it makes you think about the world in the right way.
| null | CC BY-SA 2.5 | null | 2010-07-29T16:00:17.130 | 2010-07-29T16:00:17.130 | null | null | null | null |
995 | 2 | null | 614 | 4 | null | Collaborative Statistics is CC BY: [http://cnx.org/content/col10522/latest/](http://cnx.org/content/col10522/latest/)
| null | CC BY-SA 2.5 | null | 2010-07-29T16:02:18.907 | 2010-07-29T16:02:18.907 | null | null | null | null |
996 | 2 | null | 213 | 20 | null | You can find a pedagogical summary of the various methods available in [(1)](http://download.springer.com/static/pdf/216/chp%253A10.1007%252F978-3-642-35494-6_4.pdf?auth66=1386938034_5f6480148bf543fd3993a8b2ff1ea06a&ext=.pdf)
For some --recent-- numerical comparisons of the various methods listed there, you can check
[(2)](http://link.springer.com/article/10.1007%2Fs00362-013-0544-8#page-1) and [(3)](http://www.sciencedirect.com/science/article/pii/S0167947313002661).
There are many older (and less exhaustive) numerical comparisons, typically found in books. You will find one on pages 142-143 of (4), for example.
Note that all the methods discussed here have an open source R implementation, mainly through the [rrcov](http://cran.r-project.org/web/packages/rrcov/vignettes/rrcov.pdf)
package.
- (1) P. Rousseeuw and M. Hubert (2013) High-Breakdown Estimators of
Multivariate Location and Scatter.
- (2) M. Hubert, P. Rousseeuw, K. Vakili (2013).
Shape bias of robust covariance estimators: an empirical study.
Statistical Papers.
- (3) K. Vakili and E. Schmitt (2014). Finding multivariate outliers with FastPCS. Computational Statistics & Data Analysis.
- (4) Maronna R. A., Martin R. D. and Yohai V. J. (2006).
Robust Statistics: Theory and Methods. Wiley, New York.
| null | CC BY-SA 3.0 | null | 2010-07-29T16:13:41.440 | 2013-12-11T18:58:18.030 | 2013-12-11T18:58:18.030 | 603 | 603 | null |
997 | 2 | null | 924 | 10 | null | The huge denominators throw off one's intuition. Since the sample sizes are identical, and the proportions low, the problem can be recast: 13 events occurred, and were expected (by null hypothesis) to occur equally in both groups. In fact the split was 3 in one group and 10 in the other. How rare is that? The binomial test answers.
Enter this line into R:
```
binom.test(3, 13, 0.5, alternative = "two.sided")
```
The two-tail P value is 0.09229, identical to four digits to the results of Fisher's test.
Looked at that way, the results are not surprising. The problem is equivalent to this one: If you flipped a coin 13 times, how surprising would it be to see three or fewer, or ten or more, heads. One of those outcomes would occur 9.23% of the time.
| null | CC BY-SA 2.5 | null | 2010-07-29T16:50:11.297 | 2010-08-22T01:43:00.833 | 2010-08-22T01:43:00.833 | 25 | 25 | null |
998 | 2 | null | 924 | 0 | null | In addition to the other answers:
If you have 1,000,000 observations and your event comes up only a few times, you are likely to want to look at a lot of different events.
If you look at 100 different events, you will run into problems if you use p < 0.05 as the criterion for significance.
| null | CC BY-SA 2.5 | null | 2010-07-29T18:06:31.570 | 2010-07-29T18:06:31.570 | null | null | 3807 | null |
999 | 2 | null | 138 | 4 | null | If you are coming from a SAS or SPSS background, check out:
[http://sites.google.com/site/r4statistics/](http://sites.google.com/site/r4statistics/)
This is the companion site to the book, R for SAS and SPSS Users by Robert Muenchen and a free version of the book can be found here.
| null | CC BY-SA 2.5 | null | 2010-07-29T18:33:43.370 | 2010-07-29T18:33:43.370 | null | null | null | null |
1000 | 2 | null | 951 | 4 | null | Well, if you have a point process that you try modeling as a Poisson process, and find it has heavy tails, there are several possibilities. What are the key assumptions for a Poisson Process:
- There is a constant rate function.
- Events are memoryless, that is, P(E in (t, t+d)) is independent of t and of when other events occur.
- The waiting time until the next event is exponentially distributed (kinda what the previous two are saying).
So, how can you violate these assumptions to get heavy tails?
- Non-constant rate function. If the rate function switches between, say, two values, you'll have too many short wait times and too many long wait times, given the overall rate function. This can show itself as heavy tails.
- The waiting time is not exponentially distributed. In that case, you don't have a Poisson process; you have some other sort of point process.
Note that in the extreme case, any point process can be modeled by an NHPP (non-homogeneous Poisson process) - put a delta function at each event, and set the rate to 0 elsewhere. I think we can all agree that this is a poor model, having little predictive power. So if you are interested in an NHPP, you'll want to think a bit about whether that is the right model, or whether you are overly adjusting a model to fit your data.
| null | CC BY-SA 2.5 | null | 2010-07-29T18:39:45.200 | 2010-07-29T18:39:45.200 | null | null | 549 | null |
1001 | 1 | 3078 | null | 5 | 2970 | I have distributions from two different data sets and I would like to
measure how similar their distributions (in terms of their bin
frequencies) are. In other words, I am not interested in the correlation of
data point sequences but rather in their distributional properties with respect to similarity. Currently I can only observe a similarity by eye-balling, which is not enough. I don't want to assume causality and I don't want to predict at this point. So, I assume that correlation is the way to go.
Spearman's correlation coefficient is used to compare non-normal data, and since I don't know anything about the real underlying distribution of my data, I think it would be a safe bet. I wonder if this measure can also be used to
compare distributional data rather than the data points that are
summarized in a distribution. Here is example code in R that illustrates
what I would like to check:
```
aNorm <- rnorm(1000000)
bNorm <- rnorm(1000000)
cUni <- runif(1000000)
ha <- hist(aNorm)
hb <- hist(bNorm)
hc <- hist(cUni)
print(ha$counts)
print(hb$counts)
print(hc$counts)
# relatively similar
n <- min(c(NROW(ha$counts),NROW(hb$counts)))
cor.test(ha$counts[1:n], hb$counts[1:n], method="spearman")
# quite different
n <- min(c(NROW(ha$counts),NROW(hc$counts)))
cor.test(ha$counts[1:n], hc$counts[1:n], method="spearman")
```
Does this make sense or am I violating some assumptions of the coefficient?
Thanks,
R.
| Is Spearman's correlation coefficient usable to compare distributions? | CC BY-SA 3.0 | null | 2010-07-29T18:46:47.247 | 2022-04-17T17:42:24.263 | 2012-01-18T13:23:58.143 | null | 608 | [
"distributions",
"spearman-rho",
"paired-data"
] |
1003 | 2 | null | 614 | 13 | null | Try IPSUR, Introduction to Probability and Statistics Using R by G. Jay Kerns. It's "free, in the GNU sense of the word".
[http://ipsur.r-forge.r-project.org/book/](http://ipsur.r-forge.r-project.org/book/)
It's definitely open source - on the download page you can download the LaTeX source or the lyx source used to generate this.
| null | CC BY-SA 3.0 | null | 2010-07-29T18:59:01.417 | 2016-10-06T19:36:07.787 | 2016-10-06T19:36:07.787 | 122650 | 36 | null |
1004 | 2 | null | 1001 | 10 | null | Rather use [Kolmogorov–Smirnov test](http://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test), which is exactly what you need. R function `ks.test` implements it.
Also check [this question](https://stats.stackexchange.com/questions/411/motivation-for-kolmogorov-distance-between-distributions).
| null | CC BY-SA 2.5 | null | 2010-07-29T18:59:32.213 | 2010-07-29T19:14:52.727 | 2017-04-13T12:44:33.237 | -1 | null | null |
1005 | 2 | null | 138 | 11 | null | Try IPSUR, Introduction to Probability and Statistics Using R. It's a free book, free in the GNU sense of the word.
[http://ipsur.r-forge.r-project.org/book/index.php](http://ipsur.r-forge.r-project.org/book/index.php)
It's definitely open source - on the download page you can download the LaTeX source or the lyx source used to generate this.
| null | CC BY-SA 2.5 | null | 2010-07-29T19:01:51.677 | 2010-07-29T19:01:51.677 | null | null | 36 | null |
1006 | 2 | null | 924 | 5 | null | In this case the Poisson distribution is a good approximation for the number of cases.
There is a simple formula (the delta method) to approximate the variance of log RR.
RR = 10/3, so log RR = 1.20,
se(log RR) = sqrt(1/3 + 1/10) = 0.66, so the 95% CI is (-0.09, 2.5).
The difference is not significant at the 0.05 level using a two-sided test.
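A minimal R sketch of that delta-method interval:
```
log.rr <- log(10 / 3)                 # counts 10 and 3, equal exposure
se     <- sqrt(1 / 10 + 1 / 3)
round(log.rr + c(-1, 1) * qnorm(0.975) * se, 2)   # roughly (-0.09, 2.49)
```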
An LR-based chi-square test for the Poisson model gives p = 0.046 and the Wald test p = 0.067.
These results are similar to the Pearson chi-square test without continuity correction (chi-square with correction: p = 0.096).
Another possibility is chisq.test with the option simulate.p.value=T; in this case p = 0.092 (for 100,000 simulations).
In this case the test statistic is rather discrete, so the Fisher test can be conservative.
There is some evidence that the difference may be significant. Before drawing a final conclusion, the data collection process should be taken into account.
| null | CC BY-SA 2.5 | null | 2010-07-29T19:22:17.873 | 2010-07-29T19:22:17.873 | null | null | 419 | null |
1008 | 2 | null | 871 | 2 | null | From a theoretical point of view, the p-value is a realization of a random variable.
There is a convention in probability to use upper-case letters for random variables and lower-case letters for realizations.
In table headers we should use P (perhaps italicized); in text together with its value, p = 0.0012; and in text describing, for example, methodology, "p-value".
| null | CC BY-SA 2.5 | null | 2010-07-29T19:39:02.233 | 2010-07-29T19:39:02.233 | null | null | 419 | null |
1009 | 2 | null | 899 | 3 | null |
- For data in the IQR range you should use a truncated normal distribution (for example, the R package gamlss.tr) to estimate the parameters of this distribution.
- Another approach is using mixture models with 2 or 3 components (distributions). You can fit such models using the gamlss.mx package (distributions from the package gamlss.dist can be specified for each component of the mixture).
| null | CC BY-SA 2.5 | null | 2010-07-29T20:09:37.277 | 2010-07-29T20:09:37.277 | null | null | 419 | null |
1010 | 2 | null | 899 | 10 | null | If I understand correctly, then you can just fit a mixture of two Normals to the data. There are lots of R packages that are available to do this. This example uses the [mixtools](http://cran.r-project.org/web/packages/mixtools/index.html) package:
```
#Taken from the documentation
library(mixtools)
data(faithful)
attach(faithful)
#Fit two Normals
wait1 = normalmixEM(waiting, lambda = 0.5)
plot(wait1, density=TRUE, loglik=FALSE)
```
This gives:
[Mixture of two Normals http://img294.imageshack.us/img294/4213/kernal.jpg](http://img294.imageshack.us/img294/4213/kernal.jpg)
The package also contains more sophisticated methods - check the documentation.
| null | CC BY-SA 2.5 | null | 2010-07-29T21:20:25.090 | 2010-07-29T21:20:25.090 | null | null | 8 | null |
1011 | 2 | null | 964 | 4 | null | Essentially they're looking for the log of the fold-change from one time point to the next because intuitively, it's easier to think about log-fold changes visually than actual fold-changes.
Log-fold changes make decreases and increases simply a difference in sign, so that a log-2-fold increase is the same distance as a log-2-fold decrease (i.e. |log 2| = |log 0.5|). In addition, many systems exhibit multiplicative effects for independent events, e.g. biological systems and economic systems, where two independent x-fold increases result in an x^2-fold increase, whereas on a log scale this becomes an additive (and thus easier to see) 2*log(x) increase.
Also, `diff(log(x))` is prettier than `x[-1]/x[-length(x)]`.
| null | CC BY-SA 2.5 | null | 2010-07-29T21:21:02.530 | 2010-07-29T21:21:02.530 | null | null | 378 | null |
1012 | 1 | 1014 | null | 2 | 2658 | A colleague wants to compare models that use either a Gaussian distribution or a uniform distribution and for other reasons needs the standard deviation of these two distributions to be equal. In R I can do a simulation...
```
sd(runif(100000000))
sd(runif(100000000,min=0,max=2))
```
and see that the calculated standard deviation is likely to be ~.2887 * the range of the uniform distribution. However, I was wondering if there was an equation that could yield the exact value, and if so, what that formula was.
| What would the calculated value of the standard deviation of a uniform distribution be? | CC BY-SA 2.5 | null | 2010-07-29T21:29:57.590 | 2010-09-29T14:34:45.027 | 2010-08-12T07:19:30.733 | 196 | 196 | [
"distributions",
"uniform-distribution",
"normal-distribution"
] |
1013 | 2 | null | 1012 | 2 | null | The standard deviation of the continuous uniform distribution on the interval [0, 1] is $12^{-1/2} \approx 0.288675$. The [Wikipedia article](http://en.wikipedia.org/wiki/Uniform_distribution_%28continuous%29) lists more of its properties.
| null | CC BY-SA 2.5 | null | 2010-07-29T21:47:00.497 | 2010-09-29T14:34:45.027 | 2010-09-29T14:34:45.027 | 56 | 56 | null |
1014 | 2 | null | 1012 | 11 | null | In general, the standard deviation of a continuous uniform distribution is (max - min) / sqrt(12).
| null | CC BY-SA 2.5 | null | 2010-07-29T21:54:24.130 | 2010-07-29T21:54:24.130 | null | null | 614 | null |
1015 | 1 | 3667 | null | 8 | 992 | I am trying to calculate the reliability in an elicitation exercise by analysing some test-retest questions given to the experts. The experts elicited a series of probability distributions which were then compared with the true value (found at a later date) by computing the standardized quadratic scores. These scores are the values that I am using to calculate the reliability between the test-retest results.
Which reliability method would be appropriate here? I was looking mostly at Pearson's correlation and Cronbach's alpha (and got some negative values using both methods), but I am not sure this is the right approach.
---
UPDATE:
Background information
The data were collected from a number of students who were asked to predict their own actual exam mark in four chosen modules by giving a probability distribution of the marks. One module was then repeated at a later date (hence the test-retest exercise).
Once the exam was taken, and the real results were available, the standardized quadratic scores were computed. These scores are proper scoring rules used to compare assessed probability distributions with the observed data which might be known at a later stage.
The probability score Q is defined as:
[Quadratic score http://img717.imageshack.us/img717/9424/chart2j.png](http://img717.imageshack.us/img717/9424/chart2j.png)
where k is the total number of elicited probabilities and j is the true outcome.
My question is which reliability method would be more appropriate when it comes to assessing the reliability between the scores of the repeated modules? I calculated Pearson's correlation and Cronbach's alpha (and got some negative values using both methods), but there might be a better approach.
| Reliability in Elicitation Exercise | CC BY-SA 2.5 | null | 2010-07-29T22:03:55.887 | 2010-11-01T16:32:27.973 | 2010-10-18T15:21:53.397 | 930 | 108 | [
"psychometrics",
"reliability",
"elicitation"
] |
1016 | 1 | 1026 | null | 10 | 6877 | I've got a linear regression model with the sample and variable observations and I want to know:
- Whether a specific variable is significant enough to remain included in the model.
- Whether another variable (with observations) ought to be included in the model.
Which statistics can help me out? How can I get them most efficiently?
| Is a variable significant in a linear regression model? | CC BY-SA 2.5 | null | 2010-07-29T22:04:32.143 | 2010-09-16T23:06:05.980 | 2010-08-09T10:57:45.670 | 8 | 614 | [
"regression"
] |
1018 | 2 | null | 973 | 3 | null | [Arcene](http://archive.ics.uci.edu/ml/datasets/Arcene)
n=900
p=10000 (3k is artificially added noise)
k=2 (~balanced)
From [NIPS2003](http://www.nipsfsc.ecs.soton.ac.uk/papers/NIPS2003-Datasets.pdf).
| null | CC BY-SA 2.5 | null | 2010-07-29T22:30:13.867 | 2010-07-30T18:00:06.813 | 2010-07-30T18:00:06.813 | 190 | null | null |
1019 | 2 | null | 973 | 3 | null | [Dexter](http://archive.ics.uci.edu/ml/datasets/Dexter)
n=2600
p=20000 (10k+53 is artificial noise)
k=2 (balanced)
From [NIPS2003](http://www.nipsfsc.ecs.soton.ac.uk/papers/NIPS2003-Datasets.pdf).
| null | CC BY-SA 2.5 | null | 2010-07-29T22:32:44.880 | 2010-07-29T22:41:53.950 | 2010-07-29T22:41:53.950 | null | null | null |
1020 | 2 | null | 973 | 3 | null | [Dorothea](http://archive.ics.uci.edu/ml/datasets/Dorothea)
n=1950
p=100000 (0.1M, half is artificially added noise)
k=2 (~10x unbalanced)
From [NIPS2003](http://www.nipsfsc.ecs.soton.ac.uk/papers/NIPS2003-Datasets.pdf).
| null | CC BY-SA 2.5 | null | 2010-07-29T22:35:28.497 | 2010-07-29T22:41:33.450 | 2010-07-29T22:41:33.450 | null | null | null |
1021 | 2 | null | 973 | 3 | null | [Gisette](http://archive.ics.uci.edu/ml/datasets/Gisette)
n=13500
p=5000 (half is artificially added noise)
k=2 (balanced)
From [NIPS2003](http://www.nipsfsc.ecs.soton.ac.uk/papers/NIPS2003-Datasets.pdf).
| null | CC BY-SA 2.5 | null | 2010-07-29T22:38:21.253 | 2010-07-29T22:38:21.253 | null | null | null | null |
1023 | 1 | 1029 | null | 12 | 1417 | Introductory, advanced, and even obscure, please.
Mostly to test myself. I like to make sure I know what the heck I'm talking about :)
Thanks
| Where can I find good statistics quizzes? | CC BY-SA 2.5 | null | 2010-07-29T23:04:08.707 | 2010-08-17T20:09:56.013 | null | null | 74 | [
"teaching"
] |
1024 | 2 | null | 1016 | 3 | null | For part 1, you're looking for the [F-test](http://en.wikipedia.org/wiki/F-test#Regression_problems). Calculate your residual sum of squares from each model fit and calculate an F-statistic, which you can use to find p-values from either an F-distribution or some other null distribution that you generate yourself.
| null | CC BY-SA 2.5 | null | 2010-07-29T23:25:12.603 | 2010-07-29T23:25:12.603 | null | null | 378 | null |
1025 | 2 | null | 980 | 7 | null | Calculating averages leads to a different dataset than simply reducing the number of data points.
If one heartbeat is much faster than the surrounding beats, you will lose that signal through your smoothing process.
If you summarize 125-125-0-125-125 as 100, then the story the data tell is changed by your smoothing.
Sometimes the heart even skips beats, and I believe that is an event of interest to whoever wants to look at plotted heart rate data.
I would therefore propose that you calculate the distance between two points with a formula like `d = sqrt((time1-time2)^2 + (bpm1-bpm2)^2)`.
You set a minimum distance in your program.
Then you iterate through your data and after every point you delete all following points for which d is smaller than your minimum distance.
As the units of time and bpm aren't the same, you might want to think about how to scale the units meaningfully. To do this task right you should speak to the doctors who in the end have to interpret your graphs and ask them what information they consider to be essential.
| null | CC BY-SA 2.5 | null | 2010-07-29T23:45:46.407 | 2010-07-31T02:09:30.503 | 2010-07-31T02:09:30.503 | 3807 | 3807 | null |
1026 | 2 | null | 1016 | 27 | null | Statistical significance is not usually a good basis for determining whether a variable should be included in a model. Statistical tests were designed to test hypotheses, not select variables. I know a lot of textbooks discuss variable selection using statistical tests, but this is generally a bad approach. See Harrell's book [Regression Modelling Strategies](http://rads.stackoverflow.com/amzn/click/0387952322) for some of the reasons why. These days, variable selection based on the AIC (or something similar) is usually preferred.
| null | CC BY-SA 2.5 | null | 2010-07-30T00:00:15.037 | 2010-07-30T00:00:15.037 | null | null | 159 | null |
1027 | 2 | null | 1015 | 0 | null | You would use Cronbach's alpha if you do not know the true value, but if you do know the true value then it seems a bit pointless to use Cronbach's alpha. The use of Pearson correlation also seems a bit odd, as you do not actually have a paired set of values. I would suggest using something like the [Mean Squared Error (MSE)](http://en.wikipedia.org/wiki/Mean_squared_error). Suppose that you have N experts, that the expected estimate for expert i is given by $\hat{\theta_i}$, and that your true value is $\theta$. Then,
$MSE = \frac{\sum_i (\hat{\theta_i} - \theta)^2}{N}$
| null | CC BY-SA 2.5 | null | 2010-07-30T00:44:09.457 | 2010-07-30T00:44:09.457 | null | null | null | null |
1028 | 1 | null | null | 14 | 9844 | I am comparing two distributions with KL divergence which returns me a non-standardized number that, according to what I read about this measure, is the amount of information that is required to transform one hypothesis into the other. I have two questions:
a) Is there a way to quantify a KL divergence so that it has a more meaningful interpretation, e.g. like an effect size or an R^2? Any form of standardization?
b) In R, when using KLdiv (flexmix package) one can set the 'eps' value (default eps=1e-4), which sets all points smaller than eps to some minimum in order to provide numerical stability. I have been playing with different eps values and, for my data set, I am getting an increasingly larger KL divergence the smaller the number I pick. What is going on? I would expect that the smaller the eps, the more reliable the results should be, since they let more 'real values' become part of the statistic. No? I have to change the eps since otherwise it does not calculate the statistic but simply shows up as NA in the result table ...
| How to interpret KL divergence quantitatively? | CC BY-SA 4.0 | null | 2010-07-30T00:55:20.603 | 2022-04-29T14:52:25.830 | 2022-04-29T14:52:25.830 | 60613 | 608 | [
"distributions",
"kullback-leibler",
"information-geometry"
] |
1029 | 2 | null | 1023 | 7 | null | I wrote a post compiling links of Practice Questions for Statistics in Psychology (Undergraduate Level).
[http://jeromyanglim.blogspot.com/2009/12/practice-questions-for-statistics-in.html](http://jeromyanglim.blogspot.com/2009/12/practice-questions-for-statistics-in.html)
The questions would fall into the introductory category.
| null | CC BY-SA 2.5 | null | 2010-07-30T02:42:38.523 | 2010-07-30T02:42:38.523 | null | null | 183 | null |
1030 | 2 | null | 1028 | 7 | null | The KL(p,q) divergence between distributions p(.) and q(.) has an intuitive information theoretic interpretation which you may find useful.
Suppose we observe data x generated by some probability distribution p(.). A lower bound on the average codelength in bits required to state the data generated by p(.) is given by the entropy of p(.).
Now, since we don't know p(.) we choose another distribution, say, q(.) to encode (or describe, state) the data. The average codelength of data generated by p(.) and encoded using q(.) will necessarily be longer than if the true distribution p(.) was used for the coding. The KL divergence tells us about the inefficiencies of this alternative code. In other words, the KL divergence between p(.) and q(.) is the average number of extra bits required to encode data generated by p(.) using coding distribution q(.). The KL divergence is non-negative and equal to zero iff the actual data generating distribution is used to encode the data.
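A toy numerical illustration of this coding interpretation, with two made-up discrete distributions:
```
p <- c(0.5, 0.3, 0.2)    # true data-generating distribution
q <- c(0.4, 0.4, 0.2)    # distribution used to build the code
sum(p * log2(p / q))     # KL(p, q): average extra bits per symbol
sum(p * log2(p / p))     # zero when the true distribution is used
```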
| null | CC BY-SA 2.5 | null | 2010-07-30T03:53:08.320 | 2010-07-30T03:53:08.320 | null | null | 530 | null |
1031 | 2 | null | 1028 | 9 | null | KL has a deep meaning when you visualize a set of densities as a manifold with the Fisher metric tensor; it gives the geodesic distance between two "close" distributions. Formally:
$ds^2=2KL(p(x, \theta ),p(x,\theta + d \theta))$
The following lines explain in detail what is meant by this last mathematical formula.
Definition of the Fisher metric.
Consider a parametrized family of probability distributions $D=(f(x, \theta ))$ (given by densities in $R^n$), where $x$ is a random variable and $\theta$ is a parameter in $R^p$. You may all know that the Fisher information matrix $F=(F_{ij})$ is
$F_{ij}=E\left[\frac{\partial \log f(x,\theta)}{\partial \theta_i} \frac{\partial \log f(x,\theta)}{\partial \theta_j}\right]$
With this notation, $D$ is a Riemannian manifold and $F(\theta)$ is a Riemannian metric tensor. (The interest of this metric is given by the Cramér-Rao lower bound theorem.)
You may say ... OK mathematical abstraction but where is KL ?
It is not just mathematical abstraction: if $p=1$ you can really imagine your parametrized density as a curve (instead of a subset of a space of infinite dimension), and $F_{11}$ is connected to the curvature of that curve...
(see the seminal paper of Bradley Efron: [Defining the Curvature of a Statistical Problem (with Applications to Second Order Efficiency)](https://doi.org/10.1214/aos/1176343282))
The geometric answer to part of point a) in your question: the squared distance $ds^2$ between two (close) distributions $p(x,\theta)$ and $p(x,\theta+d\theta)$ on the manifold (think of the geodesic distance on earth between two points that are close; it is related to the curvature of the earth) is given by the quadratic form:
$ds^2= \sum F_{ij} d \theta^i d \theta^j$
and it is known to be twice the Kullback Leibler Divergence:
$ds^2=2KL(p(x, \theta ),p(x,\theta + d \theta))$
If you want to learn more about that I suggest reading the paper from Amari:
[Differential Geometry of Curved Exponential Families-Curvatures and Information Loss](https://doi.org/10.1214/aos/1176345779)
(I think there is also a book from Amari about Riemannian geometry in statistic but I don't remember the name)
| null | CC BY-SA 4.0 | null | 2010-07-30T05:29:11.857 | 2022-04-29T14:38:29.267 | 2022-04-29T14:38:29.267 | -1 | 223 | null |
1032 | 2 | null | 414 | 7 | null | You will find many applications of Mathematical Statistics in '[Mathematical Statistics and Data Analysis](https://rads.stackoverflow.com/amzn/click/com/0534399428)' by John A. Rice. The 'Application Index' lists all applications discussed in the text.
| null | CC BY-SA 4.0 | null | 2010-07-30T08:04:50.453 | 2023-02-11T10:06:28.840 | 2023-02-11T10:06:28.840 | 362671 | 531 | null |
1033 | 2 | null | 887 | 0 | null | "how is stdev(S) related to the standard deviation of the entire population?"
I wonder if the "confidence interval" concept might be what you are looking for?
Stdev(S) is an estimate of the standard deviation of the entire population. To see how good an estimate it is, confidence intervals can be computed, and these depend on the sample size.
See, e.g., Simulation and the Monte Carlo Method, Rubinstein & Kroese.
| null | CC BY-SA 2.5 | null | 2010-07-30T09:49:02.353 | 2010-07-30T09:49:02.353 | null | null | null | null |
1035 | 2 | null | 913 | 4 | null | Another solution to your problem (without transforming variables) is regression with an error distribution other than Gaussian, for example the Gamma or a skewed Student's t.
The Gamma is in the GLM family, so there is a lot of software to fit models with this error distribution.
| null | CC BY-SA 2.5 | null | 2010-07-30T09:54:10.323 | 2010-07-30T09:54:10.323 | null | null | 419 | null |
1036 | 2 | null | 125 | 32 | null | Sivia and Skilling, Data analysis: a Bayesian tutorial (2ed) 2006 246p 0198568320
[books.goo](http://books.google.com/books?id=zN-yliq6eZ4C&dq=isbn:0198568320&source=gbs_navlinks_s):
>
Statistics lectures have been a source
of much bewilderment and frustration
for generations of students. This book
attempts to remedy the situation by
expounding a logical and unified
approach to the whole subject of data
analysis. This text is intended as a
tutorial guide for senior
undergraduates and research students
in science and engineering ...
I don't know the other recommendations though.
| null | CC BY-SA 2.5 | null | 2010-07-30T10:03:37.523 | 2010-07-30T10:03:37.523 | null | null | 557 | null |
1038 | 2 | null | 652 | 1 | null |
- Wilcox, Rand R. - Basic Statistics: Understanding Conventional Methods and Modern Insights, Oxford University Press, 2009
- Hoff, Peter D. - A First Course in Bayesian Statistical Methods, Springer, 2009
- Dalgaard, Peter - Introductory Statistics with R, Second Edition, Springer, 2008
Also take a glance at [this link](https://stackoverflow.com/questions/192369/books-for-learning-the-r-language/2270793#2270793); though it's R-specific, there are plenty of books there that can guide you through basic statistical techniques.
| null | CC BY-SA 2.5 | null | 2010-07-30T12:11:18.743 | 2010-07-30T12:11:18.743 | 2017-05-23T12:39:26.150 | -1 | 1356 | null |
1039 | 2 | null | 125 | 10 | null | If you're looking for an elementary text, i.e. one that doesn't have a calculus prerequisite, there's Don Berry's [Statistics: A Bayesian Perspective](http://rads.stackoverflow.com/amzn/click/0534234720).
| null | CC BY-SA 2.5 | null | 2010-07-30T12:15:45.737 | 2010-07-30T12:15:45.737 | null | null | 319 | null |
1040 | 1 | 1041 | null | 4 | 1107 | Consider the following model
$Y_i = f(X_i) + e_i$
from which we observe $n$ iid data points $\left( X_i, Y_i \right)_{i=1}^n$. Suppose that $X_i \in \mathbb{R}^d$ is a $d$-dimensional feature vector, and suppose that an ordinary least squares estimate is fit to the data, that is,
$\hat \beta = {\rm arg} \min_{\beta \in \mathbb{R}^d} \sum_i (Y_i - \sum_j X_{ij} \beta_j)^2$
Since a wrong model is estimated, what is the interpretation for the confidence interval around estimated coefficients?
More generally, does it make sense to estimate confidence intervals around parameters in a misspecified model? And what does the confidence interval tell us in such a case?
| What is the interpretation/meaning of confidence intervals in misspecified models? | CC BY-SA 2.5 | null | 2010-07-30T14:24:17.350 | 2017-04-23T18:02:00.423 | 2017-04-23T18:02:00.423 | 11887 | 168 | [
"confidence-interval",
"estimation",
"modeling",
"model-selection",
"misspecification"
] |
1041 | 2 | null | 1040 | 3 | null | The confidence interval that you obtain is conditional on the model being correct and the interpretation is also conditional on the model being the correct one. If you know that the model is incorrect then obviously you would not use it to compute the confidence interval.
In reality, you do not know the true model and so you have no way to tell if you have a misspecified model (although you do have ways to assess misspecification, e.g., examine if residuals are normally distributed, diagnostic plots of fitted vs observed values etc). So, to my mind, the real question is if the model is misspecified, to what extent can you rely on confidence intervals as a way to assess where the true parameter is. I suspect that the answer is specific to the degree of misspecification that is coming from f(x) i.e., the degree to which f(x) departs from the assumptions of OLS.
| null | CC BY-SA 2.5 | null | 2010-07-30T14:38:39.530 | 2010-07-30T14:38:39.530 | null | null | null | null |
1043 | 2 | null | 977 | 1 | null | OK, so your data is very expensive to get. If you have some indication of the shape of the data then perhaps a bootstrap / bayesian / optimization (sticking in keywords :)) approach would work best. See the optim command in R as an example. You would need to know some things though. For example, could we assume something like normality? If so then fitting a small number of data points to the normal distribution will likely give you a much better estimate of your parameters than simple mean and sdev values.
| null | CC BY-SA 2.5 | null | 2010-07-30T15:37:06.190 | 2010-07-30T15:37:06.190 | null | null | 601 | null |
1044 | 2 | null | 73 | 6 | null | Sweave lets you embed R code in a LaTeX document. The results of executing the code, and optionally the source code, become part of the final document.
So instead of, for example, pasting an image produced by R into a LaTeX file, you can paste the R code into the file and keep everything in one place.
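A minimal Sweave sketch (purely illustrative): save it as example.Rnw, run Sweave("example.Rnw") in R, then run LaTeX on the resulting .tex file.
```
\documentclass{article}
\begin{document}
<<histogram, echo=TRUE, fig=TRUE>>=
x <- rnorm(100)
hist(x)
@
The sample mean was \Sexpr{round(mean(x), 2)}.
\end{document}
```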
| null | CC BY-SA 2.5 | null | 2010-07-30T15:43:28.517 | 2010-07-30T15:43:28.517 | null | null | 319 | null |
1045 | 1 | 1051 | null | 4 | 1215 | The wiki article on [credible intervals](http://en.wikipedia.org/wiki/Credible_interval) has the following statement:
>
credible intervals and confidence intervals treat nuisance parameters in radically different ways.
What is the radical difference that the wiki talks about?
Credible intervals are based on the posterior distribution of the parameter and confidence interval is based on the maximum likelihood associated with the data generating process. It seems to me that how credible and confidence intervals are computed is not dependent on whether the parameters are nuisance or not. So, I am a bit puzzled by this statement.
PS: I am aware of alternative approaches to dealing with nuisance parameters under frequentist inference but I think they are less common than standard maximum likelihood. (See this question on the difference between [partial, profile and marginal likelihoods](https://stats.stackexchange.com/questions/622/what-is-the-difference-between-a-partial-likelihood-profile-likelihood-and-margi).)
| Is there a radical difference in how bayesian and frequentist approaches treat nuisance parameters? | CC BY-SA 2.5 | null | 2010-07-30T16:06:54.980 | 2010-07-30T23:53:35.657 | 2017-04-13T12:44:54.643 | -1 | null | [
"confidence-interval",
"credible-interval"
] |
1046 | 2 | null | 825 | 6 | null | If you are using GNU/Linux previous answers by Shane and Dirk are great.
If you need a solution for Windows, there is one in this post:
[Parallel Multicore Processing with R (on Windows)](http://www.r-statistics.com/2010/04/parallel-multicore-processing-with-r-on-windows/)
Although the package is not yet on CRAN, it can be downloaded from that link.
| null | CC BY-SA 2.5 | null | 2010-07-30T16:58:42.397 | 2010-07-30T16:58:42.397 | null | null | 253 | null |
1047 | 1 | 1048 | null | 42 | 35196 | I'm comparing a sample and checking whether it is distributed according to some discrete distribution. However, I'm not entirely sure that Kolmogorov-Smirnov applies. [Wikipedia](http://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test) seems to imply it does not. If it does not, how can I test the sample's distribution?
| Is Kolmogorov-Smirnov test valid with discrete distributions? | CC BY-SA 2.5 | null | 2010-07-30T17:00:57.573 | 2018-08-28T00:33:33.743 | 2018-08-28T00:33:33.743 | 805 | 614 | [
"hypothesis-testing",
"discrete-data",
"kolmogorov-smirnov-test"
] |
1048 | 2 | null | 1047 | 19 | null | It does not apply to discrete distributions. See [http://www.itl.nist.gov/div898/handbook/eda/section3/eda35g.htm](http://www.itl.nist.gov/div898/handbook/eda/section3/eda35g.htm) for example.
Is there any reason you can't use a chi-square goodness of fit test?
See [http://www.itl.nist.gov/div898/handbook/eda/section3/eda35f.htm](http://www.itl.nist.gov/div898/handbook/eda/section3/eda35f.htm) for more info.
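For example, a hypothetical goodness-of-fit test against a discrete distribution (the counts and null probabilities are made up):
```
obs <- c(18, 55, 27)          # observed counts in three categories
p0  <- c(0.25, 0.50, 0.25)    # probabilities under the hypothesised distribution
chisq.test(obs, p = p0)
```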
| null | CC BY-SA 2.5 | null | 2010-07-30T17:10:09.663 | 2010-07-30T17:10:09.663 | null | null | 247 | null |
1049 | 2 | null | 138 | 8 | null | One resource is 'Some hints for the R beginner' at
[http://www.burns-stat.com/pages/Tutor/hints_R_begin.html](http://www.burns-stat.com/pages/Tutor/hints_R_begin.html)
| null | CC BY-SA 2.5 | null | 2010-07-30T17:10:24.607 | 2010-07-30T17:10:24.607 | null | null | null | null |
1050 | 2 | null | 170 | 16 | null | Here's a fresh one: [Introduction to Probability and Statistics Using R ](https://rdrr.io/cran/IPSUR/f/inst/doc/IPSUR.pdf). It's R-specific, though, but it's a great one. I haven't read it yet, but it seems fine so far...
| null | CC BY-SA 4.0 | null | 2010-07-30T20:00:45.253 | 2021-05-31T03:50:57.847 | 2021-05-31T03:50:57.847 | 287839 | 1356 | null |
1051 | 2 | null | 1045 | 5 | null | The fundamental difference is that in maximum likelihood based methods we can't integrate the nuisance parameters out (because the likelihood function is not a PDF and doesn't obey probability laws).
In maximum likelihood methods, the ideal way to deal with nuisance parameters is through marginal/conditional likelihoods, but these are defined differently from the question you linked. (There is a notion of an integrated (marginal/conditional) likelihood function as in the linked question, but this is not strictly the marginal likelihood function.)
Say you have a parameter of interest, $\theta$, a nuisance parameter, $\lambda$. Suppose a transformation of your data $X$ to $(Y, Z)$ exists such that either $Y$ or $Y|Z$ depends only on $\theta$. If $Y$ depends on $\theta$, then the joint density can be written
$f(Y, Z; \theta, \lambda) = f_{Y}(Y; \theta) f_{Z|Y}(Z|Y; \theta, \lambda)$.
In the latter case, we have
$f(Y, Z; \theta, \lambda) = f_{Y|Z}(Y|Z; \theta) f_{Z}(Z; \theta, \lambda)$.
In either case, the factor depending on $\theta$ alone is of interest. In the former, it's the basis for the definition of the marginal likelihood and in the latter, the conditional likelihood. The important point here is to isolate a component that depends on $\theta$ alone.
If we can't find such a transformation, we look at other likelihood functions to eliminate the nuisance. We usually start with a profile likelihood. To eliminate bias in the MLE, we try to obtain approximations for marginal or conditional likelihoods, usually through a "modified profile likelihood" function (yet another likelihood function!).
There are many details, but the short story is that the likelihood methods treat nuisance parameters quite differently than Bayesian methods. In particular, the estimated likelihoods don't account for uncertainty in the nuisance. Bayesian methods do account for it through the specification of a prior.
There are arguments in favor of an integrated likelihood function and lead to something resembling the Bayesian framework. If you're interested, I can dig up some references.
| null | CC BY-SA 2.5 | null | 2010-07-30T21:15:43.290 | 2010-07-30T23:53:35.657 | 2010-07-30T23:53:35.657 | 251 | 251 | null |
1052 | 1 | null | null | 15 | 12681 | My question particularly applies to network reconstruction.
| What is the major difference between correlation and mutual information? | CC BY-SA 2.5 | null | 2010-07-30T22:38:06.510 | 2010-07-31T03:04:03.400 | 2010-07-31T02:05:21.060 | null | null | [
"correlation",
"mutual-information"
] |
1053 | 1 | 1056 | null | 45 | 10662 | I am looking for a good book/tutorial to learn about survival analysis. I am also interested in references on doing survival analysis in R.
| References for survival analysis | CC BY-SA 3.0 | null | 2010-07-31T00:51:52.653 | 2022-05-12T10:46:29.413 | 2015-11-04T18:17:45.123 | 22468 | 172 | [
"r",
"survival",
"references"
] |
1054 | 1 | 1058 | null | 5 | 1763 | What is the equivalent command in R for the `stcox` command in Stata?
| R command for stcox in Stata | CC BY-SA 3.0 | null | 2010-07-31T00:59:11.343 | 2013-07-20T22:57:03.343 | 2013-07-20T22:57:03.343 | 22047 | 172 | [
"r",
"survival",
"stata"
] |
1055 | 2 | null | 1052 | 25 | null | Correlation measures the linear relationship (Pearson's correlation) or monotonic relationship (Spearman's correlation) between two variables, X and Y.
Mutual information is more general and measures the reduction of uncertainty in Y after observing X. It is the KL distance between the joint density and the product of the individual densities. So MI can measure non-monotonic relationships and other more complicated relationships.
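As a toy illustration of the difference (made-up data, base R only): a noiseless quadratic relationship has near-zero Pearson correlation but clearly positive mutual information.
```
set.seed(42)
x <- runif(5000, -1, 1)
y <- x^2
cor(x, y)                                  # close to 0
bx <- cut(x, 10); by <- cut(y, 10)         # crude plug-in MI estimate
pxy <- table(bx, by) / length(x)
px  <- rowSums(pxy); py <- colSums(pxy)
sum(pxy * log(pxy / outer(px, py)), na.rm = TRUE)   # clearly > 0
```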
| null | CC BY-SA 2.5 | null | 2010-07-31T01:08:08.763 | 2010-07-31T03:04:03.400 | 2010-07-31T03:04:03.400 | 159 | 159 | null |
1056 | 2 | null | 1053 | 21 | null | I like:
- Survival Analysis: Techniques for Censored and Truncated Data (Klein & Moeschberger)
- Modeling Survival Data: Extending the Cox Model (Therneau)
The first does a good job of straddling theory and model building issues. It's mostly focused on semi-parametric techniques, but there is reasonable coverage of parametric methods. It doesn't really provide any R or other code examples, if that's what you're after.
The second is heavy with modeling on the Cox PH side (as the title might indicate). It's by the author of the [survival](http://cran.r-project.org/web/views/Survival.html) package in R and there are plenty of R examples and mini-case studies. I think both books complement each other, but I'd recommend the first for getting started.
A quick way to get started in R is David Diez's [guide](http://www.statgrad.com/teac/surv/R_survival.pdf).
| null | CC BY-SA 3.0 | null | 2010-07-31T01:41:53.177 | 2017-01-18T22:35:34.987 | 2017-01-18T22:35:34.987 | 251 | 251 | null |
1057 | 2 | null | 1052 | 4 | null | To add to Rob's answer ... with respect to reverse engineering a network, MI may be preferred over correlation when you want to extract causal rather than associative links in your network. Correlation networks are purely associative. But for MI, you need more data and computing power.
| null | CC BY-SA 2.5 | null | 2010-07-31T02:05:44.990 | 2010-07-31T02:05:44.990 | null | null | 251 | null |
1058 | 2 | null | 1054 | 9 | null | In package [survival](http://cran.r-project.org/web/packages/survival/index.html), it's `coxph`. John Fox has a nice introduction to using coxph in R:
- Cox Proportional-Hazards Regression for Survival Data
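For a minimal sketch of the syntax (my addition; the `lung` data set ships with the survival package, and the covariates are just for illustration):

```
library(survival)
# Cox proportional-hazards model, roughly what stcox fits in Stata
fit <- coxph(Surv(time, status) ~ age + sex, data = lung)
summary(fit)    # hazard ratios are exp(coef)
```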
| null | CC BY-SA 2.5 | null | 2010-07-31T02:08:06.560 | 2010-09-19T07:23:51.393 | 2010-09-19T07:23:51.393 | 930 | 251 | null |
1059 | 2 | null | 726 | 37 | null | >
"Million to one chances crop up nine times out of ten."
-[Terry Pratchett](http://www.goodreads.com/quotes/95458-scientists-have-calculated-that-the-chances-of-something-so-patently)
| null | CC BY-SA 3.0 | null | 2010-07-31T05:49:03.187 | 2012-11-03T06:15:39.000 | 2012-11-03T06:15:39.000 | 9007 | 183 | null |
1060 | 1 | 1061 | null | 7 | 1347 | I'll use an example so that you can reproduce the results
```
# mortality
mort = ts(scan("http://www.stat.pitt.edu/stoffer/tsa2/data/cmort.dat"),start=1970, frequency=52)
# temperature
temp = ts(scan("http://www.stat.pitt.edu/stoffer/tsa2/data/temp.dat"), start=1970, frequency=52)
#pollutant particulates
part = ts(scan("http://www.stat.pitt.edu/stoffer/tsa2/data/part.dat"), start=1970, frequency=52)
temp = temp-mean(temp)
temp2 = temp^2
trend = time(mort)
```
Now, fit a model for mortality data
```
fit = lm(mort ~ trend + temp + temp2 + part, na.action=NULL)
```
What I want now is to reproduce the result of the AIC command
```
AIC(fit)
[1] 3332.282
```
According to R's help file for AIC, AIC = -2 * log.likelihood + 2 * npar.
If I'm correct I think that log.likelihood is given using the following formula:
```
n = length(mort)
RSS = anova(fit)[length(anova(fit)[,2]),2] # there must be better ways to get this, anyway
(log.likelihood <- -n/2*(log(2*pi)+log(RSS/n)+1))
[1] -1660.135
```
This is approximately equal to
```
logLik(fit)
'log Lik.' -1660.141 (df=6)
```
As far as I can tell, the number of parameters in the model is 5 (how can I get this number programmatically?). So the AIC should be given by:
```
-2 * log.likelihood + 2 * 5
[1] 3330.271
```
Oops, it seems I should have used 6 instead of 5 as the number of parameters. What is wrong with my calculations?
| Why does AIC formula in R appear to use one extra parameter than expected? | CC BY-SA 3.0 | null | 2010-07-31T09:39:47.323 | 2011-04-29T05:03:59.523 | 2011-04-29T05:03:59.523 | 183 | 339 | [
"r",
"time-series",
"modeling",
"aic"
] |
1061 | 2 | null | 1060 | 11 | null | ```
> -2*logLik(fit)+2*(length(fit$coef)+1)
[1] 3332.282
```
(You forgot one parameter: there are 6 because $\sigma_{\epsilon}$ also has to be estimated!)
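As for getting the parameter count programmatically (my addition, not part of the original answer): the `logLik` object carries it as the `"df"` attribute, which is what `AIC()` uses.

```
k <- attr(logLik(fit), "df")              # 6 here: five coefficients plus sigma
-2 * as.numeric(logLik(fit)) + 2 * k      # matches AIC(fit)
```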
| null | CC BY-SA 3.0 | null | 2010-07-31T10:04:04.443 | 2011-04-29T01:10:38.227 | 2011-04-29T01:10:38.227 | 3911 | 603 | null |
1062 | 1 | null | null | 6 | 3747 | In general inference, why are orthogonal parameters useful, and why is it worth trying to find a new parametrization that makes the parameters orthogonal?
I have seen some textbook examples, not so many, and would be interested in more concrete examples and/or motivation.
| Orthogonal parametrization | CC BY-SA 2.5 | null | 2010-07-31T14:19:13.850 | 2020-09-27T06:27:40.240 | 2020-09-27T06:27:40.240 | 7290 | 368 | [
"multivariate-analysis",
"information-geometry"
] |
1063 | 1 | 1065 | null | 19 | 92828 | My stats knowledge is self-taught, but a lot of the material I read points to a dataset having a mean of 0 and a standard deviation of 1.
If that is the case then:
- Why is mean 0 and SD 1 a nice property to have?
- Why does a random variable drawn from this sample equal 0.5? The chance of drawing 0.001 is the same as 0.5, so this should be a flat distribution...
- When people talk about Z Scores what do they actually mean here?
| Why are mean 0 and standard deviation 1 distributions always used? | CC BY-SA 3.0 | null | 2010-07-31T14:29:15.543 | 2022-03-30T16:27:40.373 | 2012-08-20T04:54:05.637 | 2116 | 353 | [
"probability"
] |
1064 | 2 | null | 726 | 80 | null | A nice one I came about:
>
I think it's much more interesting to live not knowing than to have answers which might be wrong.
By Richard Feynman ([link](https://www.youtube.com/watch?v=I1tKEvN3DF0))
| null | CC BY-SA 3.0 | null | 2010-07-31T15:36:12.410 | 2016-07-10T03:35:26.313 | 2016-07-10T03:35:26.313 | null | 253 | null |
1065 | 2 | null | 1063 | 11 | null |
- At the beginning the most useful answer is probably that mean of 0 and sd of 1 are mathematically convenient. If you can work out the probabilities for a distribution with a mean of 0 and standard deviation of 1 you can work them out for any similar distribution of scores with a very simple equation.
- I'm not following this question. The mean of 0 and standard deviation of 1 usually applies to the standard normal distribution, often called the bell curve. The most likely value is the mean and it falls off as you get farther away. If you have a truly flat distribution then there is no value more likely than another. Your question here is poorly formed. Were you looking at questions about coin flips perhaps? Look up binomial distribution and central limit theorem.
- "mean here"? Where? The simple answer for z-scores is that they are your scores scaled as if your mean were 0 and standard deviation were 1. Another way of thinking about it is that it takes an individual score as the number of standard deviations that score is from the mean. The equation is calculating the (score - mean) / standard deviation. The reasons you'd do that are quite varied but one is that in intro statistics courses you have tables of probabilities for different z-scores (see answer 1).
If you looked up z-score first, even in wikipedia, you would have gotten pretty good answers.
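To make point 3 concrete, here is a tiny R sketch (the numbers are made up):

```
x <- c(12, 15, 9, 18, 14, 11)
z <- (x - mean(x)) / sd(x)   # z-scores by hand
z
scale(x)                     # the built-in equivalent
mean(z); sd(z)               # ~0 and 1 by construction
```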
| null | CC BY-SA 3.0 | null | 2010-07-31T15:46:05.483 | 2012-08-20T04:56:16.753 | 2012-08-20T04:56:16.753 | 2116 | 601 | null |
1066 | 1 | 1068 | null | 2 | 497 | My question is actually quite short, but I'll have to start by describing the context since I am not sure how to directly ask it.
Consider the following "game":
We have a segment of length n ("large segment") and m integers ("lengths"), all considerably smaller than n. For each of the m lengths we draw a random sub-segment of its length on the large segment. For example, if the large segment is of size 1000 (i.e. 1..1000) and we are given lengths 20, 10, 50, then a possible solution would be: 31..50, 35..44, 921..970 (sub-segments of lengths 20, 10 and 50 respectively).
Notes:
1. This is just a toy example. We usually have many more lengths so there are many overlaps and each position in the large segment is covered by multiple sub-segments.
2. Remember that the lengths are given; only their mapping to the large segment is random.
3. Drawing a sub-segment of length k is done by simply drawing a number from a uniform distribution over 1..n-k (a sub-segment of size k can start at position 1, 2, ... n-k).
Now, we conduct many simulations of the process and record the data. We finally examine, for each position, the distribution of the number of sub-segments covering that position. If we look at positions that are relatively far from the edges of the large segment, the distribution at each such position is normal, and all the distributions look the same.
The "problem" is that the positions at the ends do not look normal at all. This is not surprising, since, for example, if we are now drawing a sub-segment of length 10, the only way the very first position in the large segment will be covered is if we draw 1, whereas, for example, the 10th position will be covered if we draw 1,2,3,..10.
What I am trying to figure out is what kind of distribution we see at the "edge" positions (it's not normal, but I think it usually looks like a normal distribution with its tail cut off in one direction), and also how I can approximate this distribution's density function from my simulations. For the "center" positions, I just estimate the mean and standard deviation, and since I believe the distributions there are normal, I can use the normal density function. This also makes me wonder whether I really need to treat the positions in a categorical way ("near the edges" vs. "not near the edges"), or whether they are actually the same in some sense (some generalization of the normal distribution?).
Thank you, and sorry again for the length of the post.
| Approximating density function for a non-normal distribution | CC BY-SA 2.5 | null | 2010-07-31T18:20:26.487 | 2012-05-18T12:35:06.850 | 2010-09-30T21:24:14.807 | 930 | 634 | [
"distributions",
"normality-assumption"
] |
1068 | 2 | null | 1066 | 4 | null | I'm not sure I understand your question exactly, but I assume you are looking for the probability mass at each point, where an event is defined as a subsegment covering a particular position. If this is true, I believe you should be able to work out the exact probability mass function.
For each subsegment of length K the distribution at a particular location between K and N-K+1 is uniform and proportional to K. The distribution at the tails is stepwise increasing from 1 to K. This can be seen by just working out one example.
Given multiple subsegments of different sizes, simply add these functions up. You can then normalize everything so the weights sum to one if you want a proper distribution function. Given a length and a list of subsegment lengths, this probability mass function should be easy to code up in the language of your choice.
If you are interested in the number of segments covering any point, this is simply the sum of Bernoulli random variables with different p, defined at each point using the same pmf. The simplest approach would be to enumerate the possibilities from 1 to K.
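If it helps, here is a minimal R sketch of that pmf (my addition; it assumes a sub-segment of length k starts uniformly on 1..(n-k+1), which differs by one from the 1..n-k convention used in the question):

```
# Exact probability that position i is covered by one random sub-segment of length k
cover_prob <- function(i, k, n) {
  starts <- n - k + 1                # number of possible start positions
  lo <- pmax(1, i - k + 1)           # earliest start that covers i
  hi <- pmin(i, starts)              # latest start that covers i
  pmax(hi - lo + 1, 0) / starts
}

# Expected coverage at each position for the toy example (lengths 20, 10, 50 on 1..1000);
# the stepwise ramp near the edges is the "cut-off" effect described above
n <- 1000; lengths <- c(20, 10, 50)
expected <- Reduce(`+`, lapply(lengths, function(k) cover_prob(1:n, k, n)))
head(expected, 15)
```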
| null | CC BY-SA 2.5 | null | 2010-07-31T18:50:56.847 | 2010-07-31T19:06:43.163 | 2010-07-31T19:06:43.163 | 493 | 493 | null |
1069 | 2 | null | 1062 | 7 | null | This is a good, if underspecified question.
Simply put, obtaining an [orthogonal](http://en.wikipedia.org/wiki/Orthogonal_coordinates) parametrization allows for parameters of interest to be conveniently related to other parameters, particularly in establishing needed minimizations. Whether or not this is useful depends on what you are trying to do (in the case of some physics problems, for instance, orthogonal parametrization may obscure the symmetries of interest).
In the case of statistical inference, orthogonal parametrization can allow the use of statistics by way of minimization (or its dual) on orthogonal parameters. For instance, [Cox and Reid](http://www.jstor.org/pss/2345476) use the orthogonality of nuisance parameters (and their appropriately applied maximum likelihood estimates) to construct a generalization of a likelihood ratio statistic for a parameter of interest.
To see how orthogonality allows for this requires an understanding of the properties of commonly used mathematical spaces and the construction of estimators, which is essentially an issue of [information geometry](http://en.wikipedia.org/wiki/Information_geometry). See [Information Geometry, Bayesian Inference, Ideal Estimates and Error Decomposition](http://omega.albany.edu:8008/ignorance/zhu98.pdf) for a lucid, but technical description of orthogonality and its role in statistical inference.
| null | CC BY-SA 2.5 | null | 2010-07-31T19:47:20.103 | 2010-10-12T16:05:37.587 | 2010-10-12T16:05:37.587 | 39 | 39 | null |
1070 | 2 | null | 1062 | 9 | null | In maximum likelihood, the term orthogonal parameters is used when you can achieve a clean factorization of a multi-parameter likelihood function. Say your model has two parameters, $\theta$ and $\lambda$. If you can rewrite the joint likelihood:
$L(\theta, \lambda) = L_{1}(\theta) L_{2}(\lambda)$
then we call $\theta$ and $\lambda$ orthogonal parameters. The obvious case is when you have independence, but this is not necessary for the definition as long as factorization can be achieved. Orthogonal parameters are desirable because, if $\theta$ is of interest, then you can perform inference using $L_{1}$.
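For a concrete (if trivial) illustration of my own, not from the original answer: if $X \sim \text{Poisson}(\theta)$ and $Y \sim \text{Poisson}(\lambda)$ are independent observations, then

$L(\theta, \lambda) = \frac{e^{-\theta}\theta^{X}}{X!} \cdot \frac{e^{-\lambda}\lambda^{Y}}{Y!} = L_{1}(\theta)\, L_{2}(\lambda)$,

so inference about $\theta$ can be based on $L_{1}$ alone.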
When we don't have orthogonal parameters, we try to find factorizations like
$L(\theta, \lambda) = L_{1}(\theta) L_{2}(\theta, \lambda)$
and perform inference using $L_1$. In this case, we must argue that the information loss due to excluding $L_{2}$ is low. This leads to the concept of [marginal likelihood](https://stats.stackexchange.com/questions/1045/is-there-a-radical-difference-in-how-bayesian-and-frequentist-approaches-treat-nu/1051#1051).
| null | CC BY-SA 2.5 | null | 2010-07-31T20:17:03.217 | 2010-08-01T04:31:00.753 | 2017-04-13T12:44:46.433 | -1 | 251 | null |
1072 | 2 | null | 1016 | 4 | null | I second Rob's comment. An increasingly preferred alternative is to include all your variables and shrink them towards 0. See Tibshirani, R. (1996). Regression shrinkage and selection via the lasso.
[http://www-stat.stanford.edu/~tibs/lasso/lasso.pdf](http://www-stat.stanford.edu/~tibs/lasso/lasso.pdf)
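For what it's worth, a minimal sketch of that idea with the glmnet package (my addition; the data below are simulated):

```
library(glmnet)
set.seed(1)
x <- matrix(rnorm(100 * 20), 100, 20)               # 20 candidate predictors
y <- drop(x[, 1:3] %*% c(2, -1, 0.5)) + rnorm(100)  # only 3 of them matter

cvfit <- cv.glmnet(x, y)        # penalty chosen by cross-validation
coef(cvfit, s = "lambda.min")   # most coefficients are shrunk exactly to 0
```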
| null | CC BY-SA 2.5 | null | 2010-07-31T21:22:55.277 | 2010-07-31T21:22:55.277 | null | null | 603 | null |
1073 | 2 | null | 373 | 25 | null | To answer the original question: Our intuition fails because of the narrative. By relating the story in the same order as the tv script, we get confused. It gets much easier if we think about what is going to happen in advance. The quiz-master will reveal a goat, so our best chance is to select a door with a goat and then switch. The storyline puts a lot of emphasis on the loss caused by our action in that one out of three chance that we happen to select the car.
---
The original answer:
Our aim is to eliminate both goats. We do this by marking one goat ourselves. The quizmaster is then forced to choose between revealing the car or the other goat. Revealing the car is out of the question, so the quizmaster will reveal and eliminate the one goat we did not know about. We then switch to the remaining door, thereby eliminating the goat we marked with our first choice, and get the car.
This strategy only fails if we do not mark a goat, but the car instead. But that is unlikely: there are two goats and only one car.
So we have a chance of 2 in 3 to win the car.
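For anyone who wants to check this, a quick simulation sketch (mine, not part of the original argument). Switching wins exactly when the first pick is a goat, so the two probabilities below are about 2/3 and 1/3:

```
set.seed(1)
n <- 100000
car  <- sample(1:3, n, replace = TRUE)   # where the car is
pick <- sample(1:3, n, replace = TRUE)   # our first choice
mean(pick != car)   # P(win if we switch): first pick was a goat, about 0.667
mean(pick == car)   # P(win if we stay), about 0.333
```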
| null | CC BY-SA 3.0 | null | 2010-07-31T21:33:59.133 | 2015-08-04T11:14:48.160 | 2015-08-04T11:14:48.160 | 638 | 638 | null |
1074 | 2 | null | 373 | 2 | null | The lesson? Reformulate the question, and search for a strategy instead of looking at the situation. Turn the thing on its head, work backwards...
People are generally bad at working with chance. Animals typically fare better, once they discover that either A or B gives a higher payout on average; they stick to the choice with the better average. (don't have a reference ready - sorry.)
The first thing people are tempted to do when seeing an 80/20 distribution is to spread their choices to match the pay-out: 80% on the better choice and 20% on the other.
This will result in a pay-out of only 68% (0.8 × 0.8 + 0.2 × 0.2 = 0.68), whereas always picking the better choice pays out 80%.
Again, there is a valid scenario for people to choose such a strategy: if the odds shift over time, there's a good reason for sending out a probe and trying the choice with the lower chance of success.
An important part of mathematical statistics actually studies the behaviour of processes to determine whether they are random or not.
| null | CC BY-SA 2.5 | null | 2010-07-31T22:01:41.933 | 2010-07-31T22:01:41.933 | null | null | 638 | null |
1079 | 2 | null | 146 | 7 | null | Careful... just because the PCs are by construction orthogonal to each other does not mean that there is not a pattern or that one PC can not appear to "explain" something about the other PCs.
Consider 3D data (X,Y,Z) describing a large number of points distributed evenly on the surface of an American football (it is an ellipsoid -- not a sphere -- for those who have never watched American football). Imagine that the football is in an arbitrary configuration so that neither X nor Y nor Z is along the long axis of the football.
Principal components will place PC1 along the long axis of the football, the axis that describes the most variance in the data.
For any point in the PC1 dimension along the long axis of the football, the planar slice represented by PC2 and PC3 should describe a circle, and the radius of this circular slice depends on the PC1 value. It is true that regressions of PC2 or PC3 on PC1 should give a zero coefficient globally, but not over smaller sections of the football, and it is clear that a 2D graph of PC1 and PC2 would show an "interesting" limiting boundary that is two-valued, nonlinear, and symmetric.
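Here is a small simulation sketch of that picture (mine, not part of the original answer):

```
set.seed(1)
n <- 5000
u <- matrix(rnorm(3 * n), ncol = 3)
sphere <- u / sqrt(rowSums(u^2))               # points on the unit sphere
ellipsoid <- sweep(sphere, 2, c(3, 1, 1), `*`) # stretch into a "football"
rot <- qr.Q(qr(matrix(rnorm(9), 3)))           # arbitrary rotation
xyz <- ellipsoid %*% rot

pc <- prcomp(xyz)
cor(pc$x[, 1], pc$x[, 2])   # ~0: the PCs are uncorrelated globally
plot(pc$x[, 1], pc$x[, 2])  # ...yet the elliptical boundary shows that the
                            # spread of PC2 clearly depends on PC1
```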
| null | CC BY-SA 2.5 | null | 2010-08-01T06:26:34.787 | 2010-08-01T23:22:03.657 | 2010-08-01T23:22:03.657 | 87 | 87 | null |
1080 | 2 | null | 373 | 2 | null | I think there are several things going on.
For one, the setup implies more information than the solution takes into account: that it is a game show, and that the host is asking us if we want to switch.
If you assume the host does not want the show to spend extra money (which is reasonable), then you would assume he would try to convince you to change if you had the right door.
This is a common-sense way of looking at the problem that can confuse people; however, I think the main issue is not understanding how the new choice is different from the first (which is clearer in the 100-door case).
| null | CC BY-SA 2.5 | null | 2010-08-01T06:53:28.040 | 2010-08-01T06:53:28.040 | null | null | 572 | null |
1081 | 1 | 1106 | null | 5 | 1202 | I have been reading Zuur, Ieno and Smith (2007) Analyzing ecological data, and on page 262 they try to explain how the nMDS (non-metric multidimensional scaling) algorithm works. As my background is in biology and not math or statistics per se, I'm having a hard time understanding a few points and would ask you if you could elaborate on them. I'm reproducing the entire algorithm list for clarity, and I hope I'm not breaking any laws by doing so.
- Choose a measure of association and calculate the distance matrix D.
- Specify m, the number of axes.
- Construct a starting configuration E. This can be done using PCoA.
- Regress the configuration on D: $D_{ij} = \alpha + \beta E_{ij} + \epsilon_{ij}$.
- Measure the relationship between the m-dimensional configuration and the real distances by fitting a non-parametric (monotonic) regression curve in the Shepard diagram. A monotonic regression is constrained to increase. If a parametric regression line is used, we obtain PCoA.
- The discrepancy from the fitted curve is called STRESS.
- Using non-linear optimization routines, obtain a new estimation of E and go from step 4 until convergence.
Questions:
In step 4, we regress the configuration on D. Where do we use the estimated parameters $\alpha$, $\beta$ and $\epsilon$? Are these used to measure the distance from the regression (Shepard diagram) in this new configuration?
In regard to number 7, can you talk a little about non-linear optimisation routines? My internet search came up pretty much empty in terms of a layman's explanation. I'm interested in knowing what this routine tries to achieve (in nMDS). And I guess the next question depends on knowing these routines: what represents convergence? What converges to where?
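For concreteness, here is a minimal R sketch with the vegan package (the package and its example data are my choice for illustration, not something from the book), showing the stress value and the Shepard diagram that steps 5 and 6 refer to:

```
library(vegan)
data(varespec)                                    # example community data
ord <- metaMDS(varespec, distance = "bray", k = 2)
ord$stress                                        # the STRESS of step 6
stressplot(ord)                                   # Shepard diagram with the monotone fit
```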
Can someone add "nmds" tag? I can't create new tags yet...
| Help me understand nMDS algorithm | CC BY-SA 2.5 | null | 2010-08-01T07:16:03.853 | 2010-09-30T21:23:34.420 | 2010-09-30T21:23:34.420 | 930 | 144 | [
"nonparametric",
"multidimensional-scaling"
] |
1082 | 1 | null | null | 5 | 1877 | I am trying to assess the significance of the obtained MI matrix. The initial input was an array of 3000 genes by 45 timepoints. MI was computed, resulting in an array of 3600 by 3600. I am thus comparing my results to a shuffled matrix with the same dimensions. I permute the columns 100 times, and thus have 100 results for each element in the matrix. At this stage, shall I take the mean for each cell and then the overall mean of the matrix MI values to estimate the threshold cutoff? Is taking the mean plus 3 SD sensible? Ideally, a comparison of the probability density functions between my model and the random one should show a large discrepancy.
| How to define the significance threshold for mutual information in terms of probability of that value occurring in surrogate set? | CC BY-SA 2.5 | null | 2010-08-01T11:48:48.670 | 2010-08-01T18:41:27.523 | null | null | null | [
"correlation"
] |
1083 | 2 | null | 95 | 3 | null | Generally, by not allowing for asymmetry, you expect the effect of shocks to last longer: i.e. the half-life increases (the half-life is the number of time units it takes, after a 1 S.D. shock to $\epsilon_{t-1}$, for $\hat{\sigma}_{t}|I_{t-1}$ to come back to its unconditional value).
Here is a code snippet that downloads stock data, fits (e)GARCH models and computes half-lives, in R:
```
install.packages("rgarch",repos="http://R-Forge.R-project.org")
install.packages("fGarch")
install.packages("fImport")
library(rgarch)
library(fImport)
library(fGarch)
d1<-yahooSeries(symbols="ibm",nDaysBack=1000,frequency=c("daily"))[,4]
dprice1<-diff(log(as.numeric(d1[length(d1):1])))
spec1<-ugarchspec(variance.model=list(model="eGARCH",garchOrder=c(1,1)),mean.model=list(armaOrder=c(0,0),include.mean=T))
spec2<-ugarchspec(variance.model=list(model="fGARCH",submodel="GARCH",garchOrder=c(1,1)),mean.model=list(armaOrder=c(0,0),include.mean=T))
fit1<-ugarchfit(data=dprice1,spec=spec1)
fit2<-ugarchfit(data=dprice1,spec=spec2)
halflife(fit1)
halflife(fit2)
```
The reason for this is that generally speaking, negative spells tend to be more persistent.
If you don't control for this, you will generally bias the $\beta$ (i.e. the persistence parameters) downwards.
| null | CC BY-SA 2.5 | null | 2010-08-01T11:56:51.260 | 2010-08-01T13:38:50.213 | 2010-08-01T13:38:50.213 | 603 | 603 | null |
1084 | 1 | 1086 | null | 4 | 1806 | I'm trying to visualize a set of data that represents human body mass over time, taken from (usually) daily weighings.
Because body mass tends to fluctuate +/- 3 pounds based on hydration I would like to draw a strongly smoothed line graph to minimize the fluctuation.
Any help on what the equation would look like is much appreciated, or even just some names/links to point me in the right direction.
EDIT:
I need to code the visualization in Javascript, so I need to understand the math involved rather than use a library that will do it for me.
| Equation to calculate a smooth line given an irregular time series? | CC BY-SA 2.5 | null | 2010-08-01T12:08:02.313 | 2010-08-13T11:29:34.633 | 2010-08-13T11:29:34.633 | 159 | 642 | [
"data-visualization",
"smoothing"
] |
1086 | 2 | null | 1084 | 6 | null | There are several methods of estimating a smooth trend line. Some of the most popular are:
- Moving average smoother
- Local linear regression (loess being a popular robust implementation)
- Smoothing splines
- Regression splines (of various flavors).
My preference is to use penalized regression splines which you can fit using the [mgcv](http://cran.r-project.org/web/packages/mgcv/) package in R.
Update: since you want to code it yourself, you might find it simplest to start with a moving average smoother (for each day, just take an average of the observations from the k days on either side, with k selected to give the appropriate level of smoothness); a sketch follows below.
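A minimal sketch of such a smoother (in R; the same logic ports directly to Javascript, and the weights below are made up):

```
# Centered moving average with a window that shrinks at the edges
moving_average <- function(x, k = 3) {
  n <- length(x)
  sapply(seq_len(n), function(i) {
    window <- max(1, i - k):min(n, i + k)
    mean(x[window], na.rm = TRUE)   # na.rm skips missed weigh-ins
  })
}

weights <- c(180.2, 181.0, 179.5, 182.1, 180.8, 179.9, 181.3)
moving_average(weights, k = 2)
```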
| null | CC BY-SA 2.5 | null | 2010-08-01T12:29:28.977 | 2010-08-01T12:48:13.933 | 2010-08-01T12:48:13.933 | 159 | 159 | null |