Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1624 | 1 | 1633 | null | 4 | 760 | How are the values in a z-table computed? Is it possible to compute confidence without looking up a z-table?
| Based on z-score, is it possible to compute confidence without looking at a z-table? | CC BY-SA 2.5 | null | 2010-08-13T00:48:15.973 | 2022-11-26T12:32:37.333 | 2010-08-13T08:35:39.487 | 8 | 850 | [
"confidence-interval",
"z-statistic"
]
|
1625 | 2 | null | 1557 | 7 | null | See also [Jennrich, RJ, Oman, SD "How much does Stein estimation help in multiple linear regression?" Technometrics, 28, 113-121, 1986.](http://www.jstor.org/pss/1270447)
| null | CC BY-SA 2.5 | null | 2010-08-13T01:08:39.567 | 2010-08-13T01:26:17.920 | 2010-08-13T01:26:17.920 | 159 | null | null |
1627 | 2 | null | 396 | 6 | null | Take a look at the R graphics library ggplot2. Details are at the web page [http://had.co.nz/ggplot2/](http://had.co.nz/ggplot2/). This package generates very good default plots that follow Tufte's principles, Cleveland's guidelines, and Ihaka's color package.
| null | CC BY-SA 2.5 | null | 2010-08-13T01:30:32.060 | 2010-08-13T01:30:32.060 | null | null | null | null |
1628 | 2 | null | 1610 | 7 | null | Based on the principle of [Occam's razor](http://en.wikipedia.org/wiki/Occam%27s_razor), Type I errors (rejecting the null hypothesis when it is true) are "arguably" worse than Type II errors (not rejecting the null hypothesis when it is false).
If you believe such an argument:
- Type I errors are of primary concern
- Type II errors are of secondary concern
Note: I'm not endorsing this value judgement, but it does help me remember Type I from Type II.
| null | CC BY-SA 2.5 | null | 2010-08-13T01:38:42.727 | 2010-08-13T01:48:32.673 | 2010-08-13T01:48:32.673 | 183 | 183 | null |
1629 | 2 | null | 1601 | 11 | null | The first step should be to ask why your variables are non-normally distributed. This can be illuminating.
Common findings from my experience:
- Ability tests (e.g., exams, intelligence tests, admission tests) tend to be negatively skewed when there are ceiling effects and positively skewed when there are floor effects. Both findings suggest that the difficulty level of the test is not optimised for the sample, either being too easy or too difficult to optimally differentiate ability. It also implies that the latent variable of interest could still be normally distributed, but that the structure of the test is inducing a skew in the measured variable.
- Ability tests often have outliers in terms of low scorers. In short there are many ways to do poorly on a test. In particular this can sometimes be seen on exams where there are a small percentage of students where some combination of lack of aptitude and lack of effort have combined to create very low test scores. This implies that the latent variable of interest probably has a few outliers.
- In relation to self-report tests (e.g., personality, attitude tests, etc.) skew often occurs when the sample is inherently high on the scale (e.g., distributions of life satisfaction are negatively skewed because most people are satisfied) or when the scale has been optimised for a sample different to the one the test is being applied to (e.g., applying a clinical measure of depression to a non-clinical sample).
This first step may suggest design modifications to the test. If you are aware of these issues ahead of time, you can even design your test to avoid them, if you see them as problematic.
The second step is to decide what to do in the situation where you have non-normal data.
Note transformations are but one possible strategy.
I'd reiterate the general advice from a [previous answer regarding non-normality](https://stats.stackexchange.com/questions/1386/a-robust-t-test-for-the-mean/1391#1391):
- Many procedures that assume normality of residuals are robust to modest violations of normality of residuals
- Bootstrapping is generally a good strategy
- Transformations are another good strategy. Note that from my experience the kinds of mild skew that commonly occur with ability and self-report psychological tests can usually be fairly readily transformed to a distribution approximating normality using a log, sqrt, or inverse transformation (or the reversed equivalent).
| null | CC BY-SA 2.5 | null | 2010-08-13T02:18:09.137 | 2010-08-13T02:18:09.137 | 2017-04-13T12:44:53.777 | -1 | 183 | null |
1630 | 2 | null | 1444 | 105 | null | No-one mentioned the inverse hyperbolic sine transformation. So for completeness I'm adding it here.
This is an alternative to the Box-Cox transformations and is defined by
\begin{equation}
f(y,\theta) = \text{sinh}^{-1}(\theta y)/\theta = \log[\theta y + (\theta^2y^2+1)^{1/2}]/\theta,
\end{equation}
where $\theta>0$. For any value of $\theta$, zero maps to zero. There is also a two parameter version allowing a shift, just as with the two-parameter BC transformation. [Burbidge, Magee and Robb (1988)](http://www.jstor.org/pss/2288929) discuss the IHS transformation including estimation of $\theta$.
The IHS transformation works with data defined on the whole real line including negative values and zeros. For large values of $y$ it behaves like a log transformation, regardless of the value of $\theta$ (except 0). The limiting case as $\theta\rightarrow0$ gives $f(y,\theta)\rightarrow y$.
It looks to me like the IHS transformation should be a lot better known than it is.
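A minimal R sketch of the one-parameter version, using base R's `asinh` (the choice of $\theta$ below is arbitrary and only for illustration):
```
ihs <- function(y, theta = 1) {
  # f(y, theta) = asinh(theta * y) / theta; theta must be > 0
  asinh(theta * y) / theta
}
ihs(c(-100, -1, 0, 1, 100))   # defined for negatives and zero
log(100); ihs(100)            # log-like in the tail: asinh(y) ~ log(2*y) for large y
```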
| null | CC BY-SA 2.5 | null | 2010-08-13T04:27:15.563 | 2010-08-13T04:27:15.563 | null | null | 159 | null |
1631 | 2 | null | 1624 | 3 | null | A z-table gives you values of the cumulative distribution function for the standard (i.e., mean = 0, standard deviation = 1) normal distribution. The integral needs to be estimated numerically. See [here for a discussion](http://en.wikipedia.org/wiki/Normal_distribution#Cumulative_distribution_function).
Of course, to answer your question literally, you could use some lookup system other than a table, such as the `pnorm` [function](http://stat.ethz.ch/R-manual/R-devel/library/stats/html/Normal.html) in R.
Example:
```
> pnorm(2)
[1] 0.9772499
```
Help for `pnorm` also provides the following reference on how to calculate the cdf of a normal distribution:
- Cody, W. D. (1993) Algorithm 715: SPECFUN – A portable FORTRAN package of special function routines and test drivers. ACM Transactions on Mathematical Software 19, 22–32.
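As a quick illustration of the numerical estimation mentioned above, you can integrate the standard normal density directly in R and compare with `pnorm`:
```
# Numerically integrate the standard normal PDF up to z = 2
integrate(dnorm, lower = -Inf, upper = 2)$value   # ~0.9772499
pnorm(2)                                          # library routine gives the same value
```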
| null | CC BY-SA 2.5 | null | 2010-08-13T05:17:37.523 | 2010-08-13T08:22:26.757 | 2010-08-13T08:22:26.757 | 183 | 183 | null |
1632 | 2 | null | 1595 | 322 | null | It's hard to ignore the wealth of statistical packages available in R/CRAN. That said, I spend a lot of time in Python land and would never dissuade anyone from having as much fun as I do. :) Here are some libraries/links you might find useful for statistical work.
- NumPy/Scipy You probably know about these already. But let me point out the Cookbook where you can read about many statistical facilities already available and the Example List which is a great reference for functions (including data manipulation and other operations). Another handy reference is John Cook's Distributions in Scipy.
- pandas This is a really nice library for working with statistical data -- tabular data, time series, panel data. Includes many builtin functions for data summaries, grouping/aggregation, pivoting. Also has a statistics/econometrics library.
- larry Labeled array that plays nice with NumPy. Provides statistical functions not present in NumPy and good for data manipulation.
- python-statlib A fairly recent effort which combined a number of scattered statistics libraries. Useful for basic and descriptive statistics if you're not using NumPy or pandas.
- statsmodels Statistical modeling: Linear models, GLMs, among others.
- scikits Statistical and scientific computing packages -- notably smoothing, optimization and machine learning.
- PyMC For your Bayesian/MCMC/hierarchical modeling needs. Highly recommended.
- PyMix Mixture models.
- Biopython Useful for loading your biological data into python, and provides some rudimentary statistical/ machine learning tools for analysis.
If speed becomes a problem, consider [Theano](http://deeplearning.net/software/theano/) -- used with good success by the deep learning people.
There's plenty of other stuff out there, but this is what I find the most useful along the lines you mentioned.
| null | CC BY-SA 3.0 | null | 2010-08-13T05:30:09.033 | 2016-12-08T17:34:43.973 | 2016-12-08T17:34:43.973 | 128762 | 251 | null |
1633 | 2 | null | 1624 | 3 | null | The simplest answer is that one either uses numerical quadrature techniques (Simpson's, for instance) on the PDF, or approximations such as those found in [Abramowitz and Stegun](https://personal.math.ubc.ca/%7Ecbm/aands/page_932.htm). Personally, however, since one merely needs a few digits of accuracy for the "error function" that is involved in computing z-scores, you might be interested in the simple approximations given [here](https://web.archive.org/web/20110719110656/http://homepages.physik.uni-muenchen.de/%7EWinitzki/erf-approx.pdf) by S. Winitzki.
| null | CC BY-SA 4.0 | null | 2010-08-13T05:43:29.447 | 2022-11-26T12:32:37.333 | 2022-11-26T12:32:37.333 | 362671 | 830 | null |
1634 | 2 | null | 1562 | 4 | null | There are two definitions of statistical independence:
1) $P(A,B)=P(A)P(B)$ $\iff$ 2) $P(A|B)=P(A)$ $\iff$ 2a) $P(B|A)=P(B)$.
($\iff$ means if and only if)
So to answer your question: both are valid.
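For completeness, here is a one-line derivation (assuming $P(B)>0$) of why 1) implies 2); the other implications follow the same way:
\begin{equation}
P(A|B) = \frac{P(A,B)}{P(B)} = \frac{P(A)P(B)}{P(B)} = P(A).
\end{equation}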
The Pearson chi-square test of independence is motivated by definition 1), while logistic regression and multinomial regression are motivated by definition 2).
| null | CC BY-SA 2.5 | null | 2010-08-13T05:51:00.667 | 2010-08-13T05:51:00.667 | null | null | 419 | null |
1635 | 2 | null | 1622 | 3 | null | I'm probably missing something important, but why does the fact that your observed variable is a Sharpe ratio change the statistic you would use to test the difference in Sharpe ratios? I understand that they are already distributed like 2 independent non-central t statistics, but what forces you to treat them that way?
Presumably the central limit theorem would hold even for Sharpe ratios and as such you should be able to apply a parametric test of mean differences, e.g. independent-samples Z.
More importantly, if your data is financial data wouldn't it be better to treat these as paired samples paired by the times at which they were observed?
| null | CC BY-SA 2.5 | null | 2010-08-13T05:51:02.383 | 2010-08-13T05:51:02.383 | null | null | 196 | null |
1636 | 2 | null | 1380 | 3 | null | I may be a little unclear about the question. But here would be my solution computing some "statistic on each of the tables" and comparing those values.
If your contingency tables are like a binomial effect size display (BESD), with clear YES/NO predictions being provided by each of your K methods you'll have a number of tables like this...
                 Reality
                  +    -
        Pred +   70   30   100
        Pred -   30   70   100
                100  100   200
I believe you can find the difference between the success rates, e.g. 70/100 – 30/100 = 40/100 = .40; this value can be considered equivalent to an effect size r for each of your K methods. As a proof of concept I've included equivalent R code...
```
x <- rep(c(1,0), each=100)                          # predictions: 100 "yes" then 100 "no"
y <- c(rep(1,70), rep(0,30), rep(1,30), rep(0,70))  # outcomes matching the table above
cor(x,y)                                            # phi coefficient = .40
```
You can then compare them using Fisher's Z' transformation for r in the standard way, e.g. [here](http://www.fon.hum.uva.nl/Service/Statistics/Two_Correlations.html). To deal with situations where K is greater than 2 one may want to apply some familywise error correction to the Z' tests, but the exact one selected I leave open for another debate. P.S. I might be remembering incorrectly, but I think you can find more details in Essentials of Behavioral Research: Methods and Data Analysis by Rosenthal & Rosnow, 2007, Ch 11
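A minimal sketch of that comparison in R (the second correlation and both sample sizes below are made up for illustration):
```
r1 <- 0.40; n1 <- 200   # method 1, e.g. from the table above
r2 <- 0.25; n2 <- 200   # method 2 (hypothetical)
z1 <- atanh(r1)         # Fisher's Z' transformation
z2 <- atanh(r2)
z  <- (z1 - z2) / sqrt(1/(n1 - 3) + 1/(n2 - 3))   # approximately N(0,1) under H0
2 * pnorm(-abs(z))      # two-sided p-value
```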
| null | CC BY-SA 2.5 | null | 2010-08-13T06:09:24.530 | 2010-08-13T06:24:50.753 | 2010-08-13T06:24:50.753 | 196 | 196 | null |
1637 | 1 | 1639 | null | 55 | 71852 | I'm sure I've got this completely wrapped round my head, but I just can't figure it out.
The t-test compares two normal distributions using the Z distribution. That's why there's an assumption of normality in the DATA.
ANOVA is equivalent to linear regression with dummy variables, and uses sums of squares, just like OLS. That's why there's an assumption of normality of RESIDUALS.
It's taken me several years, but I think I've finally grasped those basic facts. So why is it that the t-test is equivalent to ANOVA with two groups? How can they be equivalent if they don't even assume the same things about the data?
| If the t-test and the ANOVA for two groups are equivalent, why aren't their assumptions equivalent? | CC BY-SA 2.5 | null | 2010-08-13T09:41:13.160 | 2013-01-08T15:26:45.957 | 2010-08-13T10:00:01.717 | 8 | 199 | [
"distributions",
"regression",
"normality-assumption",
"t-test",
"anova"
]
|
1638 | 2 | null | 1610 | 6 | null | Hurrah, a question non-technical enough that I can answer it!
"Type one is a con" [rhyming]- i.e. fools you into thinking that a difference exists when it doesn't. Always works for me.
| null | CC BY-SA 2.5 | null | 2010-08-13T09:50:51.067 | 2010-08-13T09:50:51.067 | null | null | 199 | null |
1639 | 2 | null | 1637 | 37 | null | The t-test with two groups assumes that each group is normally distributed with the same variance (although the means may differ under the alternative hypothesis). That is equivalent to a regression with a dummy variable as the regression allows the mean of each group to differ but not the variance. Hence the residuals (equal to the data with the group means subtracted) have the same distribution --- that is, they are normally distributed with zero mean.
A t-test with unequal variances is not equivalent to a one-way ANOVA.
| null | CC BY-SA 2.5 | null | 2010-08-13T09:52:44.700 | 2010-08-13T09:52:44.700 | null | null | 159 | null |
1641 | 2 | null | 672 | 8 | null | Bayes' theorem is a way to rotate a conditional probability $P(A|B)$ to another conditional probability $P(B|A)$.
A stumbling block for some is the meaning of $P(B|A)$. This is a way to reduce the space of possible events by considering only those events where $A$ definitely happens (or is true). So, for instance, the probability that a thrown fair die lands showing six, $P(\mbox{dice lands six})$, is 1/6; however, the probability that a die lands six given that it landed on an even number, $P(\mbox{dice lands six}|\mbox{dice lands even})$, is 1/3.
You can derive Bayes' theorem yourself as follows. Start with the ratio definition of a conditional probability:
$P(B|A) = \frac{P(AB)}{P(A)}$
where $P(AB)$ is the joint probability of $A$ and $B$ and $P(A)$ is the marginal probability of $A$.
Currently the formula makes no reference to $P(A|B)$, so let's write down the definition of this too:
$P(A|B) = \frac{P(BA)}{P(B)}$
The little trick for making this work is seeing that $P(AB) = P(BA)$ (since a Boolean algebra is underneath all of this, you can easily prove this with a truth table by showing $AB = BA$), so we can write:
$P(A|B) = \frac{P(AB)}{P(B)}$
Now to slot this into the formula for $P(B|A)$, just rewrite the formula above so $P(AB)$ is on the left:
$P(AB) = P(A|B)P(B)$
and hey presto:
$P(B|A) = \frac{P(A|B)P(B)}{P(A)}$
As for what the point is to rotating a conditional probability in this way, consider the common example of trying to infer the probability that someone has a disease given that they have a symptom, i.e., we know that they have a symptom - we can just see it - but we cannot be certain whether they have a disease and have to infer it. I'll start with the formula and work back.
$P(\mbox{disease}|\mbox{symptom}) = \frac{P(\mbox{symptom}|\mbox{disease})P(\mbox{disease})}{P(\mbox{symptom})}$
So to work it out, you need to know the prior probability of the symptom, the prior probability of the disease (i.e., how common or rare are the symptom and disease) and also the probability that someone has a symptom given we know someone has a disease (e.g., via expensive time consuming lab tests).
It can get a lot more complicated than this, e.g., if you have multiple diseases and symptoms, but the idea is the same. Even more generally, Bayes' theorem often makes an appearance if you have a probability theory of relationships between causes (e.g., diseases) and effects (e.g., symptoms) and you need to reason backwards (e.g., you see some symptoms from which you want to infer the underlying disease).
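To make the disease/symptom example concrete, here is a small numeric sketch in R (all three input probabilities are invented purely for illustration):
```
p_disease <- 0.01                 # prior probability of the disease
p_symptom_given_disease <- 0.90   # P(symptom | disease)
p_symptom <- 0.08                 # marginal probability of the symptom
p_symptom_given_disease * p_disease / p_symptom   # P(disease | symptom) = 0.1125
```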
| null | CC BY-SA 2.5 | null | 2010-08-13T11:42:08.157 | 2010-08-13T11:42:08.157 | null | null | 702 | null |
1642 | 2 | null | 1610 | 10 | null | You could reject the idea entirely.
Some authors (Andrew Gelman is one) are shifting to discussing Type S (sign) and Type M (magnitude) errors. You can infer the wrong effect direction (e.g., you believe the treatment group does better when it actually does worse) or the wrong magnitude (e.g., you find a massive effect where there is only a tiny or essentially no effect, or vice versa).
See more at [Gelman's blog](http://www.stat.columbia.edu/~cook/movabletype/archives/2004/12/type_1_type_2_t.html).
| null | CC BY-SA 2.5 | null | 2010-08-13T12:22:25.313 | 2010-08-13T12:22:25.313 | null | null | 702 | null |
1643 | 2 | null | 1637 | 19 | null | I totally agree with Rob's answer, but let me put it another way (using wikipedia):
[Assumptions ANOVA](http://en.wikipedia.org/wiki/Analysis_of_variance#Assumptions_of_ANOVA):
- Independence of cases – this is an assumption of the model that simplifies the statistical analysis.
- Normality – the distributions of the residuals are normal.
- Equality (or "homogeneity") of variances, called homoscedasticity
[Assumptions t-test](http://en.wikipedia.org/wiki/Student%27s_t-test#Assumptions):
- Each of the two populations being compared should follow a normal distribution ...
- ... the two populations being compared should have the same variance ...
- The data used to carry out the test should be sampled independently from the two populations being compared.
Hence, I would dispute the premise of the question, as the two tests obviously have the same assumptions (although listed in a different order :-) ).
| null | CC BY-SA 2.5 | null | 2010-08-13T12:24:11.597 | 2010-08-13T12:24:11.597 | null | null | 442 | null |
1644 | 2 | null | 1611 | 22 | null | I don't think it matters very much, as long as the interpretation of the results is performed within the same framework as the analysis. The main problem with frequentist statistics is that there is a natural tendency to treat the p-value of a frequentist significance test as if it was a Bayesian a-posteriori probability that the null hypothesis is true (and hence 1-p is the probability that the alternative hypothesis is true), or treating a frequentist confidence interval as a Bayesian credible interval (and hence assuming there is a 95% probability that the true value lies within a 95% confidence interval for the particular sample of data we have). These sorts of interpretation are natural as it would be the direct answer to the question we would naturally want to ask. It is a trade-off between whether the subjective element of the Bayesian approach (which is itself debatable, see e.g. Jaynes book) is sufficiently abhorrent that it is worth making do with an indirect answer to the key question (and vice versa).
As long as the form of the answer is acceptable, and we can agree on the assumptions made, then there is no reason to prefer one over the other - it is a matter of horses for courses.
I'm still a Bayesian though ;o)
| null | CC BY-SA 2.5 | null | 2010-08-13T12:31:09.510 | 2010-08-13T12:31:09.510 | null | null | 887 | null |
1645 | 1 | 1648 | null | 24 | 18495 | So far, I've been using the Shapiro-Wilk statistic in order to test normality assumptions in small samples.
Could you please recommend another technique?
| Appropriate normality tests for small samples | CC BY-SA 3.0 | null | 2010-08-13T12:42:30.220 | 2021-02-06T14:56:07.777 | 2015-02-22T12:35:34.830 | 22047 | 1356 | [
"hypothesis-testing",
"goodness-of-fit",
"normality-assumption",
"small-sample"
]
|
1646 | 1 | 2524 | null | 8 | 2967 | Imagine that:
- You have a sample of 1000 teams each with 10 members.
- You measured team functioning by asking each team member how well they think their team is functioning using a reliable multi-item numeric scale.
- You want to describe the extent to which the measure of team effectiveness is a property of the team member's idiosyncratic belief or a property of a shared belief about the team.
In this and related situations (e.g., aggregating to organisations), many researchers report the intraclass correlation (e.g., Table 1 in [Campion & Medsker, 1993](http://www.krannert.purdue.edu/faculty/campionm/Relations_Between_Work.pdf)).
Thus, my questions are:
- What descriptive labels would you attach to different values of the intra-class correlation? I.e., the aim is to actually relate the values of the intra-class correlation to qualitative language such as: "When the intraclass correlation is greater than x, it suggests that the attitudes are modestly/moderately/strongly shared across team members."
- Do you think the intraclass correlation is the appropriate statistic or would you use a different strategy?
| Intraclass correlation and aggregation | CC BY-SA 2.5 | null | 2010-08-13T12:44:23.913 | 2014-07-09T12:10:01.610 | 2010-08-13T12:50:59.220 | 183 | 183 | [
"correlation",
"intraclass-correlation",
"aggregation",
"interpretation",
"effect-size"
]
|
1647 | 2 | null | 1645 | 11 | null | There is a whole [Wikipedia category on normality tests](http://en.wikipedia.org/wiki/Category%3aNormality_tests) including:
- the Anderson-Darling test, popular amongst statisticians; and
- the Jarque-Bera test, popular amongst econometricians.
I think A-D is probably the best of them.
| null | CC BY-SA 2.5 | null | 2010-08-13T12:47:16.497 | 2010-08-13T12:47:16.497 | null | null | 159 | null |
1648 | 2 | null | 1645 | 25 | null | The [fBasics](http://cran.r-project.org/web/packages/fBasics/index.html) package in R (part of [Rmetrics](https://www.rmetrics.org/)) includes [several normality tests](http://hosho.ees.hokudai.ac.jp/~kubo/Rdoc/library/fBasics/html/NormalityTests.html), covering many of the popular [frequentist tests](http://en.wikipedia.org/wiki/Normality_test#Frequentist_tests) -- Kolmogorov-Smirnov, Shapiro-Wilk, Jarque–Bera, and D'Agostino -- along with a wrapper for the normality tests in the [nortest](http://cran.r-project.org/web/packages/nortest/index.html) package -- Anderson–Darling, Cramer–von Mises, Lilliefors (Kolmogorov-Smirnov), Pearson chi–square, and Shapiro–Francia. The package documentation also provides all the important references. Here is a demo that shows how to use the [tests from nortest](http://duncanjg.files.wordpress.com/2008/12/ksdemo3.pdf).
One approach, if you have the time, is to use more than one test and check for agreement. The tests vary in a number of ways, so it isn't entirely straightforward to choose "the best". What do other researchers in your field use? This can vary and it may be best to stick with the accepted methods so that others will accept your work. I frequently use the Jarque-Bera test, partly for that reason, and Anderson–Darling for comparison.
You can look at ["Comparison of Tests for Univariate Normality"](http://interstat.statjournals.net/YEAR/2002/articles/0201001.pdf) (Seier 2002) and ["A comparison of various tests of normality"](http://www.informaworld.com/smpp/content~db=all~content=a759350109) (Yazici; Yolacan 2007) for a comparison and discussion of the issues.
It's also trivial to test these methods for comparison in R, thanks to all the [distribution functions](http://cran.r-project.org/doc/manuals/R-intro.html#Probability-distributions). Here's a simple example with simulated data (I won't print out the results to save space), although a more full exposition would be required:
```
library(fBasics); library(ggplot2)
set.seed(1)
# normal distribution
x1 <- rnorm(1e+06)
x1.samp <- sample(x1, 200)
qplot(x1.samp, geom="histogram")
jbTest(x1.samp)
adTest(x1.samp)
# cauchy distribution
x2 <- rcauchy(1e+06)
x2.samp <- sample(x2, 200)
qplot(x2.samp, geom="histogram")
jbTest(x2.samp)
adTest(x2.samp)
```
Once you have the results from the various tests over different distributions, you can compare which were the most effective. For instance, the Jarque-Bera test above returned a p-value of 0.276 for the normal distribution (failing to reject normality) and < 2.2e-16 for the Cauchy (rejecting the null hypothesis).
| null | CC BY-SA 3.0 | null | 2010-08-13T13:32:27.913 | 2015-02-22T12:37:27.600 | 2015-02-22T12:37:27.600 | 22047 | 5 | null |
1649 | 1 | 1650 | null | 5 | 568 | Say I observe two groups of 10 people, measuring some quantity 100 times in each person. There will presumably be some variability across these 100 measures in each person. Can I use mixed effects analysis to assess whether this within-person variability is, on average, different between the two groups? For example, using traditional statistics, I could compute the standard deviation (SD) within each person then submit these SDs to an anova comparing the groups, but I wonder if this two-stage process can be replaced by a single mixed effects model, consequently obtaining the various advantages of mixed effects modelling (shrinkage, accounting for different numbers of observations per person, etc) as well.
To be clear, here is R code depicting the scenario described above and the SD/anova-based approach:
```
set.seed(1)
group_A_base_sd = 1
group_B_base_sd = 2
within_group_sd_of_sds = .1
n_per_group = 10
obs_per_id = 100
temp = data.frame(
id = 1:(n_per_group*2)
, group = rep(c('A','B'))
)
#generate example data
library(plyr) #to avoid loops (for coding convenience only)
obs_data = ddply(
.data = temp
, .variables = .(id,group)
, .fun = function(x){
#generate a unique sd for this individual
# based on their group's sd plus some
# within-group variability
id_sd = ifelse(
x$group=='A'
, rnorm(
1
, group_A_base_sd
, within_group_sd_of_sds
)
, rnorm(
1
, group_B_base_sd
, within_group_sd_of_sds
)
)
#generate data points with the above generated
# variability
to_return = data.frame(
obs_num = 1:obs_per_id
, measurement = rnorm(obs_per_id,0,id_sd)
)
return(to_return)
}
)
#first step of an anova-based approach:
# compute SDs within each Ss
obs_sds = ddply(
.data = obs_data
, .variables = .(id,group)
, .fun = function(x){
to_return = data.frame(
obs_sd = sd(x$measurement)
)
}
)
#second step of an anova-based approach:
# compute the anova on the SDs
summary(
aov(
formula = obs_sd~group
, data = obs_sds
)
)
```
| Using mixed effects modelling to estimate and compare variability | CC BY-SA 2.5 | null | 2010-08-13T13:58:38.567 | 2010-08-17T06:26:59.823 | null | null | 364 | [
"variance",
"mixed-model"
]
|
1650 | 2 | null | 1649 | 4 | null | You can structure the model along the following lines. Let,
$j = 1, 2$ be the two groups and
$i$ index the individuals in the two groups.
Then your model is:
$y_{ij} \sim N(\mu_j,\sigma_j^2)$ $\forall \ i, j$
$\sigma_j^2 \sim IG(v,1)$ $\forall \ j$
(Note: $IG(.)$ is the inverse gamma distribution.)
Priors
$\mu_j \sim N(\bar{\mu},\sigma_{\mu}^2)$
$v \sim IG(\bar{v},1)$ is the prior for $v$.
The above structure will let you shrink the error variances ($\sigma_j^2$) appropriately. You can then evaluate whether the within group variability is different by looking at the credible intervals associated with the group variabilities.
| null | CC BY-SA 2.5 | null | 2010-08-13T14:17:15.793 | 2010-08-13T14:17:15.793 | null | null | null | null |
1651 | 1 | null | null | 6 | 2188 | I need to fit $Y_{ij} \sim NegBin(m_{ij},k)$, i.e. a negative binomial distribution to count data. However, the data I have observed are censored - I know the value of $y_{ij}$, but it could be more than that value. The log-likelihood is
\begin{equation}
ll = \sum_{i=1}^n w_i \left( c_i \log P(Y_{ij}=y_{ij}|X_{ij}) + (1- c_i) \log\left(1- \sum_{k=1}^{32} P(Y_{ij} = k|X_{ij})\right) \right)
\end{equation}
where $X_{ij}$ represents the design matrix (with the covariates of interest), $w_i$ is the weight for each observation, $y_{ij}$ is the response variable, and $P(Y_{ij}=y_{ij}|X_{ij})$ is the negative binomial probability with $m_{ij}=\exp(X_{ij} \beta)$ and over-dispersion parameter $\alpha$.
Does anyone know of an R package to tackle this problem?
| How to fit a negative binomial distribution in R while incorporating censoring | CC BY-SA 2.5 | null | 2010-08-13T14:28:02.520 | 2010-08-17T13:21:27.277 | 2010-08-17T13:21:27.277 | 8 | null | [
"r",
"censoring",
"negative-binomial-distribution"
]
|
1652 | 2 | null | 534 | 10 | null | Correlation alone never implies causation. It's that simple.
But it's very rare to have only a correlation between two variables. Often you also know something about what those variables are and a theory, or theories, suggesting why there might be a causal relationship between the variables. If not, then why bother checking for a correlation? (However, people mining massive correlation matrices for significant results often have no causal theory - otherwise, why bother mining. A counterargument to that is that often some exploration is needed to get ideas for causal theories. And so on and so on...)
A response to the common criticism "Yeah, but that's just a correlation: it doesn't imply causation":
- For a causal relationship, correlation is necessary. A repeated failure to find a correlation would be bad news indeed.
- I didn't just give you a correlation.
- Then go on to explain possible causal mechanisms explaining the correlation...
| null | CC BY-SA 2.5 | null | 2010-08-13T15:21:55.373 | 2010-08-13T15:21:55.373 | null | null | 702 | null |
1653 | 2 | null | 1637 | 26 | null | The t-test is simply a special case of the F-test where only two groups are being compared. The result of either will be exactly the same in terms of the p-value, and there is a simple relationship between the F and t statistics as well: F = t^2. The two tests are algebraically equivalent and their assumptions are the same.
In fact, these equivalences extend to the whole class of ANOVAs, t-tests, and linear regression models. The t-test is a special case of ANOVA. ANOVA is a special case of regression. All of these procedures are subsumed under the General Linear Model and share the same assumptions.
- Independence of observations.
- Normality of residuals = normality in each group in the special case.
- Equality of variances of residuals = equal variances across groups in the special case.
You might think of it as normality in the data, but you are checking for normality in each group--which is actually the same as checking for normality in the residuals when the only predictor in the model is an indicator of group. Likewise with equal variances.
Just as an aside, R does not have separate routines for ANOVA. The anova functions in R are just wrappers to the lm() function--the same thing that is used to fit linear regression models--packaged a little differently to provide what is typically found in an ANOVA summary rather than a regression summary.
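A quick check of these equivalences on simulated (made-up) data; note the pooled-variance t-test must be used:
```
set.seed(42)
y <- c(rnorm(20, mean = 0), rnorm(20, mean = 0.5))
g <- factor(rep(c("A", "B"), each = 20))
t_res <- t.test(y ~ g, var.equal = TRUE)   # pooled two-sample t-test
a_res <- anova(lm(y ~ g))                  # one-way ANOVA via lm()
unname(t_res$statistic^2)                  # t^2 ...
a_res$"F value"[1]                         # ... equals F, and the p-values match
```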
| null | CC BY-SA 2.5 | null | 2010-08-13T15:24:20.490 | 2010-08-13T15:42:56.300 | 2010-08-13T15:42:56.300 | 485 | 485 | null |
1654 | 2 | null | 173 | 5 | null | You may try to model your data using a Dynamic Generalized Linear Model (DGLM). In R, you can fit this kind of model using the packages sspir and KFAS. In a sense, this is similar to the gam approach suggested by Rob, except that instead of assuming that the log mean of the Poisson observations is a smooth function of time, it assumes that it follows stochastic dynamics.
| null | CC BY-SA 2.5 | null | 2010-08-13T15:31:27.857 | 2010-08-13T15:31:27.857 | null | null | null | null |
1655 | 2 | null | 1601 | 9 | null | John Tukey systematically discusses transformations in his book on EDA. In addition to the Box-Cox family (affinely scaled power transformations) he defines a family of "folded" transformations for proportions (essentially powers of x/(1-x)) and "started" counts (adding a positive offset to counted data before transforming them). The folded transformations, which essentially generalize the logit, are especially useful for test scores.
In a completely different vein, Johnson & Kotz in their books on distributions offer many transformations intended to convert test statistics to approximate normality (or to some other target distribution), such as the cube-root transformation for chi-square. This material is a great source of ideas for useful transformations when you anticipate your data will follow some specific distribution.
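As a small R illustration of the cube-root chi-square idea (this is the Wilson–Hilferty approximation; the degrees of freedom below are chosen arbitrarily):
```
set.seed(1)
k <- 5
x <- rchisq(1e5, df = k)
y <- (x / k)^(1/3)          # cube-root transformation
qqnorm(y); qqline(y)        # close to a straight line
c(mean(y), 1 - 2/(9*k))     # sample mean vs approximate theory
c(var(y), 2/(9*k))          # sample variance vs approximate theory
```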
| null | CC BY-SA 2.5 | null | 2010-08-13T15:48:29.310 | 2010-08-13T15:48:29.310 | null | null | 919 | null |
1656 | 2 | null | 1645 | 3 | null | For completeness, econometricians also like the Kiefer and Salmon test from their 1983 paper in Economics Letters -- it sums 'normalized' expressions of skewness and kurtosis which is then chi-square distributed. I have an old C++ version I wrote during grad school I could translate into R.
Edit: And [here](http://econ.la.psu.edu/~hbierens/NORMTEST.PDF) is a recent paper by Bierens (re-)deriving Jarque-Bera and Kiefer-Salmon.
Edit 2: I looked over the old code, and it seems that the Jarque-Bera and Kiefer-Salmon tests really are the same test.
| null | CC BY-SA 2.5 | null | 2010-08-13T15:54:37.857 | 2010-08-13T16:54:43.857 | 2010-08-13T16:54:43.857 | 334 | 334 | null |
1657 | 2 | null | 726 | 5 | null | [CauseWeb](http://www.causeweb.org/) has a collection of statistics quotations. Many have already been repeated here, but it has plenty that haven't yet been quoted, such as
>
"The only statistics you can trust are those you falsified yourself."
(Falsely attributed to Sir Winston Churchill.) For the rest, follow the CauseWeb links to Resources->Fun->Quote.
| null | CC BY-SA 2.5 | null | 2010-08-13T16:04:08.990 | 2010-08-17T22:11:08.277 | 2010-08-17T22:11:08.277 | 919 | 919 | null |
1658 | 2 | null | 726 | 7 | null | "I cannot conceal the fact here that in the [application of probability theory], I foresee many things happening which can cause one to be badly mistaken if he does not proceed cautiously.",
Bernoulli (1713) (via ET Jaynes)
"A statistician is someone who knows what to assume to be Gaussian"
Dikran Marsupial (2009) (not famous yet ;o).
| null | CC BY-SA 2.5 | null | 2010-08-13T16:11:20.950 | 2010-08-13T16:11:20.950 | null | null | 887 | null |
1659 | 2 | null | 1462 | 4 | null | A player's yardage is unlikely to be anywhere near normally distributed. If it were, your guy averaging 5.3 give or take 1.7 yards would almost never lose yards or gain more than 11 yards on any play in the entire season. Gone is the excitement of the game, to be replaced by some statistical mediocrity. If football were played like this, a team's chances of making a set of downs would be almost certain; there would almost never be a loss of downs; and the game would simply be determined by who won the initial coin flip and got on the field first.
Why not just draw a value at random from a list of the player's recent gains (and losses)? It's fairly easy to program: you just have to generate a uniformly distributed integer to index into an array of the gains. It doesn't require any kind of statistical model--no need to fit anything. It can account for change in the player's ability over time (just by selecting which time period you will use to draw the data from). And it's obviously driven by "real-live data."
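A minimal sketch of that resampling idea in R (the list of recent gains below is invented for illustration):
```
recent_gains <- c(-2, 0, 1, 2, 3, 3, 4, 5, 5, 6, 7, 9, 12, 18, -1)  # hypothetical history
sample(recent_gains, size = 1)                          # one simulated carry
plays <- sample(recent_gains, size = 1000, replace = TRUE)
mean(plays); quantile(plays, c(0.05, 0.95))             # sanity-check the simulated season
```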
| null | CC BY-SA 2.5 | null | 2010-08-13T16:15:41.417 | 2010-08-13T16:15:41.417 | null | null | 919 | null |
1660 | 1 | 1665 | null | 11 | 2369 | I need to define what a test of independence is, without the use of heavily statistic terms.
| What is a test of independence? | CC BY-SA 3.0 | null | 2010-08-13T16:43:45.133 | 2015-12-19T16:29:50.090 | 2015-12-19T16:29:50.090 | 28666 | 559 | [
"hypothesis-testing",
"independence",
"definition"
]
|
1661 | 1 | null | null | 2 | 720 | X1 is wing length, X2 is tail length for 45 male and 45 female bugs.
Which 2-sample univariate t-test should I use?
My thought was to use Hotelling's T-square?
But Hotelling's is multi-variate not univariate. Now, I'm not sure...
Any ideas?
| Which 2-sample univariate t-test to use? | CC BY-SA 2.5 | null | 2010-08-13T17:21:37.057 | 2010-08-17T18:31:59.927 | 2010-08-15T08:47:32.430 | null | null | [
"t-test"
]
|
1662 | 2 | null | 1661 | 1 | null | While your question is not clear (which means do you want to compare?), you can consult the wiki: [Comparing Means](http://en.wikipedia.org/wiki/Comparing_means) to decide what to do.
| null | CC BY-SA 2.5 | null | 2010-08-13T17:39:02.347 | 2010-08-13T17:39:02.347 | null | null | null | null |
1663 | 2 | null | 1660 | 2 | null | Why don't you take the definition of wikipedia. It's quite short und doesn't use heavily statistic terms.
>
A test of independence assesses whether paired observations on two variables, expressed in a contingency table, are independent of each other – for example, whether people from different regions differ in the frequency with which they report that they support a political candidate.
| null | CC BY-SA 2.5 | null | 2010-08-13T18:27:56.683 | 2010-08-13T18:27:56.683 | null | null | 927 | null |
1664 | 2 | null | 1661 | 1 | null | As others have said, you need to clarify your question.
However, I'm guessing that you want to determine if wing length or tail length differ between male and female bugs. In this case I would just do a couple of [two sample t-tests](http://en.wikipedia.org/wiki/Student%27s_t-test#Independent_two-sample_t-test). So for wing length you would have the following hypothesis:
```
H_0: wing length does not depend on gender
H_1: wing length differs by gender.
```
You would have something similar for tail length.
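If the measurements were in a data frame, a minimal R sketch would be (the object and column names here are hypothetical):
```
# bugs: data frame with columns wing_length, tail_length and gender (45 M, 45 F)
t.test(wing_length ~ gender, data = bugs)   # Welch two-sample t-test by default
t.test(tail_length ~ gender, data = bugs)
```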
| null | CC BY-SA 2.5 | null | 2010-08-13T19:29:26.220 | 2010-08-13T19:29:26.220 | null | null | 8 | null |
1665 | 2 | null | 1660 | 5 | null | I would start by defining what you mean by independence. For example,
>
If two variables are independent this
means that knowing the value of one
variable does not tell you anything
about the value of the other variable.
Then I would describe the test:
>
To test for independence we construct
a table of values that we would expect
to see if the variables were
independent. If we observed something
"very" different from these expected
values, we would conclude that the
variables are unlikely to be independent.
| null | CC BY-SA 2.5 | null | 2010-08-13T19:39:43.713 | 2010-08-13T19:39:43.713 | null | null | 8 | null |
1666 | 2 | null | 414 | 12 | null | I loved the Freedman, Pisani, Purves' [Statistics](https://wwnorton.com/books/Statistics/) text because it is extremely non-mathematical. As a mathematician, you will find it to be such a clear guide to the statistical concepts that you will be able to develop all the mathematical theory as an exercise: that's a rewarding thing to do. (The first edition of this text was my initiation to statistics after I completed a PhD in pure mathematics and I still enjoy re-reading it.)
| null | CC BY-SA 4.0 | null | 2010-08-13T21:22:56.557 | 2023-02-11T10:04:41.897 | 2023-02-11T10:04:41.897 | 362671 | 919 | null |
1667 | 1 | null | null | 5 | 4810 | As an engineer, I'm interested in topics such as designing experiments that are statistically valid, quality control, process control, reliability, and cost control. I took a course in engineering statistics, but unfortunately neither the book nor the professor were that good. I did OK in the course, but I'm interested in learning more about these topics and how to apply them to engineering problems. I would prefer a general book that covered as many of these topics as possible - great depth is not needed.
I think that I can learn a lot about improving my abilities by looking at how all engineering disciplines use statistics, so I'm not looking for any particular engineering field.
Can the Statistical Analysis community recommend books that I can use to learn more about applying statistics to engineering problems?
| What books provide an overview of engineering statistics? | CC BY-SA 2.5 | null | 2010-08-13T22:29:08.050 | 2010-08-14T01:32:29.947 | 2010-08-14T01:21:36.430 | 110 | 110 | [
"references",
"engineering-statistics"
]
|
1668 | 1 | null | null | 16 | 5548 | As a software engineer, I'm interested in topics such as statistical algorithms, data mining, machine learning, Bayesian networks, classification algorithms, neural networks, Markov chains, Monte Carlo methods, and random number generation.
I personally haven't had the pleasure of working hands-on with any of these techniques, but I have had to work with software that, under the hood, employed them and would like to know more about them, at a high level. I'm looking for books that cover a great breadth - great depth is not necessary at this point. I think that I can learn a lot about software development if I can understand the mathematical foundations behind the algorithms and techniques that are employed.
Can the Statistical Analysis community recommend books that I can use to learn more about implementing various statistical elements in software?
| What books provide an overview of computational statistics as it applies to computer science? | CC BY-SA 2.5 | null | 2010-08-13T22:35:50.853 | 2018-09-07T06:27:28.540 | 2010-08-14T01:21:07.197 | 110 | 110 | [
"references",
"computational-statistics"
]
|
1669 | 2 | null | 1667 | 4 | null | NIST/SEMATECH e-Handbook of Statistical Methods is a good start. Free and online:
[http://www.itl.nist.gov/div898/handbook/](http://www.itl.nist.gov/div898/handbook/)
| null | CC BY-SA 2.5 | null | 2010-08-13T23:58:11.623 | 2010-08-13T23:58:11.623 | null | null | 74 | null |
1670 | 2 | null | 1667 | 1 | null | When I took the Engineering Statistics course I mentioned in the question, the assigned textbook wasn't very helpful. Instead, I used [Probability and Statistics for Engineers and Scientists - Anthony Hayter](http://rads.stackoverflow.com/amzn/click/0495107573) to get through the course. It didn't cover everything in the same order and depth as the course, but it was sufficient to get me through the material and get a passing grade.
Topics covered include probability theory, random variables, discrete and continuous probability distributions, normal distributions, descriptive statistics, statistical estimation and sampling distributions, population means, discrete data analysis, ANOVA, linear regression, nonlinear regression, multifactor experimental design and analysis, nonparametric statistical analysis, quality control methods, and reliability analysis. Unfortunately, the course only covered the first 11 chapters, and occasionally in more depth than this book went into.
| null | CC BY-SA 2.5 | null | 2010-08-14T00:58:21.737 | 2010-08-14T00:58:21.737 | null | null | 110 | null |
1671 | 2 | null | 1668 | 1 | null | I picked up a copy of [Probability and Statistics for Computer Scientists - Michael Baron](http://rads.stackoverflow.com/amzn/click/1584886412) on sale with another statistics book (I honestly bought it because of the name - I wanted a book that would take some kind of look at statistics from a computer science perspective, even if it wasn't perfect). I haven't had a chance to read it or work any problems in it yet, but it seems like a solid book.
The preface of the book says that it's for upper level undergraduate students and beginning graduate students, and I would agree with this. Some understanding of probability and statistics is necessary to grasp the contents of this book.
Topics include probability, discrete random variables, continuous distributions, Monte Carlo methods, stochastic processes, queuing systems, statistical inference, and regression.
| null | CC BY-SA 2.5 | null | 2010-08-14T01:02:31.637 | 2010-08-14T01:02:31.637 | null | null | 110 | null |
1672 | 2 | null | 1668 | 1 | null | Although it's not specifically computational statistics, [A Handbook of Statistical Analyses Using R - Brian S. Everitt and Torsten Hothorn](http://rads.stackoverflow.com/amzn/click/1584885394) covers a lot of topics that I've seen covered in basic and intermediate statistics books - inference, ANOVA, linear regression, logistic regression, density estimation, recursive partitioning, principal component analysis, and cluster analysis - using the R language. This might be of interest to those interested in programming.
However, unlike other books, the emphasis is on using the R language to carry out these statistical functions. Other books I've seen use combinations of algebra and calculus to demonstrate statistics. This book actually focuses on how to analyze data using the R language. And to make it even more useful, the data sets the authors use are in CRAN - the R Repository.
| null | CC BY-SA 2.5 | null | 2010-08-14T01:09:33.820 | 2010-08-14T01:09:33.820 | null | null | 110 | null |
1673 | 2 | null | 1668 | 1 | null | [Statistical Computing with R - Maria L. Rizzo](http://rads.stackoverflow.com/amzn/click/1584885459) covers a lot of the topics in Probability and Statistics for Computer Scientists - basic probability and statistics, random variables, Bayesian statistics, Markov chains, visualization of multivariate data, Monte Carlo methods, Permutation tests, probability density estimation, and numerical methods.
The equations and formulas used are presented both as mathematical formulas as well as in R code. I would say that a basic knowledge of probability, statistics, calculus, and maybe discrete mathematics would be advisable for anyone who wants to read this book. A programming background would also be helpful, but there are some references for the R language, operators, and syntax.
| null | CC BY-SA 2.5 | null | 2010-08-14T01:16:16.037 | 2010-08-14T01:16:16.037 | null | null | 110 | null |
1674 | 1 | 1693 | null | 8 | 692 | On page 331 of "Elements of Information Theory" (1991), author says that while entropy is related to the volume of the typical set, Fisher information is related to the surface area of the typical set, but I can't find anything more on this...can anyone explain this connection?
| Fisher information and the "surface area of the typical set" | CC BY-SA 2.5 | null | 2010-08-14T01:31:43.443 | 2010-09-17T20:26:19.617 | 2010-09-17T20:26:19.617 | null | 511 | [
"information-theory"
]
|
1675 | 2 | null | 1667 | 1 | null | The book [Statistical Methods for Engineers - Geoffrey Vining](http://rads.stackoverflow.com/amzn/click/053873518X) is used in my university's Engineering Statistics course. However, I do not recommend this book. When I took the course, I ended up not being able to learn from the professor, so I was using this book to teach myself the material. It went along with the course in terms of content and depth, but I found the examples presented to be confusing and not as clear as they could have been. If you have a strong statistics background to begin with, it might be a suitable book, but this was the first statistics course that I had taken (and the only one required). There were no errors with the book - the examples and solutions were all correct.
If you are an engineer with a preexisting background in statistics, perhaps it might be worth it to visit your local library and check it out first before you buy it - it might work better for you than it did for me.
| null | CC BY-SA 2.5 | null | 2010-08-14T01:32:29.947 | 2010-08-14T01:32:29.947 | null | null | 110 | null |
1676 | 1 | null | null | 7 | 2389 | Duplicate thread: [What R packages do you find most useful in your daily work?](https://stats.stackexchange.com/questions/73/what-r-packages-do-you-find-most-useful-in-your-daily-work)
Are there any R packages that are just plain good to have, regardless of the type of work you are doing? If so, what are these packages? If not, what packages do you find the most useful?
| I just installed the latest version of R. What packages should I obtain? | CC BY-SA 3.0 | 0 | 2010-08-14T01:39:23.253 | 2020-10-19T07:15:38.787 | 2017-04-13T12:44:24.667 | -1 | 110 | [
"r"
]
|
1677 | 2 | null | 1668 | 3 | null | You might want to read the extremely popular question on Stack Overflow on
[what statistics a programmer or computer scientist should know](https://stackoverflow.com/questions/2039904/what-statistics-should-a-programmer-or-computer-scientist-know).
| null | CC BY-SA 2.5 | null | 2010-08-14T01:41:18.490 | 2010-08-14T01:41:18.490 | 2017-05-23T12:39:26.167 | -1 | 183 | null |
1678 | 2 | null | 73 | 6 | null | I imagine graphics and data manipulation are two things that are useful no matter what you are doing. Thus, I'd recommend:
- ggplot2 (great graphics)
- lattice (great graphics)
- plyr (useful for data manipulation)
- Hmisc (good for descriptive statistics and much more)
| null | CC BY-SA 2.5 | null | 2010-08-14T01:46:57.587 | 2010-08-14T01:46:57.587 | null | null | 183 | null |
1680 | 2 | null | 73 | 3 | null | This is definitely a question that doesn't have "an answer". It is completely dependent on what you want to do. That aside, I'll share the packages that I install as a standard with an R update...
```
install.packages(c("car","gregmisc","xtable","Design","Hmisc","psych",
"CCA", "fda", "zoo", "fields",
"catspec","sem","multilevel","Deducer","RQDA"))
```
and leave it to you to investigate those packages and see if they are valuable to you.
| null | CC BY-SA 2.5 | null | 2010-08-14T04:58:27.553 | 2010-08-14T04:58:27.553 | null | null | 485 | null |
1681 | 1 | 2081 | null | 4 | 385 | The Chernoff bound (for [absolute error](http://en.wikipedia.org/wiki/Chernoff_bound#Theorem_for_additive_form_.28absolute_error.29)) gives a bound on the probability of a large deviation in terms of sample size and amount of deviation, but it doesn't seem possible to rewrite it to give an explicit bound on the amount of deviation. So, what is a good way to bound the largest deviation from the mean in terms of sample size and probability of that deviation?
| Chernoff-like bound for largest allowed deviation? | CC BY-SA 2.5 | null | 2010-08-14T07:16:00.080 | 2023-02-11T12:54:06.857 | 2011-04-29T00:15:43.423 | 3911 | 511 | [
"probability"
]
|
1682 | 2 | null | 31 | 12 | null | What the p-value doesn't tell you is how likely it is that the null hypothesis is true. Under the conventional (Fisher) significance testing framework we first compute the likelihood of observing the data assuming the null hypothesis is true; this is the p-value. It seems intuitively reasonable then to assume the null hypothesis is probably false if the data are sufficiently unlikely to be observed under the null hypothesis. This is entirely reasonable. Statisticians traditionally use a threshold and "reject the null hypothesis at the 95% significance level" if (1 - p) > 0.95; however this is just a convention that has proven reasonable in practice - it doesn't mean that there is less than 5% probability that the null hypothesis is true (and therefore a 95% probability that the alternative hypothesis is true). One reason that we can't say this is that we have not looked at the alternative hypothesis yet.
Imagine a function f() that maps the p-value onto the probability that the alternative hypothesis is true. It would be reasonable to assert that this function is strictly decreasing (such that the more likely the observations under the null hypothesis, the less likely the alternative hypothesis is true), and that it gives values between 0 and 1 (as it gives an estimate of probability). However, that is all that we know about f(), so while there is a relationship between p and the probability that the alternative hypothesis is true, it is uncalibrated. This means we cannot use the p-value to make quantitative statements about the plausibility of the null and alternative hypotheses.
Caveat lector: It isn't really within the frequentist framework to speak of the probability that a hypothesis is true, as it isn't a random variable - it is either true or it isn't. So where I have talked of the probability of the truth of a hypothesis I have implicitly moved to a Bayesian interpretation. It is incorrect to mix Bayesian and frequentist; however, there is always a temptation to do so as what we really want is a quantitative indication of the relative plausibility/probability of the hypotheses. But this is not what the p-value provides.
| null | CC BY-SA 2.5 | null | 2010-08-14T07:52:35.467 | 2010-08-14T07:52:35.467 | null | null | 887 | null |
1683 | 2 | null | 596 | 1 | null | I think semi-supervised methods may be what you are looking for; there is quite a lot of literature on this in machine learning. There is a good [book](http://www.amazon.co.uk/Semi-Supervised-Learning-Adaptive-Computation-Machine/dp/0262514125) on this topic, which gives a good idea of recent developments in this area.
An E.M.-like algorithm for logistic regression (a discriminative model) is easily implemented as follows:
(i) Train a LR model using only the labelled data.
(ii) Use the LR model to assign labels to the unlabelled data.
(iii) Train a new LR model using both the labelled and unlabelled data (using the predicted labels).
(iv) repeat (ii) and (iii) until convergence is reached (i.e. none of the predicted labels for the unlabelled examples change).
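A minimal self-training sketch of steps (i)–(iv) using R's `glm` for the logistic regression; the data frames `labelled` (with a 0/1 outcome column `y`) and `unlabelled` (same predictors, no `y`) are assumed to exist:
```
fit <- glm(y ~ ., data = labelled, family = binomial)                # step (i)
old_labels <- rep(NA, nrow(unlabelled))
repeat {
  p <- predict(fit, newdata = unlabelled, type = "response")
  new_labels <- as.numeric(p > 0.5)                                  # step (ii)
  if (isTRUE(all(new_labels == old_labels))) break                   # step (iv)
  old_labels <- new_labels
  augmented <- rbind(labelled, transform(unlabelled, y = new_labels))
  fit <- glm(y ~ ., data = augmented, family = binomial)             # step (iii)
}
```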
This works quite well for some problems (e.g. text classification), not so well in others. EM also works well with naive Bayes. McLachlan's excellent [book](http://www.amazon.co.uk/Discriminant-Statistical-Recognition-Probability-Statistics/dp/0471691151) on discriminant analysis also has some material on basic algorithms.
HTH
| null | CC BY-SA 3.0 | null | 2010-08-14T08:12:41.753 | 2018-03-27T10:16:59.000 | 2018-03-27T10:16:59.000 | 887 | 887 | null |
1684 | 2 | null | 73 | 3 | null | If you are working with Latex, I recommend TikZ Device for outputting nice, Latex-formatted (like PSTricks) graphics. The output you get is text-based Latex code, which can be embedded with include(filename) into any figure environment.
Pros:
- Same font in graphics as in your text
- Professional look
Cons:
- Takes longer to compile than PNG or PDF
- for very complex R graphics, there could be some display errors
[https://github.com/Sharpie/RTikZDevice](https://github.com/Sharpie/RTikZDevice) - Project, Packages available from CRAN and R-Forge
| null | CC BY-SA 3.0 | null | 2010-08-14T08:51:32.627 | 2011-05-10T22:15:06.783 | 2011-05-10T22:15:06.783 | 13 | 939 | null |
1685 | 2 | null | 1383 | 0 | null | So, here's the example: [http://nishi.dreamhosters.com/u/book1bwt_de.txt](http://nishi.dreamhosters.com/u/book1bwt_de.txt)
It's a list of choices between 'd' and 'e' in the coding of the BWT output of a plaintext book.
(This is a practical task, think bzip2).
For reference, the current result is 6087 bytes = 48686 bits (i.e. the log-likelihood) for that string of 99054 bits, which is reasonably good but not really the best possible.
| null | CC BY-SA 2.5 | null | 2010-08-14T12:10:58.203 | 2010-08-14T12:10:58.203 | null | null | 799 | null |
1686 | 2 | null | 73 | 12 | null | In a narrow sense, R Core has a recommendation: the "recommended" packages.
Everything else depends on your data analysis tasks at hand, and I'd recommend the [Task Views](http://cran.r-project.org/web/views) at CRAN.
| null | CC BY-SA 2.5 | null | 2010-08-14T13:06:45.767 | 2010-08-14T13:06:45.767 | null | null | 334 | null |
1687 | 1 | 1762 | null | 2 | 446 | The problem I’m trying to solve is “How do I figure out how much gunpowder should I put into a cartridge so that I can give myself a good probability of making the minimum power factor?”
I compete in USPSA/IPSC, which requires that a competitor's rounds make a minimum power factor. Power Factor is computed as FLOOR((average bullet velocity * bullet weight) / 1000), where velocity is in feet per second and bullet weight is in grains. Note the use of FLOOR. No rounding is done. Only the integral part of the computation is used. The higher the power factor, the higher the felt recoil and the harder it is to quickly do follow-up shots. Since the sport is about firing shots as quickly and as accurately as possible, the lower the recoil the better.
Different divisions within the sport have different power factor floors, but the particular division I compete in has a minimum power factor of 165. The bullets I use are 180 gn bullets; their weight varies by about +/- 0.2 gn and is normally distributed.
What makes this an interesting problem (and moves it out of my meager stats and probability skills) is the testing procedure during a major match. A random sample of 8 rounds is collected. Of the 8 rounds, one is taken apart and the bullet is weighed for use in the formula above. Next, 3 rounds are fired and the average velocity is used. If the resulting power factor is below the minimum, then another 3 rounds are fired. The average of the 3 fastest velocities from the 6 rounds fired is now used to compute the power factor. If the resulting power factor is still below the floor, then the shooter has the option of having the last round taken apart and weighed, or fired. If the bullet is taken apart and it is heavier than the first bullet, the heavier weight is used to compute the power factor. If the last round is fired, the average of the 3 fastest velocities from the 7 rounds fired is used to compute the power factor.
To add spice to this problem, not all chronographs used to measure bullet velocities are created equal. The chronograph industry acknowledges that there can be as much as +/- 4% variance between chronographs of different brands. Even more interesting is that the rules allow for the same chronograph used for a particular match to have +/- 4% variance over the duration of a match. I don’t know if either of these 4% variances are normally distributed or not.
With my own chronograph, I test batches of a particular gunpowder load to get the average velocity and standard deviation. After statistical analyses of many different batches, I’ve confirmed that this data is normally distributed.
The way I'm currently determining my minimum load is by finding the load that gives me 165 < FLOOR(target * 179.9 / 1000), where target = (average velocity - standard deviation) * fudge factor. For the fudge factor, I've unscientifically chosen 1.04. The 0.04 is the 4% variance between chronographs, but ignores the day-to-day allowable variance. I chose to subtract just 1 standard deviation because only about 16% of the time will a single bullet be below the floor. In my mind, the probability of the first 3 bullets all going below the floor is 0.16^3, which is less than half a percent.
My specific questions are: Am I going about computing the target the right way? Should my fudge factor include another 4% for the day-to-day variance allowed? Is the 1 standard deviation too much or too little? How should I write the formula for my target?
Edit:
After Srikant's initial response below let me add a couple of focusing questions and notes.
I understand figuring out the error due to my measurements. Not much problem there unless I get really sloppy with quality control or maintenance.
My grasp of probabilities is weak so please bear with me as I ask about computing probabilities:
1) One of the key issues I need to deal with is figuring out how to correctly compute the probability for the testing process around the 7 rounds. It's pretty straightforward to me for the case of the first 3 rounds: (probability that the round is below the power floor)^3. How do I account for the next 3 rounds and the last round?
I can see computing the probabilities using combinations of bullets above or below the floor, but it's not quite a binary above or below. Let's assume that 6 rounds have been fired, and the average of the highest 3 rounds is 164.9. If the last round has at least 165.2, then the average of the highest 3 rounds out of 7 will be 165.
2) The other issue I need to deal with is figuring out how to account for the 4% variance between my chrono and the match chrono, and how the match chrono is allowed to drift by 4% from day to day. Do I just assume the worst case and make sure that I'm at least 8% above 165 -- that is, that my rounds are shooting at least a 179 power factor? Or do I try to assume some kind of normal distribution over the two 4% variances?
| Applied statistics to find the minimum load for power factor floor | CC BY-SA 2.5 | null | 2010-08-14T14:00:16.297 | 2011-04-29T00:18:36.073 | 2011-04-29T00:18:36.073 | 3911 | 937 | [
"probability",
"standard-deviation"
]
|
1688 | 2 | null | 73 | 4 | null | You can get user reviews of packages on [crantastic](http://crantastic.org/reviews)
| null | CC BY-SA 2.5 | null | 2010-08-14T14:06:52.273 | 2010-08-14T14:06:52.273 | null | null | 573 | null |
1689 | 2 | null | 36 | 4 | null | Sperm count in males in Slovene villages and the number of bears (also in Slovenia) show a negative correlation. Some people find this very worrying. I'll try and get the study that did this.
| null | CC BY-SA 2.5 | null | 2010-08-14T17:42:41.177 | 2010-08-14T17:42:41.177 | null | null | 144 | null |
1690 | 2 | null | 36 | 7 | null | The standard citation pointing out the correlation between the number of newborn babies and breeding-pairs of storks in West Germany is [A new parameter for sex education, Nature 332, 495 (07 April 1988); doi:10.1038/332495a0](http://nature.com/nature/journal/v332/n6164/abs/332495a0.html)
| null | CC BY-SA 3.0 | null | 2010-08-14T17:59:20.630 | 2014-02-03T19:42:28.837 | 2014-02-03T19:42:28.837 | 22047 | 942 | null |
1691 | 2 | null | 1687 | 1 | null | You have a complex statistical problem and a complete analysis would be too long. However, I will suggest one idea that may perhaps help you to some extent.
You have already performed some calibration tests to assess the mean and standard deviation of the velocity of a bullet. However, I suspect that either your testing or your calculations or both are sub-optimal for the reasons I mention below.
The crucial point is this: The measured velocity of a bullet is a random variable that depends on three factors: (a) amount of gunpowder in the bullet, (b) random factors due to imperfections of the gun, and (c) errors because of chronograph inaccuracy. In other words, you can model the measured velocity of a bullet as follows:
$v_m = v(gp) + \epsilon_g + \epsilon_c$
where,
$v_m$ is the measured velocity,
$v(gp)$ is the true velocity of the bullet given that the bullet has $gp$ amount of gunpowder,
$\epsilon_g \sim N(0,\sigma_g^2)$ is the random error that arises due to imperfections of the gun and
$\epsilon_c \sim N(0,\sigma_c^2)$ is the error induced by the chronograph.
I think assuming that the errors are normally distributed with a mean zero and a finite variance is reasonable. Thus, we have:
$E(v_m) = v(gp)$
and
$Var(v_m) = \sigma_g^2 + \sigma_c^2$
The goal of testing is to get a sense of the average velocity and the standard deviation of the velocity of a bullet. You could simply compute the mean of the observed velocities to get a sense of the mean, as $E(v_m) = v(gp)$, but you cannot use the standard deviation of the observed velocities for setting your target, as the errors induced by the chronograph and the errors due to the imperfections of the gun are confounded.
Thus, you need to design your test to disentangle the variation induced by the chronograph and the intrinsic variation because of other factors. One way to achieve the above disentanglement is to perform a test as follows:
- Start test 1 and choose a certain amount of gunpowder that you will fill up each bullet for this test.
- Load the bullet with the amount of gunpowder you chose for this test in step 1.
- Measure the velocity of the bullet.
- Repeat steps 2 and 3 for a certain number of bullets.
- Perform several tests along the above lines ensuring that you choose a different amount of gunpowder for each test.
Let:
$i$ index the bullets,
$j$ index the test.
Then you have a set of velocities indexed $v_{ij}$. Then, you can calculate the variance of the velocity (and hence the standard deviation) as follows:
- Calculate the variance of all velocity measurements. Denote this number by $S_v^2$.
- Calculate the variance of velocities for each test separately. Denote each test specific variance by $S_j^2$. Note that each of these variance measures is a measure of how much variation is induced because of the chronograph as the amount of gunpowder is the same for each test.
Then a measure of the variance of the velocity that you can attribute to the gun itself would be:
$S_g^2 = S_v^2 - \frac {\sum_j S_j^2}{J}$
Once you know $S_g$ you can use your existing formula and there is no need to worry about the additional variation induced in your measurements because of your own chronograph as that has been 'taken care of' in our analysis.
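An illustrative R sketch of this decomposition, assuming a data frame `shots` with a column `test` (which calibration test the shot belongs to) and a column `v` (measured velocity); the names are mine, not from any package:
```r
S_v2 <- var(shots$v)                      # overall variance of all velocity measurements
S_j2 <- tapply(shots$v, shots$test, var)  # within-test variances (chronograph component)
S_g2 <- S_v2 - mean(S_j2)                 # variance attributed to the gun itself
S_g  <- sqrt(S_g2)                        # standard deviation to plug into the target formula
```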
Hope that helps to some extent.
| null | CC BY-SA 2.5 | null | 2010-08-14T18:32:47.380 | 2010-08-14T18:32:47.380 | null | null | null | null |
1692 | 2 | null | 36 | 15 | null | Although it's more of an illustration of the problem of multiple comparisons, it is also a good example of misattributed causation:
[Rugby (the religion of Wales) and its influence on the Catholic church: should Pope Benedict XVI be worried?](http://www.bmj.com/cgi/content/abstract/337/dec17_2/a2768)
>
"every time Wales win the rugby grand slam, a Pope dies, except for 1978 when Wales were really good, and two Popes died."
| null | CC BY-SA 2.5 | null | 2010-08-14T19:50:51.313 | 2010-08-14T19:50:51.313 | null | null | 495 | null |
1693 | 2 | null | 1674 | 5 | null | UPDATE
Tough crowd. :) For a concise account of connecting the trace of the Fisher matrix to surface area, please see section 4 ("Isoperimetric Inequalities") in the paper below. The crucial part is establishing the relation between differential entropy and the trace of the Fisher matrix, which the authors prove in the appendix.
- On the similarity of the entropy power inequality and the Brunn-Minkowski inequality
---
The basic intuition comes from the isoperimetric inequality, by which the sphere maximizes volume for a given surface area. We can arrive at a similar relationship concerning the trace of the Fisher information matrix and the entropy w.r.t. the Gaussian. The following may be helpful.
- Information Theoretic Inequalities for Contoured Probability Distributions
| null | CC BY-SA 2.5 | null | 2010-08-14T21:05:33.317 | 2010-08-17T04:25:48.237 | 2010-08-17T04:25:48.237 | 251 | 251 | null |
1694 | 2 | null | 534 | 11 | null | Sir Austin Bradford Hill's President's Address to the Royal Society of Medicine ([The Environment and Disease: Association or Causation?](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1898525/?tool=pubmed)) explains nine criteria which help to judge whether there is a causal relationship between two correlated or associated variables.
They are:
- Strength of the association
- Consistency: "has it been repeatedly observed by different persons, in different places, circumstances and times?"
- Specificity
- Temporality: "which is the cart and which is the horse?" - the cause must precede the effect
- Biological gradient (dose-response curve) - in what way does the magnitude of the effect depend upon the magnitude of the (suspected) causal variable?
- Plausibility - is there a likely explanation for causation?
- Coherence - would causation contradict other established facts?
- Experiment - does experimental manipulation of the (suspected) causal variable affect the (suspected) dependent variable?
- Analogy - have we encountered similar causal relationships in the past?
| null | CC BY-SA 2.5 | null | 2010-08-14T21:19:58.003 | 2010-08-14T21:19:58.003 | null | null | 942 | null |
1695 | 2 | null | 73 | 4 | null | I would suggest using some of the packages provided by [Revolution R](http://www.revolutionanalytics.com/). In particular, I quite like:
- the multicore package for parallel computing using shared-memory processors
- their optimized packages for matrices
| null | CC BY-SA 2.5 | null | 2010-08-14T21:52:11.520 | 2010-08-14T21:52:11.520 | null | null | 8 | null |
1696 | 2 | null | 44 | 6 | null | I would show them the raw data of [Anscombe's Quartet](http://en.wikipedia.org/wiki/Anscombe%27s_quartet) ([JSTOR link to the paper](http://jstor.org/stable/pdfplus/2682899.pdf)) in a big table, alongside another table showing the Mean & Variance of x and y, the correlation coefficient, and the equation of the linear regression line. Ask them to explain the differences between each of the 4 datasets. They will be confused.
Then show them 4 graphs. They will be enlightened.
| null | CC BY-SA 2.5 | null | 2010-08-14T21:55:32.190 | 2010-08-14T21:55:32.190 | null | null | 942 | null |
1697 | 2 | null | 73 | 2 | null | Jeromy mentioned my first pick: Lattice.
I also have found the `doBy` package and its `summaryBy` function to be insanely useful. They extend `aggregate` with a formula syntax that lets you apply multiple summary functions simultaneously in non-trivial ways. Great if you want, say, mean, std. dev., and length.
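For instance, something along these lines on a built-in data set (illustrative only):
```r
library(doBy)
# mean, standard deviation and count of mpg within each number of cylinders
summaryBy(mpg ~ cyl, data = mtcars, FUN = c(mean, sd, length))
```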
| null | CC BY-SA 3.0 | null | 2010-08-15T00:31:42.537 | 2014-03-06T00:03:45.983 | 2014-03-06T00:03:45.983 | 22468 | 389 | null |
1698 | 2 | null | 534 | 6 | null | A related question might be -- under what conditions can you reliably extract causal relations from data?
A 2008 NIPS [workshop](http://jmlr.csail.mit.edu/proceedings/papers/v6/) tried to address that question empirically. One of the tasks was to infer the direction of causality from observations of pairs of variables where one variable was known to cause another, and the best method was able to correctly extract causal direction 80% of the time.
| null | CC BY-SA 2.5 | null | 2010-08-15T00:35:01.157 | 2010-08-15T00:40:38.993 | 2010-08-15T00:40:38.993 | 511 | 511 | null |
1699 | 1 | 1706 | null | 6 | 2092 | I am trying to solve for an efficient portfolio in R. How do I translate my constraints for the tangency point of a 2-risky-asset portfolio, with a given risk-free rate, into the form expected by R's solve.QP function? So basically I have the following equations:
```
w = weight of the first risky asset
R1 = mean return of the first risky asset
R2 = mean return of the second risky asset
sd1 = sdev of first risky asset
sd2 = sdev of second risky asset
corr = correlation between two risky assets
rf = risk free rate
Return of portfolio, R = R2*(1-w)+R1*w
Standard Dev of portfolio, SD = sqrt((sd1*w)^2+(sd2*(1-w))^2+2*w*(1-w)*corr*sd1*sd2)
Now I need to maximize R-rf while minimizing SD (that is, maximize my Sharpe ratio).
Let sigma be the covariance matrix. So my function to minimize is W^T*sigma*W where W is
the weights vector. Now simultaneously I need to maximize the excess return (R-rf)
and satisfy W^T*1=1. I don't know how to express that in the constraints function.
```
I am confused how to express these constraints in the form expected by [http://pbil.univ-lyon1.fr/library/quadprog/html/solve.QP.html](http://pbil.univ-lyon1.fr/library/quadprog/html/solve.QP.html). If you could also point me to a worked derivation of the final formula, that would be helpful as well, as I am unable to get to the final formula myself.
| Tangency portfolio in R | CC BY-SA 2.5 | null | 2010-08-15T03:10:05.873 | 2010-09-30T21:19:46.560 | 2010-09-30T21:19:46.560 | 930 | 862 | [
"r",
"finance",
"extreme-value"
]
|
1700 | 2 | null | 278 | 4 | null | What you're discovering is a degree of instability in either the algorithm or the data itself. The approach termed 'consensus' or 'ensemble' clustering is a way of dealing with the problem. The problem there is: given a collection of clusterings, find a "consensus" clustering that is in some sense the "average" of the clusterings.
There's a fair bit of work on this topic, and a good place to start is the [clustering ensembles paper by Strehl and Ghosh](http://strehl.com/download/strehl-jmlr02.pdf).
| null | CC BY-SA 2.5 | null | 2010-08-15T05:29:00.713 | 2010-08-15T05:29:00.713 | null | null | 139 | null |
1701 | 2 | null | 278 | 1 | null | Which flat-clustering algorithm are you using? It might also be the case that the different results arise not from your data but from the algorithm itself being non-deterministic (e.g., k-means with random initialization, or model-based clustering with EM or MCMC inference started from random initial values).
| null | CC BY-SA 2.5 | null | 2010-08-15T06:13:47.870 | 2010-08-15T06:13:47.870 | null | null | 881 | null |
1702 | 2 | null | 1668 | 2 | null | You've mentioned some ML techniques, so here are two quite nice books ("quite" because, unfortunately, my favorite one is in Polish):
[http://www.amazon.com/Machine-Learning-Algorithmic-Perspective-Recognition/dp/1420067184](http://rads.stackoverflow.com/amzn/click/1420067184)
[http://ai.stanford.edu/~nilsson/mlbook.html](http://ai.stanford.edu/~nilsson/mlbook.html)
For numeric stuff like random number generation:
[http://www.nr.com/](http://www.nr.com/)
| null | CC BY-SA 2.5 | null | 2010-08-15T08:59:45.287 | 2010-08-15T08:59:45.287 | null | null | null | null |
1706 | 2 | null | 1699 | 4 | null | I haven't looked at your code yet, but here are two pointers:
- Rmetrics has the tangencyPortfolio function in the fPortfolio package: http://help.rmetrics.org/fPortfolio/html/class-fPORTFOLIO.html
- Here is a solution from David Ruppert's "Statistics and Finance" book: http://www.stat.tamu.edu/~ljin/Finance/chapter5/Fig5_9.txt
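For what it's worth, here is a rough sketch of one common way to cast the problem in the question for `solve.QP` (all numbers are made up): minimise `w' sigma w` subject to `(mu - rf)' w = 1`, then rescale the solution so the weights sum to one.
```r
library(quadprog)
mu    <- c(0.08, 0.12)                     # mean returns of the two risky assets
sdev  <- c(0.15, 0.25); corr <- 0.3
sigma <- diag(sdev) %*% matrix(c(1, corr, corr, 1), 2) %*% diag(sdev)  # covariance matrix
rf    <- 0.03
sol   <- solve.QP(Dmat = 2 * sigma, dvec = rep(0, 2),
                  Amat = cbind(mu - rf), bvec = 1, meq = 1)
w_tan <- sol$solution / sum(sol$solution)  # tangency (maximum Sharpe ratio) weights
w_tan
```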
| null | CC BY-SA 2.5 | null | 2010-08-15T12:30:19.470 | 2010-08-15T12:30:19.470 | null | null | 5 | null |
1708 | 1 | null | null | 1 | 2265 | I have weights of SNP variation (output by the Eigenstrat program) for each SNP for the three main PCs. I wish to reduce my list of SNPs to those that show maximum differentiation between the three PCs. Can anyone help me with which statistical method to use to do this?
Say, if each PC describes the magnitude of variation, how can I find mutually exclusive rows, or rows that define the strongest differentiator of each PC?
e.g:
>
SNPName PC1-wt PC2-wt PC3-wt
SNP_1 -1.489 -0.029 -0.507
SNP_2 -1.446 -0.816 0.661
SNP_3 -1.416 0.338 1.631
SNP_4 -1.392 -1.452 0.062
SNP_5 -1.278 0.362 -1.006
SNP_6 -1.21 0.514 0.144
SNP_7 -1.119 -0.633 0.163
SNP_8 -1.112 -0.193 -0.256
SNP_9 -1.054 1.081 -1.519
SNP_10 -0.936 -1.052 -0.419
SNP_11 -0.861 0.381 -0.207
SNP_12 -0.662 0.852 -0.211
SNP_13 -0.503 -1.602 0.585
SNP_14 -0.417 0.529 -1.003
SNP_15 0.101 -0.85 -0.258
SNP_16 0.198 -0.435 -1.599
SNP_17 0.588 -1.292 -1.257
SNP_18 1.167 0.891 1.106
SNP_19 1.35 0.036 0.729
SNP_20 1.532 1.599 0.499
Any help regarding which test to perform and how to do it (which package) is very much appreciated.
| Variation in PCA weights | CC BY-SA 2.5 | null | 2010-08-15T14:10:53.410 | 2011-03-25T20:32:46.083 | 2011-03-25T20:32:46.083 | 930 | 952 | [
"pca",
"genetics"
]
|
1709 | 1 | 1717 | null | 6 | 2780 | I have collected positional data. To visualize the data, I'd like to draw a 'typical' outcome of an experiment.
The data comes from a few hundred experiments, where I identify a variable number of objects at different positions relative to the origin in 2D. Thus, I can calculate the average number of objects, as well as estimate the empirical distribution of the objects. A plot of the 'typical' outcome would then have the average (or possibly mode) number of objects, say, 5. What I'm not sure about is where to position these 5 objects.
To simplify the problem, assume that the data follows a 2D normal distribution. If I were just to randomly draw 5 points from the distribution, I might get one point at [3,3], which would be a very rare outcome, and would thus not reflect the 'typical', or 'average' outcome. However, just drawing 5 points at [0,0] would also not make sense - even though [0,0] is the average position of the objects, 5 overlapping points are not an 'average' outcome of the process, either.
In other words, how can I get a 'likely' draw from a distribution?
---
EDIT
It looks like I should mention why I don't want to use the usual methods (like a 2D smoothed histogram, or plotting all the many points) to look at the 2D distribution.
- The objects (which are vesicles (i.e. little spheres) inside cells) vary in number, size and position (distribution of the distance from the cell center, amount of clustering). I would like to display all these features in one graph. Since there are several hundred cells containing many vesicles each, it is not very useful to combine them all in a single plot. I am well aware that I could use a multipanel graph showing the distributions of all parameters, but this would be a lot less intuitive.
- I would like to show a 'typical' cell that shows all the salient features that characterize a specific phenotype. This way, if I want to image a particular phenotype in a mixed population, I know what kind of cell I'm looking for.
- I think such a plot would be a cool way to display a lot of information at once, and I just want to try.
Maybe it would be clearer if I said that I want to simulate a likely experimental result based on my measurements?
| How to draw a probable outcome from a distribution? | CC BY-SA 2.5 | null | 2010-08-15T14:30:23.597 | 2010-08-16T10:40:58.773 | 2010-08-15T20:04:03.057 | 198 | 198 | [
"distributions",
"data-visualization"
]
|
1710 | 2 | null | 36 | 3 | null | I've recently been to a conference and one of the speakers gave this very interesting example (although the point was to illustrate something else):
- Americans and English eat a lot of fat food. There is a high rate of cardiovascular diseases in US and UK.
- French eat a lot of fat food, but they have a low(er) rate of cardiovascular diseases.
- Americans and English drink a lot of alcohol. There is a high rate of cardiovascular diseases in US and UK.
- Italians drink a lot of alcohol but, again, they have a low(er) rate of cardiovascular diseases.
The conclusion? Eat and drink what you want.
And you have a higher chance of getting a heart attack if you speak English!
| null | CC BY-SA 2.5 | null | 2010-08-15T15:33:31.153 | 2010-08-15T15:33:31.153 | null | null | 582 | null |
1711 | 2 | null | 1709 | 0 | null | One thing that you could do is to plot the position of all your experiments in the 2D plane, one point for each object, maybe colored by experiment (if you have a lot of experiments you may just plot a random subset of them).
If there is a pattern in the position of the objects it should emerge when doing this.
Also, depending on what you are measuring, maybe it is not the absolute position that counts but the relative position of the objects. In that case you could rotate the positions around the origin so that, for each experiment, the first point always lies, for instance, on the x axis.
| null | CC BY-SA 2.5 | null | 2010-08-15T15:47:36.260 | 2010-08-15T15:47:36.260 | null | null | 582 | null |
1712 | 2 | null | 1709 | 0 | null | Maybe you could use a [smoothed scatterplot](http://addictedtor.free.fr/graphiques/RGraphGallery.php?graph=139)? It is the analogue of kernel density estimation, but in 2D.
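In base R, for example (toy data):
```r
x <- rnorm(10000); y <- x + rnorm(10000)
smoothScatter(x, y)   # 2D kernel-density-smoothed scatterplot
```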
| null | CC BY-SA 2.5 | null | 2010-08-15T16:05:02.070 | 2010-08-15T16:05:02.070 | null | null | null | null |
1713 | 1 | 1738 | null | 13 | 6223 | For some measurements, the results of an analysis are appropriately presented on the transformed scale. In most cases, however, it's desirable to present the results on the original scale of measurement (otherwise your work is more or less worthless).
For example, in the case of log-transformed data, a problem with interpretation on the original scale arises because the mean of the logged values is not the log of the mean. Taking the antilogarithm of the estimate of the mean on the log scale does not give an estimate of the mean on the original scale.
If, however, the log-transformed data have symmetric distributions, the following
relationships hold (since the log preserves ordering):
$$\text{Mean}[\log (Y)] = \text{Median}[\log (Y)] = \log[\text{Median} (Y)]$$
(the antilogarithm of the mean of the log values is the median on the original scale of measurements).
So I only can make inferences about the difference (or the ratio) of the medians on the original scale of measurement.
Two-sample t-tests and confidence intervals are most reliable if the populations are roughly normal with approximately equal standard deviations, so we may be tempted to use the `Box-Cox` transformation to make the normality assumption hold (I also think that it is a variance-stabilizing transformation).
However, if we apply t-tools to `Box-Cox` transformed data, we will get inferences about the difference in means of the transformed data. How can we interpret those on the original scale of measurement? (The mean of the transformed values is not the transformed mean.) In other words, taking the inverse transform of the estimate of the mean, on the transformed scale, does not give an estimate of the mean on the original scale.
Can I also make inferences only about the medians in this case? Is there a transformation that will allow me to go back to the means (on the original scale)?
This question was initially posted as a comment [here](https://stats.stackexchange.com/questions/1601/normalizing-transformation-options/1607#1607)
| Express answers in terms of original units, in Box-Cox transformed data | CC BY-SA 3.0 | null | 2010-08-15T17:01:49.620 | 2013-05-15T01:43:43.250 | 2017-04-13T12:44:27.570 | -1 | 339 | [
"data-transformation",
"confidence-interval",
"t-test",
"interpretation"
]
|
1714 | 2 | null | 196 | 3 | null | [Viewpoints](https://www.assembla.com/wiki/show/viewpoints) is useful for multi-variate data sets.
| null | CC BY-SA 3.0 | null | 2010-08-15T17:26:03.560 | 2012-11-08T22:14:04.253 | 2012-11-08T22:14:04.253 | 957 | 957 | null |
1715 | 2 | null | 726 | 9 | null | >
To understand God's Thoughts
we must study statistics
for these are the measure
of His purpose.
--Florence Nightingale
| null | CC BY-SA 2.5 | null | 2010-08-15T18:36:58.477 | 2010-12-03T04:05:19.333 | 2010-12-03T04:05:19.333 | 795 | null | null |
1716 | 2 | null | 1709 | 2 | null | To summarise (please correct me if I'm wrong):
- You have a set of points for a number of parameters/states.
- The points provide a joint distribution of the parameters/states
- You want to simulate from a model using some typical states.
The problem you have is that you can't write down a nice closed form density.
To tackle this problem you should use a particle filter. Suppose your model of a cell was this simple ODE:
\begin{equation}
\frac{dX(t)}{dt} = \lambda X(t)
\end{equation}
and your data consists of values of $\lambda$ and $X(0)$. Put this data in a matrix with two columns and $n$ rows, where $n$ is the number of points. Then
- Choose a row at random, to get particular values of $\lambda$ and $X(0)$
- Optional step: perturb your parameters with noise.
- Simulate from your model, in this case the ODE.
- Repeat as necessary.
The key point is that step 1 is a draw from the joint density of $\lambda$ and $X(0)$.
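As a rough R sketch (assuming `dat` is the n-by-2 matrix of observed $(\lambda, X(0))$ pairs; the jitter size and time grid are arbitrary):
```r
simulate_one <- function(dat, times = seq(0, 10, by = 0.1), jitter = 0.01) {
  i      <- sample(nrow(dat), 1)                     # step 1: draw a row from the joint empirical distribution
  lambda <- dat[i, 1] * (1 + rnorm(1, sd = jitter))  # step 2 (optional): perturb the parameters
  X0     <- dat[i, 2] * (1 + rnorm(1, sd = jitter))
  X0 * exp(lambda * times)                           # step 3: this ODE has the closed form X(t) = X(0) exp(lambda t)
}
paths <- replicate(100, simulate_one(dat))           # step 4: repeat as necessary
```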
---
This answer could be way off if I've misinterpreted what you mean about simulating from the model. Please correct me if I'm wrong.
| null | CC BY-SA 2.5 | null | 2010-08-15T18:48:27.970 | 2010-08-16T09:31:44.377 | 2010-08-16T09:31:44.377 | 8 | 8 | null |
1717 | 2 | null | 1709 | 4 | null | I also think that it's not clear what you want. But if you want a set of deterministically chosen points, so that they preserve the moments of the initial distribution, you can use the sigma point selection method that applies to the [unscented Kalman filter](http://en.wikipedia.org/wiki/UKF#Unscented_Kalman_filter).
Say that you want to select $2L+1$ points that fulfill those requirements. Then proceed in the following way:
$\mathcal{X}_0=\overline{x} \qquad w_0=\frac{\kappa}{L+\kappa} \qquad i=0$
$\mathcal{X}_i=\overline{x}+\left(\sqrt{(\:L+\kappa\:)\:\mathbf{P}_x}\right)_i \qquad w_i=\frac{1}{2(L+\kappa)} \qquad i=1, \dots,L$
$\mathcal{X}_i=\overline{x}-\left(\sqrt{(\:L+\kappa\:)\:\mathbf{P}_x}\right)_i \qquad w_i=\frac{1}{2(L+\kappa)} \qquad i=L+1, \dots,2L$
where $w_i$ the weight of the i-th point,
$\kappa=3-L$ (in case of Normally distributed data),
and $\left(\sqrt{(\:L+\kappa\:)\mathbf{P}_x}\right)_i$ is the i-th row (or column)* of the matrix square root of the weighted covariance $(\:L+\kappa\:)\:\mathbf{P}_x$ matrix (usually given by the [Cholesky decomposition](http://en.wikipedia.org/wiki/Cholesky_decomposition))
* If the matrix square root $\mathbf{A}$ gives the original by giving $\mathbf{A}^T\mathbf{A}$, then use the rows of $\mathbf{A}$. If it gives the original by giving $\mathbf{A}\mathbf{A}^T$, then use the columns of $\mathbf{A}$. The result of the matlab function [chol()](http://www.mathworks.com/access/helpdesk/help/techdoc/ref/chol.html) falls into the first category.
Here is a simple example using R
```
x <- rnorm(1000,5,2.5)
y <- rnorm(1000,2,1)
P <- cov(cbind(x,y))
V0 <- c(mean(x),mean(y))
n <- 2;k <- 1
A <- chol((n+k)*P) # matrix square root
points <- as.data.frame(sapply(1:(2*n),function(i) if (i<=n) A[i,] + V0 else -A[i-n,] + V0))
attach(points)
#mean (equals V0)
1/(2*(n+k))*(V1+V2+V3+V4) + k/(n+k)*V0
#covariance (equals P)
1/(2*(n+k)) * ((V1-V0) %*% t(V1-V0) + (V2-V0) %*% t(V2-V0) + (V3-V0) %*% t(V3-V0) + (V4-V0) %*% t(V4-V0))
```
| null | CC BY-SA 2.5 | null | 2010-08-15T19:14:50.803 | 2010-08-16T10:40:58.773 | 2010-08-16T10:40:58.773 | 339 | 339 | null |
1718 | 2 | null | 1308 | 18 | null | This is a counting problem: there are $b^n$ possible assignments of $b$ birthdays to $n$ people. Of those, let $q(k; n, b)$ be the number of assignments for which no birthday is shared by more than $k$ people but at least one birthday actually is shared by $k$ people. The probability we seek can be found by summing the $q(k;n,b)$ for appropriate values of $k$ and multiplying the result by $b^{-n}$.
These counts can be found exactly for values of $n$ less than several hundred. However, they will not follow any straightforward formula: we have to consider the patterns of ways in which birthdays can be assigned. I will illustrate this in lieu of providing a general demonstration. Let $n = 4$ (this is the smallest interesting situation). The possibilities are:
- Each person has a unique birthday; the code is {4}.
- Exactly two people share a birthday; the code is {2,1}.
- Two people have one birthday and the other two have another; the code is {0,2}.
- Three people share a birthday; the code is {1,0,1}.
- Four people share a birthday; the code is {0,0,0,1}.
Generally, the code $\{a[1], a[2], \ldots\}$ is a tuple of counts whose $k^\text{th}$ element stipulates how many distinct birthdates are shared by exactly $k$ people. Thus, in particular,
$$1\,a[1] + 2\,a[2] + \cdots + k\,a[k] + \cdots = n.$$
Note, even in this simple case, that there are two ways in which the maximum of two people per birthday is attained: one with the code $\{0,2\}$ and another with the code $\{2,1\}$.
We can directly count the number of possible birthday assignments corresponding to any given code. This number is the product of three terms. One is a multinomial coefficient; it counts the number of ways of partitioning $n$ people into $a[1]$ groups of $1$, $a[2]$ groups of $2$, and so on. Because the sequence of groups does not matter, we have to divide this multinomial coefficient by $a[1]!a[2]!\cdots$; its reciprocal is the second term. Finally, line up the groups and assign them each a birthday: there are $b$ candidates for the first group, $b-1$ for the second, and so on. These values have to be multiplied together, forming the third term. It is equal to the "factorial product" $b^{(a[1]+a[2]+\cdots)}$ where $b^{(m)}$ means $b(b-1)\cdots(b-m+1)$.
There is an obvious and fairly simple recursion relating the count for a pattern $\{a[1], \ldots, a[k]\}$ to the count for the pattern $\{a[1], \ldots, a[k-1]\}$. This enables rapid calculation of the counts for modest values of $n$. Specifically, $a[k]$ represents $a[k]$ birthdates shared by exactly $k$ people each. After these $a[k]$ groups of $k$ people have been drawn from the $n$ people, which can be done in $x$ distinct ways (say), it remains to count the number of ways of achieving the pattern $\{a[1], \ldots, a[k-1]\}$ among the remaining people. Multiplying this by $x$ gives the recursion.
I doubt there is a closed form formula for $q(k; n, b)$, which is obtained by summing the counts for all partitions of $n$ whose maximum term equals $k$. Let me offer some examples:
With $b=5$ (five possible birthdays) and $n=4$ (four people), we obtain
$$\eqalign{
q(1) &= q(1;4,5) &= 120 \\
q(2) &= 360 + 60 &= 420 \\
q(3) &&= 80 \\
q(4) &&= 5.\\
}$$
Whence, for example, the chance that three or more people out of four share the same "birthday" (out of $5$ possible dates) equals $(80 + 5)/625 = 0.136$.
As another example, take $b = 365$ and $n = 23$. Here are the values of $q( k;23,365)$ for the smallest $k$ (to six sig figs only):
$$\eqalign{
k=1: &0.49270 \\
k=2: &0.494592 \\
k=3: &0.0125308 \\
k=4: &0.000172844 \\
k=5: &1.80449E-6 \\
k=6: &1.48722E-8 \\
k=7: &9.92255E-11 \\
k=8: &5.45195E-13.
}$$
Using this technique, we can readily compute that there is about a 50% chance of (at least) a three-way birthday collision among 87 people, a 50% chance of a four-way collision among 187, and a 50% chance of a five-way collision among 310 people. That last calculation starts taking a few seconds (in Mathematica, anyway) because the number of partitions to consider starts getting large. For substantially larger $n$ we need an approximation.
One approximation is obtained by means of the Poisson distribution with expectation $n/b$, because we can view a birthday assignment as arising from $b$ almost (but not quite) independent Poisson variables each with expectation $n/b$: the variable for any given possible birthday describes how many of the $n$ people have that birthday. The distribution of the maximum is therefore approximately $F(k)^b$ where $F$ is the Poisson CDF. This is not a rigorous argument, so let's do a little testing. The approximation for $n = 23$, $b = 365$ gives
$$\eqalign{
k=1: &0.498783 \\
k=2: &0.496803\\
k=3: &0.014187\\
k=4: &0.000225115.
}$$
By comparing with the preceding you can see that the relative probabilities can be poor when they are small, but the absolute probabilities are reasonably well approximated to about 0.5%. Testing with a wide range of $n$ and $b$ suggests the approximation is usually about this good.
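In R, for instance, the approximation is a one-liner (illustrated for the $n = 23$, $b = 365$ case; compare with the values quoted above):
```r
n <- 23; b <- 365; k <- 1:4
# P(maximum multiplicity = k) is approximated by F(k)^b - F(k-1)^b, with F the Poisson(n/b) CDF
p_max_k <- ppois(k, lambda = n / b)^b - ppois(k - 1, lambda = n / b)^b
round(p_max_k, 6)
```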
To wrap up, let's consider the original question: take $n = 10,000$ (the number of observations) and $b = 1\,000\,000$ (the number of possible "structures," approximately). The approximate distribution for the maximum number of "shared birthdays" is
$$\eqalign{
k=1: &0 \\
k=2: &0.8475+\\
k=3: &0.1520+\\
k=4: &0.0004+\\
k\gt 4: &\lt 1E-6.
}$$
(This is a fast calculation.) Clearly, observing one structure 10 times out of 10,000 would be highly significant. Because $n$ and $b$ are both large, I expect the approximation to work quite well here.
Incidentally, as Shane intimated, simulations can provide useful checks. A Mathematica simulation is created with a function like
`simulate[n_, b_] := Max[Last[Transpose[Tally[RandomInteger[{0, b - 1}, n]]]]];`
which is then iterated and summarized, as in this example which runs 10,000 iterations of the $n = 10000$, $b = 1\,000\,000$ case:
`Tally[Table[simulate[10000, 1000000], {n, 1, 10000}]] // TableForm`
Its output is
>
2 8503
3 1493
4 4
These frequencies closely agree with those predicted by the Poisson approximation.
| null | CC BY-SA 3.0 | null | 2010-08-15T22:03:07.387 | 2016-10-14T20:56:33.213 | 2016-10-14T20:56:33.213 | 919 | 919 | null |
1719 | 1 | null | null | 8 | 3168 | Well, I'm an engineer by day. Although most of my work revolves around modeling, we generally do pretty basic stuff. An "advanced" model would be a Monte Carlo simulation validated using R^2 tests.
Currently, in my field, there is a lot of research using logistic regression and Bayesian analysis.
My question is, which courses would you recommend someone to take from [MIT's open course site](http://ocw.mit.edu/courses/mathematics/) or any other sites, for someone who learns best by video/audio first, and reading second?
What I'd like to learn is the following:
- Be able to understand the models and when to employ them
- Be able to take in field data (which is generated once and cannot be regenerated) and design and perform experiments
- Be able to understand the results, look at them, and figure out if something is off ("show stopper" or "outliers") or if everything is fine and dandy
- Be able to validate and calibrate the model against actual "as-built" results
- Be able to forecast the results using appropriate sensitivity analysis
- Be able to forecast / "plug" missing data
- Be able to write journal papers related to my field
My field in a nutshell: transportation demand modeling for passenger vehicles, using either the generic four-step model or socio-economic activity/tour-based models such as PECAS or UrbanSim
| Video/Audio online material for getting into Bayesian analysis and logistic-regressions | CC BY-SA 2.5 | null | 2010-08-15T22:51:09.993 | 2012-08-03T07:55:05.020 | 2010-08-15T23:22:39.817 | 159 | 59 | [
"bayesian",
"logistic"
]
|
1720 | 2 | null | 1713 | 11 | null | If the Box-Cox transformation yields a symmetric distribution, then the mean of the transformed data is back-transformed to the median on the original scale. This is true for any monotonic transformation, including the Box-Cox transformations, the IHS transformations, etc. So inferences about the means on the transformed data correspond to inferences about the median on the original scale.
As the original data were skewed (or you wouldn't have used a Box-Cox transformation in the first place), why do you want inferences about the means? I would have thought working with medians would make more sense in this situation. I don't understand why this is seen as a "problem with interpretation on the original scale".
| null | CC BY-SA 2.5 | null | 2010-08-15T23:35:52.447 | 2010-08-15T23:35:52.447 | null | null | 159 | null |
1721 | 2 | null | 1095 | 0 | null | So it turns out the first assumption was actually correct: U is indeed the first k eigenvectors of C, that we calculate from G by means of the eigendecompposition $(X_tVD^{-\frac12})D^{1/2} = X_tV$.
| null | CC BY-SA 2.5 | null | 2010-08-15T23:39:20.410 | 2010-08-16T21:27:59.097 | 2010-08-16T21:27:59.097 | 282 | 282 | null |
1722 | 2 | null | 596 | 2 | null | [This paper](https://web.archive.org/web/20151223091555/http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.120.3681&rep=rep1&type=pdf) is very interesting, has good results, and is easy to apply to discriminative models.
And the term is semi-supervised learning, not unsupervised learning.
| null | CC BY-SA 4.0 | null | 2010-08-15T23:43:45.330 | 2022-11-24T03:18:28.493 | 2022-11-24T03:18:28.493 | 362671 | 959 | null |
1723 | 2 | null | 1645 | 13 | null | For normality, actual Shapiro-Wilk has good power in fairly small samples.
The main competitor in studies that I have seen is the more general Anderson-Darling, which does fairly well, but I wouldn't say it was better. If you can clarify what alternatives interest you, possibly a better statistic would be more obvious. [edit: if you estimate parameters, the A-D test should be adjusted for that.]
[I strongly recommend against considering Jarque-Bera in small samples (it is probably better known as Bowman-Shenton in statistical circles - they studied the small-sample distribution). The asymptotic joint distribution of skewness and kurtosis is nothing like the small-sample distribution - in the same way a banana doesn't look much like an orange. It also has very low power against some interesting alternatives - for example it has low power to pick up a symmetric bimodal distribution that has kurtosis close to that of a normal distribution.]
Frequently people test goodness of fit for what turn out to be not-particularly-good reasons, or they're answering a question other than the one that they actually want to answer.
For example, you almost certainly already know your data aren't really normal (not exactly), so there's no point in trying to answer a question you know the answer to - and the hypothesis test doesn't actually answer it anyway.
Given you know you don't have exact normality already, your hypothesis test of normality is really giving you an answer to a question closer to "is my sample size large enough to pick up the amount of non-normality that I have", while the real question you're interested in answering is usually closer to "what is the impact of this non-normality on these other things I'm interested in?". The hypothesis test is measuring sample size, while the question you're interested in answering is not very dependent on sample size.
There are times when testing of normality makes some sense, but those situations almost never occur with small samples.
Why are you testing normality?
| null | CC BY-SA 4.0 | null | 2010-08-15T23:59:19.030 | 2019-10-14T01:50:09.690 | 2019-10-14T01:50:09.690 | 805 | 805 | null |
1724 | 2 | null | 1713 | 6 | null | If you want to do inference about means on the original scale, you could consider using inference that doesn't use a normality assumption.
Take care, however. Simply plugging through a straight comparison of means via, say, resampling (either permutation tests or bootstrapping) when the two samples have different variances may be a problem if your analysis assumes the variances to be equal (and equal variances on the transformed scale will be different variances on the original scale if the means differ). Such techniques don't avoid the necessity to think about what you're doing.
Another approach to consider if you're more interested in estimation or prediction than testing is to use a Taylor expansion of the transformed variables to compute the approximate mean and variance after transforming back - where in the usual Taylor expansion you'd write $f(x+h)$, you now write $t[\mu + (Y-\mu)]$ where $Y$ is a random variable with mean $\mu$ and variance $\sigma^2$, which you're about to transform back using $t()$.
If you take expectations, the second term drops out, and people usually take just the first and third terms (where the third represents an approximation to the bias in just transforming the mean); further if you take the variance of the expansion to the second term, the first term and the first covariance terms drop out - because $t(\mu)$ is a constant - leaving you with a single-term approximation for the variance.
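Written out, the approximations just described are the usual delta-method expressions:
$$E[\,t(Y)\,] \approx t(\mu) + \tfrac{1}{2}\,t''(\mu)\,\sigma^2, \qquad \operatorname{Var}[\,t(Y)\,] \approx \left[t'(\mu)\right]^2 \sigma^2.$$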
--
The easiest case is when you have normality on the log-scale, and hence a lognormal on the original scale. If your variance is known (which happens very rarely at best), you can construct lognormal CIs and PIs on the original scale, and you can give a predicted mean from the mean of the distribution of the relevant quantity.
If you're estimating both mean and variance on the log-scale, you can construct log-$t$ intervals (prediction intervals for an observation, say), but your original-scale log-$t$ doesn't have any moments. So the mean of a prediction just doesn't exist.
You need to think very carefully about precisely what question you're trying to answer.
| null | CC BY-SA 3.0 | null | 2010-08-16T00:30:59.557 | 2013-05-15T01:38:09.353 | 2013-05-15T01:38:09.353 | 805 | 805 | null |
1725 | 2 | null | 1719 | 2 | null | I've only had a little look at this lecture series on Machine Learning, but it looks good.
- http://academicearth.org/courses/machine-learning
[Lecture 11](http://freevideolectures.com/Course/2257/Machine-Learning/11) covers Bayesian Statistics and Regularization.
| null | CC BY-SA 2.5 | null | 2010-08-16T02:38:20.447 | 2010-08-16T02:38:20.447 | null | null | 183 | null |
1726 | 2 | null | 726 | 44 | null | >
Do not trust any statistics you did not fake yourself.
-- Winston Churchill
| null | CC BY-SA 2.5 | null | 2010-08-16T05:45:29.743 | 2010-12-03T04:01:19.320 | 2010-12-03T04:01:19.320 | 795 | 128 | null |
1727 | 2 | null | 196 | 3 | null | Python's [matplotlib](http://matplotlib.sourceforge.net/)
| null | CC BY-SA 2.5 | null | 2010-08-16T06:19:27.897 | 2010-08-16T06:19:27.897 | null | null | 961 | null |
1728 | 2 | null | 1719 | 4 | null | I would go straight to [VideoLectures.net](http://videolectures.net/Top/#o=top&t=vl). This is by far the best source--whether free or paid--I have found for very high-quality (both w/r/t the video quality and w/r/t the presentation content) video lectures and tutorials on statistics, forecasting, and machine learning. The target audience for these video lectures ranges from beginner (some lectures are specifically tagged as "tutorials") to expert; most of them seem to be somewhere in the middle.
All of the lectures and tutorials are taught to highly experienced professionals and academics, and in many instances the lecturer is the leading authority on the topic he/she is lecturing on. The site is also 100% free.
The one disadvantage is that you cannot download the lectures and store them in, e.g., iTunes; however, nearly every lecture has a set of slides which you can download (or, conveniently, you can view them online as you watch the presentation).
YouTube might have more, but even if you search Y/T through a specific channel, I am sure the signal-to-noise ratio is far higher on VideoLectures.net--every lecture I've viewed there has been outstanding, and if you scan the viewer reviews, you'll find that's the consensus opinion of the entire collection.
A few that I've watched and can recommend highly:
- Basics of Probability and Statistics
- Introduction to Machine Learning
- Gaussian Process Basics
- Graphical Models
- k-Nearest Neighbor Models
| null | CC BY-SA 2.5 | null | 2010-08-16T07:09:25.387 | 2010-08-16T07:14:37.997 | 2010-08-16T07:14:37.997 | 438 | 438 | null |
1729 | 1 | 1730 | null | 6 | 4088 | I am trying to format the output from pairwise.t.test into LaTeX, but have not found a way of doing this. Has anyone got any suggestions?
EDIT: As this is a one-time only report where I do need to customize the variable names, and row-/column headings, I was hoping to avoid using Sweave.
| Export/format output from pairwise.t.test to LaTeX | CC BY-SA 2.5 | null | 2010-08-16T08:11:26.317 | 2017-05-18T21:16:24.690 | 2017-05-18T21:16:24.690 | 28666 | 913 | [
"r",
"t-test"
]
|
1730 | 2 | null | 1729 | 7 | null | Does this help?
```
> library(xtable)
> attach(airquality)
> res <- pairwise.t.test(Ozone, Month)
> xtable(res$p.value, caption=res$method)
% latex table generated in R 2.9.2 by xtable 1.5-6 package
% Mon Aug 16 04:24:21 2010
\begin{table}[ht]
\begin{center}
\begin{tabular}{rrrrr}
\hline
& 5 & 6 & 7 & 8 \\
\hline
6 & 1.00 & & & \\
7 & 0.00 & 0.05 & & \\
8 & 0.00 & 0.05 & 1.00 & \\
9 & 1.00 & 1.00 & 0.00 & 0.00 \\
\hline
\end{tabular}
\caption{t tests with pooled SD}
\end{center}
\end{table}
```
| null | CC BY-SA 2.5 | null | 2010-08-16T08:27:13.563 | 2010-08-16T08:27:13.563 | null | null | 251 | null |