source_id (int64) · question (string) · response (string) · metadata (dict)
5,354
I've got some data about airline flights (in a data frame called flights) and I would like to see if the flight time has any effect on the probability of a significantly delayed arrival (meaning 10 or more minutes). I figured I'd use logistic regression, with the flight time as the predictor and whether or not each flight was significantly delayed (a bunch of Bernoullis) as the response. I used the following code... flights$BigDelay <- flights$ArrDelay >= 10 delay.model <- glm(BigDelay ~ ArrDelay, data=flights, family=binomial(link="logit")) summary(delay.model) ...but got the following output. > flights$BigDelay <- flights$ArrDelay >= 10 > delay.model <- glm(BigDelay ~ ArrDelay, data=flights, family=binomial(link="logit")) Warning messages: 1: In glm.fit(x = X, y = Y, weights = weights, start = start, etastart = etastart, : algorithm did not converge 2: In glm.fit(x = X, y = Y, weights = weights, start = start, etastart = etastart, : fitted probabilities numerically 0 or 1 occurred > summary(delay.model) Call: glm(formula = BigDelay ~ ArrDelay, family = binomial(link = "logit"), data = flights) Deviance Residuals: Min 1Q Median 3Q Max -3.843e-04 -2.107e-08 -2.107e-08 2.107e-08 3.814e-04 Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) -312.14 170.26 -1.833 0.0668 . ArrDelay 32.86 17.92 1.833 0.0668 . --- Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 (Dispersion parameter for binomial family taken to be 1) Null deviance: 2.8375e+06 on 2291292 degrees of freedom Residual deviance: 9.1675e-03 on 2291291 degrees of freedom AIC: 4.0092 Number of Fisher Scoring iterations: 25 What does it mean that the algorithm did not converge? I thought it might be because the BigDelay values were TRUE and FALSE instead of 0 and 1, but I got the same error after I converted everything. Any ideas?
glm() uses an iterative re-weighted least squares algorithm. The algorithm hit the maximum number of allowed iterations before signalling convergence. The default, documented in ?glm.control is 25. You pass control parameters as a list in the glm call: delay.model <- glm(BigDelay ~ ArrDelay, data=flights, family=binomial, control = list(maxit = 50)) As @Conjugate Prior says, you seem to be predicting the response with the data used to generate it. You have complete separation as any ArrDelay < 10 will predict FALSE and any ArrDelay >= 10 will predict TRUE . The other warning message tells you that the fitted probabilities for some observations were effectively 0 or 1 and that is a good indicator you have something wrong with the model. The two warnings can go hand in hand. The likelihood function can be quite flat when some $\hat{\beta}_i$ get large, as in your example. If you allow more iterations, the model coefficients will diverge further if you have a separation issue.
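A minimal sketch (with simulated data, not the poster's flights data frame) that reproduces the situation and shows that raising maxit does not give a meaningful fit when the response is completely separated:
set.seed(42)
ArrDelay <- rnorm(1000, mean = 5, sd = 20)
BigDelay <- ArrDelay >= 10                      # response is a deterministic function of the predictor
fit <- glm(BigDelay ~ ArrDelay, family = binomial,
           control = glm.control(maxit = 100))  # the 0-or-1 fitted probabilities warning persists
coef(fit)                                        # enormous slope: complete separation, not a numerical hiccup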
{ "source": [ "https://stats.stackexchange.com/questions/5354", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/1973/" ] }
5,450
In a regression, the interaction term wipes out both related direct effects. Do I drop the interaction or report the outcome? The interaction was not part of the original hypothesis.
I think this one is tricky; as you hint, there's 'moral hazard' here: if you hadn't looked at the interaction at all, you'd be free and clear, but now that you have there is a suspicion of data-dredging if you drop it. The key is probably a change in the meaning of your effects when you go from the main-effects-only to the interaction model. What you get for the 'main effects' depends very much on how your treatments and contrasts are coded. In R, the default is treatment contrasts with the first factor levels (the ones with the first names in alphabetical order unless you have gone out of your way to code them differently) as the baseline levels. Say (for simplicity) that you have two levels, 'control' and 'trt', for each factor. Without the interaction, the meaning of the 'v1.trt' parameter (assuming treatment contrasts as is the default in R) is "average difference between 'v1.control' and 'v1.trt' group"; the meaning of the 'v2.trt' parameter is "average difference between 'v2.control' and 'v2.trt'". With the interaction, 'v1.trt' is the average difference between 'v1.control' and 'v1.trt' in the 'v2.control' group , and similarly 'v2.trt' is the average difference between v2 groups in the 'v1.control' group. Thus, if you have fairly small treatment effects in each of the control groups, but a large effect in the treatment groups, you could easily see what you're seeing. The only way I can see this happening without a significant interaction term, however, is if all the effects are fairly weak (so that what you really mean by "the effect disappeared" is that you went from p=0.06 to p=0.04, across the magic significance line). Another possibility is that you are 'using up too many degrees of freedom' -- that is, the parameter estimates don't actually change that much, but the residual error term is sufficiently inflated by having to estimate another 4 [ = (2-1)*(5-1)] parameters that your significant terms become non-significant. Again, I would only expect this with a small data set/relatively weak effects. One possible solution is to move to sum contrasts, although this is also delicate -- you have to be convinced that 'average effect' is meaningful in your case. The very best thing is to plot your data and to look at the coefficients and understand what's happening in terms of the estimated parameters. Hope that helps.
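A small simulated illustration (my own, with hypothetical two-level factors v1 and v2) of how the meaning of the 'main effect' coefficients shifts once the interaction is added under R's default treatment contrasts:
set.seed(1)
d <- expand.grid(v1 = c("control", "trt"), v2 = c("control", "trt"), rep = 1:30)
d$y <- rnorm(nrow(d)) + ifelse(d$v1 == "trt" & d$v2 == "trt", 1.5, 0)  # v1 only matters when v2 is 'trt'
coef(lm(y ~ v1 + v2, data = d))   # additive model: v1trt is an averaged difference
coef(lm(y ~ v1 * v2, data = d))   # with interaction: v1trt is the v1 effect within v2 == 'control'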
{ "source": [ "https://stats.stackexchange.com/questions/5450", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2367/" ] }
5,465
I am looking for some statistics (and probability, I guess) interview questions, from the most basic through the more advanced. Answers are not necessary (although links to specific questions on this site would do well).
Not sure what the job is, but I think "Explain x to a novice" would probably be good: a) because they will probably need to do this in the job, and b) because it's a good test of understanding, I reckon.
{ "source": [ "https://stats.stackexchange.com/questions/5465", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/795/" ] }
5,591
Roughly speaking, a p-value gives the probability of the observed outcome of an experiment given the hypothesis (model). Having this probability (p-value) we want to judge our hypothesis (how likely it is). But wouldn't it be more natural to calculate the probability of the hypothesis given the observed outcome? In more detail: we have a coin. We flip it 20 times and we get 14 heads (14 out of 20 is what I call the "outcome of the experiment"). Now, our hypothesis is that the coin is fair (the probabilities of head and tail are equal to each other). Now we calculate the p-value, which is the probability of getting 14 or more heads in 20 flips of the coin. OK, now we have this probability (0.058) and we want to use this probability to judge our model (how likely is it that we have a fair coin). But if we want to estimate the probability of the model, why don't we calculate the probability of the model given the experiment? Why do we calculate the probability of the experiment given the model (p-value)?
Computing the probability that the hypothesis is correct doesn't fit well within the frequentist definition of a probability (a long run frequency), which was adopted to avoid the supposed subjectivity of the Bayesian definition of a probability. The truth of a particular hypothesis is not a random variable, it is either true or it isn't and has no long run frequency. It is indeed more natural to be interested in the probability of the truth of the hypothesis, which is IMHO why p-values are often misinterpreted as the probability that the null hypothesis is true. Part of the difficulty is that from Bayes' rule, we know that to compute the posterior probability that a hypothesis is true, you need to start with a prior probability that the hypothesis is true. A Bayesian would compute the probability that the hypothesis is true, given the data (and his/her prior belief). Essentially, deciding between frequentist and Bayesian approaches is a choice of whether the supposed subjectivity of the Bayesian approach is more abhorrent than the fact that the frequentist approach generally does not give a direct answer to the question you actually want to ask - but there is room for both. In the case of asking whether a coin is fair, i.e. the probability of a head is equal to the probability of a tail, we also have an example of a hypothesis that we know in the real world is almost certainly false right from the outset. The two sides of the coin are non-symmetric, so we should expect a slight asymmetry in the probabilities of heads and tails, so if the coin "passes" the test, it just means we don't have enough observations to be able to conclude what we already know to be true - that the coin is very slightly biased!
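A short R illustration (my own addition, not part of the answer) contrasting the two quantities for the 14-heads-in-20 example: the one-sided p-value versus a Bayesian posterior probability under a flat Beta(1, 1) prior on the head probability.
binom.test(14, 20, p = 0.5, alternative = "greater")$p.value  # ~0.058, P(data at least this extreme | fair coin)
1 - pbeta(0.5, 1 + 14, 1 + 6)                                 # roughly 0.96, posterior P(head probability > 0.5 | data)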
{ "source": [ "https://stats.stackexchange.com/questions/5591", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2407/" ] }
5,680
I have analyzed an experiment with a repeated measures ANOVA. The ANOVA is a 3x2x2x2x3 with 2 between-subject factors and 3 within (N = 189). Error rate is the dependent variable. The distribution of error rates has a skew of 3.64 and a kurtosis of 15.75. The skew and kurtosis are the result of 90% of the error rate means being 0. Reading some of the previous threads on normality tests here has me a little confused. I thought that if you had data that was not normally distributed it was in your best interest to transform it if possible, but it seems that a lot of people think analyzing non-normal data with an ANOVA or a T-test is acceptable. Can I trust the results of the ANOVA? (FYI, In the future I intend to analyze this type of data in R with mixed-models with a binomial distribution)
Like other parametric tests, the analysis of variance assumes that the data fit the normal distribution. If your measurement variable is not normally distributed, you may be increasing your chance of a false positive result if you analyze the data with an anova or other test that assumes normality. Fortunately, an anova is not very sensitive to moderate deviations from normality; simulation studies, using a variety of non-normal distributions, have shown that the false positive rate is not affected very much by this violation of the assumption (Glass et al. 1972, Harwell et al. 1992, Lix et al. 1996). This is because when you take a large number of random samples from a population, the means of those samples are approximately normally distributed even when the population is not normal. It is possible to test the goodness-of-fit of a data set to the normal distribution. I do not suggest that you do this, because many data sets that are significantly non-normal would be perfectly appropriate for an anova. Instead, if you have a large enough data set, I suggest you just look at the frequency histogram. If it looks more-or-less normal, go ahead and perform an anova. If it looks like a normal distribution that has been pushed to one side, like the sulphate data above, you should try different data transformations and see if any of them make the histogram look more normal. If that doesn't work, and the data still look severely non-normal, it's probably still okay to analyze the data using an anova. However, you may want to analyze it using a non-parametric test. Just about every parametric statistical test has a non-parametric substitute, such as the Kruskal–Wallis test instead of a one-way anova, Wilcoxon signed-rank test instead of a paired t-test, and Spearman rank correlation instead of linear regression. These non-parametric tests do not assume that the data fit the normal distribution. They do assume that the data in different groups have the same distribution as each other, however; if different groups have different shaped distributions (for example, one is skewed to the left, another is skewed to the right), a non-parametric test may not be any better than a parametric one. References Glass, G.V., P.D. Peckham, and J.R. Sanders. 1972. Consequences of failure to meet assumptions underlying fixed effects analyses of variance and covariance. Rev. Educ. Res. 42: 237-288. Harwell, M.R., E.N. Rubinstein, W.S. Hayes, and C.C. Olds. 1992. Summarizing Monte Carlo results in methodological research: the one- and two-factor fixed effects ANOVA cases. J. Educ. Stat. 17: 315-339. Lix, L.M., J.C. Keselman, and H.J. Keselman. 1996. Consequences of assumption violations revisited: A quantitative review of alternatives to the one-way analysis of variance F test. Rev. Educ. Res. 66: 579-619.
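For completeness, a hedged sketch (with hypothetical variable names, not the poster's actual data) of the two routes touched on here: a rank-based test, and the binomial mixed model the asker plans to use, where errors are counted out of the number of trials per cell.
kruskal.test(error_rate ~ treatment, data = d)    # non-parametric one-way comparison
library(lme4)
glmer(cbind(errors, n_trials - errors) ~ A * B + (1 | subject),
      family = binomial, data = d)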
{ "source": [ "https://stats.stackexchange.com/questions/5680", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2322/" ] }
5,686
I would like to know why some languages like R have both NA and NaN. What are the differences, or are they the same thing? Is NA really needed?
?is.nan ?is.na ?NA ?NaN should answer your question. But, in short: NaN stands for Not a Number and is what results from undefined numerical operations such as $\frac{0}{0}$. NA is generally interpreted as a missing value and has various forms - NA_integer_, NA_real_, etc. Therefore, NaN $\neq$ NA and there is a need for both NaN and NA.
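A quick base-R illustration of the distinction (my own addition):
is.nan(0/0)                    # TRUE  - undefined arithmetic result
is.na(0/0)                     # TRUE  - a NaN is also treated as missing
is.nan(NA)                     # FALSE - a missing value is not NaN
mean(c(1, NA))                 # NA, unless we ask to drop missing values
mean(c(1, NA), na.rm = TRUE)   # 1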
{ "source": [ "https://stats.stackexchange.com/questions/5686", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2479/" ] }
5,727
I have a question on calculating James-Stein Shrinkage factor in the 1977 Scientific American paper by Bradley Efron and Carl Morris, "Stein's Paradox in Statistics" . I gathered the data for the baseball players and it is given below: Name, avg45, avgSeason Clemente, 0.400, 0.346 Robinson, 0.378, 0.298 Howard, 0.356, 0.276 Johnstone, 0.333, 0.222 Berry, 0.311, 0.273 Spencer, 0.311, 0.270 Kessinger, 0.289, 0.263 Alvarado, 0.267, 0.210 Santo, 0.244, 0.269 Swoboda, 0.244, 0.230 Unser, 0.222, 0.264 Williams, 0.222, 0.256 Scott, 0.222, 0.303 Petrocelli, 0.222, 0.264 Rodriguez, 0.222, 0.226 Campaneris, 0.200, 0.285 Munson, 0.178, 0.316 Alvis, 0.156, 0.200 avg45 is the average after $45$ at bats and is denoted as $y$ in the article. avgSeason is the end of the season average. The James-Stein estimator for the average ($z$) is given by $$z = \bar{y} + c (y-\bar{y})$$ and the the shrinkage factor $c$ is given by (page 5 of the Scientific American 1977 article) $$ c = 1 - \frac{(k-3) \sigma^2} {\sum (y - \bar{y})^2}, $$ where $k$ is the number of unknown means. Here there are 18 players so $k = 18$. I can calculate $\sum (y - \bar{y})^2$ using avg45 values. But I don't know how to calculate $\sigma^2$. The authors say $c = 0.212$ for the given data set. I tried using both $\sigma_{x}^2$ and $\sigma_{y}^2$ for $\sigma^2$ but they don't give the correct answer of $c = 0.212$ Can anybody be kind enough to let me know how to calculate $\sigma^2$ for this data set?
The parameter $\sigma^2$ is the (unknown) common variance of the vector components, each of which we assume are normally distributed. For the baseball data we have $45 \cdot Y_i \sim \mathsf{binom}(45,p_i)$, so the normal approximation to the binomial distribution gives (taking $ \hat{p_{i}} = Y_{i}$) $$ \hat{p}_{i}\approx \mathsf{norm}(\mathtt{mean}=p_{i},\mathtt{var} = p_{i}(1-p_{i})/45). $$ Obviously in this case the variances are not equal, yet if they had been equal to a common value then we could estimate it with the pooled estimator $$ \hat{\sigma}^2 = \frac{\hat{p}(1 - \hat{p})}{45}, $$ where $\hat{p}$ is the grand mean $$ \hat{p} = \frac{1}{18\cdot 45}\sum_{i = 1}^{18}45\cdot{Y_{i}}=\overline{Y}. $$ It looks as though this is what Efron and Morris have done (in the 1977 paper). You can check this with the following R code. Here are the data: y <- c(0.4, 0.378, 0.356, 0.333, 0.311, 0.311, 0.289, 0.267, 0.244, 0.244, 0.222, 0.222, 0.222, 0.222, 0.222, 0.2, 0.178, 0.156) and here is the estimate for $\sigma^2$: s2 <- mean(y)*(1 - mean(y))/45 which is $\hat{\sigma}^2 \approx 0.004332392$. The shrinkage factor in the paper is then 1 - 15*s2/(17*var(y)) which gives $c \approx 0.2123905$. Note that in the second paper they made a transformation to sidestep the variance problem (as @Wolfgang said). Also note in the 1975 paper they used $k - 2$ while in the 1977 paper they used $k - 3$.
{ "source": [ "https://stats.stackexchange.com/questions/5727", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2513/" ] }
5,747
I know empirically that is the case. I have just developed models that run into this conundrum. I also suspect it is not necessarily a yes/no answer. I mean by that if both A and B are correlated with C, this may have some implication regarding the correlation between A and B. But, this implication may be weak. It may be just a sign direction and nothing else. Here is what I mean... Let's say A and B both have a 0.5 correlation with C. Given that, the correlation between A and B could well be 1.0. I think it also could be 0.5 or even lower. But, I think it is unlikely that it would be negative. Do you agree with that? Also, is there an implication if you are considering the standard Pearson Correlation Coefficient or instead the Spearman (rank) Correlation Coefficient? My recent empirical observations were associated with the Spearman Correlation Coefficient.
Because correlation is a mathematical property of multivariate distributions, some insight can be had purely through calculations, regardless of the statistical genesis of those distributions. For the Pearson correlations , consider multinormal variables $X$, $Y$, $Z$. These are useful to work with because any non-negative definite matrix actually is the covariance matrix of some multinormal distributions, thereby resolving the existence question. If we stick to matrices with $1$ on the diagonal, the off-diagonal entries of the covariance matrix will be their correlations. Writing the correlation of $X$ and $Y$ as $\rho$, the correlation of $Y$ and $Z$ as $\tau$, and the correlation of $X$ and $Z$ as $\sigma$, we compute that $1 + 2 \rho \sigma \tau - \left(\rho^2 + \sigma^2 + \tau^2\right) \ge 0$ (because this is the determinant of the correlation matrix and it cannot be negative). When $\sigma = 0$ this implies that $\rho^2 + \tau^2 \le 1$. To put it another way: when both $\rho$ and $\tau$ are large in magnitude, $X$ and $Z$ must have nonzero correlation. If $\rho^2 = \tau^2 = 1/2$, then any non-negative value of $\sigma$ (between $0$ and $1$ of course) is possible. When $\rho^2 + \tau^2 \lt 1$, negative values of $\sigma$ are allowable. For example, when $\rho = \tau = 1/2$, $\sigma$ can be anywhere between $-1/2$ and $1$. These considerations imply there are indeed some constraints on the mutual correlations. The constraints (which depend only on the non-negative definiteness of the correlation matrix, not on the actual distributions of the variables) can be tightened depending on assumptions about the univariate distributions. For instance, it's easy to see (and to prove) that when the distributions of $X$ and $Y$ are not in the same location-scale family, their correlations must be strictly less than $1$ in size. (Proof: a correlation of $\pm 1$ implies $X$ and $Y$ are linearly related a.s.) As far as Spearman rank correlations go, consider three trivariate observations $(1,1,2)$, $(2,3,1)$, and $(3,2,3)$ of $(X, Y, Z)$. Their mutual rank correlations are $1/2$, $1/2$, and $-1/2$. Thus even the sign of the rank correlation of $Y$ and $Z$ can be the reverse of the signs of the correlations of $X$ and $Y$ and $X$ and $Z$.
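A quick numerical check of the determinant constraint described above (my own addition; with $\rho = \tau = 1/2$ the admissible range for $\sigma$ is exactly $[-1/2, 1]$):
corr_ok <- function(rho, sigma, tau) {
  R <- matrix(c(1, rho, sigma,
                rho, 1, tau,
                sigma, tau, 1), nrow = 3)
  min(eigen(R, symmetric = TRUE, only.values = TRUE)$values) >= -1e-12
}
corr_ok(rho = 0.5, sigma = -0.5, tau = 0.5)   # TRUE, boundary case
corr_ok(rho = 0.5, sigma = -0.6, tau = 0.5)   # FALSE, not a valid correlation matrix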
{ "source": [ "https://stats.stackexchange.com/questions/5747", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/1329/" ] }
5,750
I have several hundred measurements. Now, I am considering utilizing some kind of software to correlate every measure with every measure. This means that there are thousands of correlations. Among these there should (statistically) be some high correlations, even if the data are completely random (each measure has only about 100 data points). When I find a correlation, how do I take into account how hard I looked for it? I am not at a high level in statistics, so please bear with me.
This is an excellent question, worthy of someone who is a clear statistical thinker, because it recognizes a subtle but important aspect of multiple testing. There are standard methods to adjust the p-values of multiple correlation coefficients (or, equivalently, to broaden their confidence intervals), such as the Bonferroni and Sidak methods ( q.v. ). However, these are far too conservative with large correlation matrices due to the inherent mathematical relationships that must hold among correlation coefficients in general. (For some examples of such relationships see the recent question and the ensuing thread .) One of the best approaches for dealing with this situation is to conduct a permutation (or resampling) test . It's easy to do this with correlations: in each iteration of the test, just randomly scramble the order of values of each of the fields (thereby destroying any inherent correlation) and recompute the full correlation matrix. Do this for several thousand iterations (or more), then summarize the distributions of the entries of the correlation matrix by, for instance, giving their 97.5 and 2.5 percentiles: these would serve as mutual symmetric two-sided 95% confidence intervals under the null hypothesis of no correlation. (The first time you do this with a large number of variables you will be astonished at how high some of the correlation coefficients can be even when there is no inherent correlation.) When reporting the results, no matter what computations you do, you should include the following: The size of the correlation matrix ( i.e. , how many variables you have looked at). How you determined the p-values or "significance" of any of the correlation coefficients ( e.g. , left them as-is, applied a Bonferroni correction, did a permutation test, or whatever). Whether you looked at alternative measures of correlation, such as Spearman rank correlation . If you did, also indicate why you chose the method you are actually reporting on and using.
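A bare-bones version of the permutation scheme described above, in R (simulated data as a stand-in for the real measurements):
set.seed(1)
X <- data.frame(matrix(rnorm(100 * 20), ncol = 20))   # 20 'measures', no true correlation
perm_cors <- replicate(2000, {
  Xp <- as.data.frame(lapply(X, sample))   # scramble each column independently
  R  <- cor(Xp)
  R[upper.tri(R)]
})
quantile(perm_cors, c(0.025, 0.975))   # null reference range for the pairwise correlations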
{ "source": [ "https://stats.stackexchange.com/questions/5750", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/888/" ] }
5,774
I have a dataset that has both continuous and categorical data. I am analyzing by using PCA and am wondering if it is fine to include the categorical variables as a part of the analysis. My understanding is that PCA can only be applied to continuous variables. Is that correct? If it cannot be used for categorical data, what alternatives exist for their analysis?
Although a PCA applied on binary data would yield results comparable to those obtained from a Multiple Correspondence Analysis (factor scores and eigenvalues are linearly related), there are more appropriate techniques to deal with mixed data types, namely Multiple Factor Analysis for mixed data available in the FactoMineR R package ( FAMD() ). If your variables can be considered as structured subsets of descriptive attributes, then Multiple Factor Analysis ( MFA() ) is also an option. The challenge with categorical variables is to find a suitable way to represent distances between variable categories and individuals in the factorial space. To overcome this problem, you can look for a non-linear transformation of each variable--whether it be nominal, ordinal, polynomial, or numerical--with optimal scaling. This is well explained in Gifi Methods for Optimal Scaling in R: The Package homals , and an implementation is available in the corresponding R package homals .
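A hedged sketch of the FAMD route mentioned above (dat is a hypothetical data frame mixing numeric and factor columns; the argument and component names follow the FactoMineR conventions as I understand them):
library(FactoMineR)
res <- FAMD(dat, ncp = 5, graph = FALSE)
res$eig               # variance explained by each dimension
head(res$ind$coord)   # coordinates of the individuals on those dimensions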
{ "source": [ "https://stats.stackexchange.com/questions/5774", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2540/" ] }
5,782
Let's say we have a random variable $X$ with known variance and mean. The question is: what is the variance of $f(X)$ for some given function $f$? The only general method that I'm aware of is the delta method, but it gives only an approximation. Now I'm interested in $f(x)=\sqrt{x}$, but it would also be nice to know some general methods. Edit 29.12.2010 I've done some calculations using Taylor series, but I'm not sure whether they are correct, so I'd be glad if someone could confirm them. First we need to approximate $E[f(X)]$ $E[f(X)] \approx E[f(\mu)+f'(\mu)(X-\mu)+\frac{1}{2}\cdot f''(\mu)(X-\mu)^2]=f(\mu)+\frac{1}{2}\cdot f''(\mu)\cdot Var[X]$ Now we can approximate $D^2 [f(X)]$ $E[(f(X)-E[f(X)])^2] \approx E[(f(\mu)+f'(\mu)(X-\mu)+\frac{1}{2}\cdot f''(\mu)(X-\mu)^2 -E[f(X)])^2]$ Using the approximation of $E[f(X)]$ we know that $f(\mu)-Ef(x) \approx -\frac{1}{2}\cdot f''(\mu)\cdot Var[X]$ Using this we get: $D^2[f(X)] \approx \frac{1}{4}\cdot f''(\mu)^2\cdot Var[X]^2-\frac{1}{2}\cdot f''(\mu)^2\cdot Var[X]^2 + f'(\mu)^2\cdot Var[X]+\frac{1}{4}f''(\mu)^2\cdot E[(X-\mu)^4] + f'(\mu)f''(\mu)E[(X-\mu)^3]$ $D^2 [f(X)] \approx \frac{1}{4}\cdot f''(\mu)^2 \cdot [D^4 X-(D^2 X)^2]+f'(\mu)^2\cdot D^2 X + f'(\mu)f''(\mu)D^3 X$
Update I've underestimated Taylor expansions. They actually work. I assumed that the integral of the remainder term can be unbounded, but with a little work it can be shown that this is not the case. The Taylor expansion works for functions on a bounded closed interval. For random variables with finite variance, Chebyshev's inequality gives $$P(|X-EX|>c)\le \frac{\operatorname{Var}(X)}{c^2}$$ So for any $\varepsilon>0$ we can find large enough $c$ so that $$P(X\in [EX-c,EX+c])=P(|X-EX|\le c)>1-\varepsilon$$ First let us estimate $Ef(X)$. We have \begin{align} Ef(X)=\int_{|x-EX|\le c}f(x)dF(x)+\int_{|x-EX|>c}f(x)dF(x) \end{align} where $F(x)$ is the distribution function for $X$. Since the domain of the first integral is the interval $[EX-c,EX+c]$, which is a bounded closed interval, we can apply the Taylor expansion: \begin{align} f(x)=f(EX)+f'(EX)(x-EX)+\frac{f''(EX)}{2}(x-EX)^2+\frac{f'''(\alpha)}{3!}(x-EX)^3 \end{align} where $\alpha\in [EX-c,EX+c]$, and the equality holds for all $x\in[EX-c,EX+c]$. I took only $4$ terms in the Taylor expansion, but in general we can take as many as we like, as long as the function $f$ is smooth enough. Substituting this formula into the previous one we get \begin{align} Ef(X)&=\int_{|x-EX|\le c}f(EX)+f'(EX)(x-EX)+\frac{f''(EX)}{2}(x-EX)^2dF(x)\\\\ &+\int_{|x-EX|\le c}\frac{f'''(\alpha)}{3!}(x-EX)^3dF(x) +\int_{|x-EX|>c}f(x)dF(x) \end{align} Now we can enlarge the domain of integration to get the following formula \begin{align} Ef(X)&=f(EX)+\frac{f''(EX)}{2}E(X-EX)^2+R_3\\\\ \end{align} where \begin{align} R_3&=\frac{f'''(\alpha)}{3!}E(X-EX)^3+\\\\ &+\int_{|x-EX|>c}\left(f(x)-f(EX)-f'(EX)(x-EX)-\frac{f''(EX)}{2}(x-EX)^2\right)dF(x) \end{align} Now under some moment conditions we can show that the second term of this remainder is of the same order as $P(|X-EX|>c)$, which is small. Unfortunately the first term remains, and so the quality of the approximation depends on $E(X-EX)^3$ and the behaviour of the third derivative of $f$ on bounded intervals. Such an approximation should work best for random variables with $E(X-EX)^3=0$. Now for the variance we can use the Taylor approximation for $f(x)$, subtract the formula for $Ef(x)$ and square the difference. Then $E(f(x)-Ef(x))^2=(f'(EX))^2\operatorname{Var}(X)+T_3$ where $T_3$ involves moments $E(X-EX)^k$ for $k=3,\ldots,6$. We can also arrive at this formula by using a shorter Taylor expansion, i.e. using only the first and second derivatives. The error term would be similar. Another way is to expand $f^2(x)$: \begin{align} f^2(x)&=f^2(EX)+2f(EX)f'(EX)(x-EX)\\\\ &+[(f'(EX))^2+f(EX)f''(EX)](x-EX)^2+\frac{(f^2(\beta))'''}{3!}(x-EX)^3 \end{align} Similarly we then get \begin{align*} Ef^2(x)=f^2(EX)+[(f'(EX))^2+f(EX)f''(EX)]\operatorname{Var}(X)+\tilde{R}_3 \end{align*} where $\tilde{R}_3$ is similar to $R_3$. The formula for the variance then becomes \begin{align} \operatorname{Var}(f(X))=[f'(EX)]^2\operatorname{Var}(X)-\frac{[f''(EX)]^2}{4}\operatorname{Var}^2(X)+\tilde{T}_3 \end{align} where $\tilde{T}_3$ involves only third and higher moments.
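A quick Monte-Carlo sanity check in R of the leading (delta-method) term for the asker's case $f(x)=\sqrt{x}$, using a gamma-distributed $X$ of my own choosing:
set.seed(1)
x <- rgamma(1e6, shape = 20, rate = 2)   # E[X] = 10, Var[X] = 5
var(sqrt(x))                             # simulated variance of f(X), about 0.125
(0.5 / sqrt(10))^2 * 5                   # f'(mu)^2 * Var[X] = 0.125, the first-order term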
{ "source": [ "https://stats.stackexchange.com/questions/5782", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/1643/" ] }
5,935
I am analyzing an experimental data set. The data consists of a paired vector of treatment type and a binomial outcome: Treatment Outcome A 1 B 0 C 0 D 1 A 0 ... In the outcome column, 1 denotes a success and 0 denotes a failure. I'd like to figure out if the treatment significantly affects the outcome. There are 4 different treatments with each experiment repeated a large number of times (2000 for each treatment). My question is, can I analyze the binary outcome using ANOVA? Or should I be using a chi-square test to check the binomial data? It seems like chi-square assumes the proportion would be evenly split, which isn't the case. Another idea would be to summarize the data using the proportion of successes versus failures for each treatment and then to use a proportion test. I'm curious to hear your recommendations for tests that make sense for these sorts of binomial success/failure experiments.
No to ANOVA, which assumes a normally distributed outcome variable (among other things). There are "old school" transformations to consider, but I would prefer logistic regression (equivalent to a chi square when there is only one independent variable, as in your case). The advantage of using logistic regression over a chi square test is that you can easily use a linear contrast to compare specific levels of the treatment if you find a significant result to the overall test (type 3). For example A versus B, B versus C etc. Update Added for clarity: Taking data at hand (the post doc data set from Allison ) and using the variable cits as follows, this was my point: postdocData$citsBin <- ifelse(postdocData$cits>2, 3, postdocData$cits) postdocData$citsBin <- as.factor(postdocData$citsBin) ordered(postdocData$citsBin, levels=c("0", "1", "2", "3")) contrasts(postdocData$citsBin) <- contr.treatment(4, base=4) # set 4th level as reference contrasts(postdocData$citsBin) # 1 2 3 # 0 1 0 0 # 1 0 1 0 # 2 0 0 1 # 3 0 0 0 # fit the univariate logistic regression model model.1 <- glm(pdoc~citsBin, data=postdocData, family=binomial(link="logit")) library(car) # John Fox package car::Anova(model.1, test="LR", type="III") # type 3 analysis (SAS verbiage) # Response: pdoc # LR Chisq Df Pr(>Chisq) # citsBin 1.7977 3 0.6154 chisq.test(table(postdocData$citsBin, postdocData$pdoc)) # X-squared = 1.7957, df = 3, p-value = 0.6159 # then can test differences in levels, such as: contrast cits=0 minus cits=1 = 0 # Ho: Beta_1 - Beta_2 = 0 cVec <- c(0,1,-1,0) car::linearHypothesis(model.1, cVec, verbose=TRUE)
{ "source": [ "https://stats.stackexchange.com/questions/5935", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2624/" ] }
5,937
When teaching an introductory level class, the teachers I know tend to invent some numbers and a story in order to exemplify the method they are teaching. What I would prefer is to tell a real story with real numbers. However, these stories need to relate to a very tiny dataset, which enables manual calculations. Any suggestions for such datasets will be very welcome. Some sample topics for the tiny datasets: correlation/regression (basic) ANOVA (1/2 ways) z/t tests - one/two un/paired samples comparisons of proportions - two/multi way tables
The data and story library is an " online library of datafiles and stories that illustrate the use of basic statistics methods". This site seems to have what you need, and you can search it for particular data sets.
{ "source": [ "https://stats.stackexchange.com/questions/5937", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/253/" ] }
5,960
I understand that once we plot the values as a chart, we can identify a bimodal distribution by observing the twin-peaks, but how does one find it programmatically? (I am looking for an algorithm.)
Identifying a mode for a continuous distribution requires smoothing or binning the data. Binning is typically too procrustean: the results often depend on where you place the bin cutpoints. Kernel smoothing (specifically, in the form of kernel density estimation ) is a good choice. Although many kernel shapes are possible, typically the result does not depend much on the shape. It depends on the kernel bandwidth. Thus, people either use an adaptive kernel smooth or conduct a sequence of kernel smooths for varying fixed bandwidths in order to check the stability of the modes that are identified. Although using an adaptive or "optimal" smoother is attractive, be aware that most (all?) of these are designed to achieve a balance between precision and average accuracy: they are not designed to optimize estimation of the location of modes. As far as implementation goes, kernel smoothers locally shift and scale a predetermined function to fit the data. Provided that this basic function is differentiable--Gaussians are a good choice because you can differentiate them as many times as you like--then all you have to do is replace it by its derivative to obtain the derivative of the smooth. Then it's simply a matter of applying a standard zero-finding procedure to detect and test the critical points. ( Brent's method works well.) Of course you can do the same trick with the second derivative to get a quick test of whether any critical point is a local maximum--that is, a mode. For more details and working code (in R ) please see https://stats.stackexchange.com/a/428083/919 .
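A cruder, grid-based variant of this idea in R (counting local maxima of a kernel density estimate over several fixed bandwidths, rather than the derivative-and-root-finding approach described above):
count_modes <- function(x, bw) {
  d <- density(x, bw = bw, n = 2048)       # Gaussian kernel smooth on a grid
  sum(diff(sign(diff(d$y))) == -2)         # number of local maxima of the estimate
}
x <- c(rnorm(300, 0), rnorm(300, 4))       # simulated bimodal sample
sapply(c(0.2, 0.4, 0.8, 1.6), function(bw) count_modes(x, bw))  # check stability across bandwidths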
{ "source": [ "https://stats.stackexchange.com/questions/5960", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2535/" ] }
6,026
I'm a medical student trying to understand statistics(!) - so please be gentle! ;) I'm writing an essay containing a fair amount of statistical analysis including survival analysis (Kaplan-Meier, Log-Rank and Cox regression). I ran a Cox regression on my data trying to find out if I can find a significant difference between the deaths of patients in two groups (high risk or low risk patients). I added several covariates to the Cox regression to control for their influence. Risk (Dichotomous) Gender (Dichotomous) Age at operation (Integer level) Artery occlusion (Dichotomous) Artery stenosis (Dichotomous) Shunt used in operation (Dichotomous) I removed Artery occlusion from the covariates list because its SE was extremely high (976). All other SEs are between 0,064 and 1,118. This is what I get: B SE Wald df Sig. Exp(B) 95,0% CI for Exp(B) Lower Upper risk 2,086 1,102 3,582 1 ,058 8,049 ,928 69,773 gender -,900 ,733 1,508 1 ,220 ,407 ,097 1,710 op_age ,092 ,062 2,159 1 ,142 1,096 ,970 1,239 stenosis ,231 ,674 ,117 1 ,732 1,259 ,336 4,721 op_shunt ,965 ,689 1,964 1 ,161 2,625 ,681 10,119 I know that risk is only borderline-significant at 0,058. But besides that how do I interpret the Exp(B) value? I read an article on logistic regression (which is somewhat similar to Cox regression?) where the Exp(B) value was interpreted as: "Being in the high-risk group includes an 8-fold increase in possibility of the outcome," which in this case is death. Can I say that my high-risk patients are 8 times as likely to die earlier than ... what? Please help me! ;) By the way I'm using SPSS 18 to run the analysis.
Generally speaking, $\exp(\hat\beta_1)$ is the ratio of the hazards between two individuals whose values of $x_1$ differ by one unit when all other covariates are held constant. The parallel with other linear models is that in Cox regression the hazard function is modeled as $h(t)=h_0(t)\exp(\beta'x)$, where $h_0(t)$ is the baseline hazard. This is equivalent to saying that $\log(\text{group hazard}/\text{baseline hazard})=\log\big(h(t)/h_0(t)\big)=\sum_i\beta_ix_i$. Then, a unit increase in $x_i$ is associated with a $\beta_i$ increase in the log hazard rate. The regression coefficient thus quantifies the log of the hazard in the treatment group (compared to the control or placebo group), accounting for the covariates included in the model; its exponential is interpreted as a relative risk (assuming no time-varying coefficients). In the case of logistic regression, the regression coefficient reflects the log of the odds ratio, hence the interpretation as a k-fold increase in risk. So yes, the interpretation of hazard ratios shares some resemblance with the interpretation of odds ratios. Be sure to check Dave Garson's website where there is some good material on Cox Regression with SPSS.
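Tying this back to the output in the question: the Exp(B) column and its confidence interval are just transformations of B and SE. For the risk row, a quick check in R:
exp(2.086)                             # 8.05, the reported Exp(B): hazard ratio for high vs low risk
exp(2.086 + c(-1, 1) * 1.96 * 1.102)   # 0.93 to 69.8, the reported 95% CI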
{ "source": [ "https://stats.stackexchange.com/questions/6026", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2652/" ] }
6,067
Okay, so I think I have a decent enough sample, taking into account the 20:1 rule of thumb: a fairly large sample (N=374) for a total of 7 candidate predictor variables. My problem is the following: whatever set of predictor variables I use, the classifications never get better than a specificity of 100% and a sensitivity of 0%. However unsatisfactory, this could actually be the best possible result, given the set of candidate predictor variables (from which I can't deviate). But, I couldn't help but think I could do better, so I noticed that the categories of the dependent variable were quite unevenly balanced, almost 4:1. Could a more balanced subsample improve classifications?
Balance in the Training Set For logistic regression models unbalanced training data affects only the estimate of the model intercept (although this of course skews all the predicted probabilities, which in turn compromises your predictions). Fortunately the intercept correction is straightforward: Provided you know, or can guess, the true proportion of 0s and 1s and know the proportions in the training set you can apply a rare events correction to the intercept. Details are in King and Zeng (2001) [ PDF ]. These 'rare event corrections' were designed for case control research designs, mostly used in epidemiology, that select cases by choosing a fixed, usually balanced number of 0 cases and 1 cases, and then need to correct for the resulting sample selection bias. Indeed, you might train your classifier the same way. Pick a nice balanced sample and then correct the intercept to take into account the fact that you've selected on the dependent variable to learn more about rarer classes than a random sample would be able to tell you. Making Predictions On a related but distinct topic: Don't forget that you should be thresholding intelligently to make predictions. It is not always best to predict 1 when the model probability is greater 0.5. Another threshold may be better. To this end you should look into the Receiver Operating Characteristic (ROC) curves of your classifier, not just its predictive success with a default probability threshold.
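A sketch of the prior-correction formula from King and Zeng (my own implementation of the published formula, with hypothetical objects): fit is a logistic model trained on the balanced subsample, tau the true population proportion of 1s, and ybar the proportion of 1s in the training sample.
correct_intercept <- function(fit, tau, ybar) {
  b <- coef(fit)
  # subtract the bias induced by sampling on the dependent variable
  b["(Intercept)"] <- b["(Intercept)"] - log(((1 - tau) / tau) * (ybar / (1 - ybar)))
  b
}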
{ "source": [ "https://stats.stackexchange.com/questions/6067", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2690/" ] }
6,127
I have data from an experiment that I analyzed using t-tests. The dependent variable is interval scaled and the data are either unpaired (i.e., 2 groups) or paired (i.e., within-subjects). E.g. (within subjects): x1 <- c(99, 99.5, 65, 100, 99, 99.5, 99, 99.5, 99.5, 57, 100, 99.5, 99.5, 99, 99, 99.5, 89.5, 99.5, 100, 99.5) y1 <- c(99, 99.5, 99.5, 0, 50, 100, 99.5, 99.5, 0, 99.5, 99.5, 90, 80, 0, 99, 0, 74.5, 0, 100, 49.5) However, the data are not normal so one reviewer asked us to use something other than the t-test. However, as one can easily see, the data are not only not normally distributed, but the distributions are not equal between conditions: Therefore, the usual nonparametric tests, the Mann-Whitney-U-Test (unpaired) and the Wilcoxon Test (paired), cannot be used as they require equal distributions between conditions. Hence, I decided that some resampling or permutation test would be best. Now, I am looking for an R implementation of a permutation-based equivalent of the t-test, or any other advice on what to do with the data. I know that there are some R-packages that can do this for me (e.g., coin, perm, exactRankTest, etc.), but I don't know which one to pick. So, if somebody with some experience using these tests could give me a kick-start, that would be ubercool. UPDATE: It would be ideal if you could provide an example of how to report the results from this test.
It shouldn't matter that much since the test statistic will always be the difference in means (or something equivalent). Small differences can come from the implementation of Monte-Carlo methods. Trying the three packages with your data with a one-sided test for two independent variables: DV <- c(x1, y1) IV <- factor(rep(c("A", "B"), c(length(x1), length(y1)))) library(coin) # for oneway_test(), pvalue() pvalue(oneway_test(DV ~ IV, alternative="greater", distribution=approximate(B=9999))) [1] 0.00330033 library(perm) # for permTS() permTS(DV ~ IV, alternative="greater", method="exact.mc", control=permControl(nmc=10^4-1))$p.value [1] 0.003 library(exactRankTests) # for perm.test() perm.test(DV ~ IV, paired=FALSE, alternative="greater", exact=TRUE)$p.value [1] 0.003171822 To check the exact p-value with a manual calculation of all permutations, I'll restrict the data to the first 9 values. x1 <- x1[1:9] y1 <- y1[1:9] DV <- c(x1, y1) IV <- factor(rep(c("A", "B"), c(length(x1), length(y1)))) pvalue(oneway_test(DV ~ IV, alternative="greater", distribution="exact")) [1] 0.0945907 permTS(DV ~ IV, alternative="greater", exact=TRUE)$p.value [1] 0.0945907 # perm.test() gives different result due to rounding of input values perm.test(DV ~ IV, paired=FALSE, alternative="greater", exact=TRUE)$p.value [1] 0.1029412 # manual exact permutation test idx <- seq(along=DV) # indices to permute idxA <- combn(idx, length(x1)) # all possibilities for different groups # function to calculate difference in group means given index vector for group A getDiffM <- function(x) { mean(DV[x]) - mean(DV[!(idx %in% x)]) } resDM <- apply(idxA, 2, getDiffM) # difference in means for all permutations diffM <- mean(x1) - mean(y1) # empirical differencen in group means # p-value: proportion of group means at least as extreme as observed one (pVal <- sum(resDM >= diffM) / length(resDM)) [1] 0.0945907 coin and exactRankTests are both from the same author, but coin seems to be more general and extensive - also in terms of documentation. exactRankTests is not actively developed anymore. I'd therefore choose coin (also because of informative functions like support() ), unless you don't like to deal with S4 objects. EDIT: for two dependent variables, the syntax is id <- factor(rep(1:length(x1), 2)) # factor for participant pvalue(oneway_test(DV ~ IV | id, alternative="greater", distribution=approximate(B=9999))) [1] 0.00810081
{ "source": [ "https://stats.stackexchange.com/questions/6127", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/442/" ] }
6,206
I made a logistic regression model using glm in R. I have two independent variables. How can I plot the decision boundary of my model in the scatter plot of the two variables. For example, how can I plot a figure like here .
set.seed(1234) x1 <- rnorm(20, 1, 2) x2 <- rnorm(20) y <- sign(-1 - 2 * x1 + 4 * x2 ) y[ y == -1] <- 0 df <- cbind.data.frame( y, x1, x2) mdl <- glm( y ~ . , data = df , family=binomial) slope <- coef(mdl)[2]/(-coef(mdl)[3]) intercept <- coef(mdl)[1]/(-coef(mdl)[3]) library(lattice) xyplot( x2 ~ x1 , data = df, groups = y, panel=function(...){ panel.xyplot(...) panel.abline(intercept , slope) panel.grid(...) }) I must remark that perfect separation occurs here, therefore the glm function gives you a warning. But that is not important here as the purpose is to illustrate how to draw the linear boundary and the observations colored according to their covariates.
{ "source": [ "https://stats.stackexchange.com/questions/6206", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2755/" ] }
6,239
I have a time series and I want to subset it while keeping it as a time series, preserving the start, end, and frequency. For example, let's say I have a time series: > qs <- ts(101:110, start=c(2009, 2), frequency=4) > qs Qtr1 Qtr2 Qtr3 Qtr4 2009 101 102 103 2010 104 105 106 107 2011 108 109 110 Now I will subset it: > qs[time(qs) >= 2010 & time(qs) < 2011] [1] 104 105 106 107 Notice that I got the correct results, but I lost the "wrappings" from the time series (namely start, end, and frequency). I'm looking for a function for this. Isn't subsetting a time series a common scenario? Since I haven't found one yet, here is a function I wrote: subset.ts <- function(data, start, end) { ks <- which(time(data) >= start & time(data) < end) vec <- data[ks] ts(vec, start=start(data) + c(0, ks[1] - 1), frequency=frequency(data)) } I'd like to hear about improvements or cleaner ways to do this. In particular, I don't like the way I'm hard-coding start and end. I'd rather let the user specify an arbitrary boolean condition.
Use the window function: > window(qs, 2010, c(2010, 4)) Qtr1 Qtr2 Qtr3 Qtr4 2010 104 105 106 107
{ "source": [ "https://stats.stackexchange.com/questions/6239", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/660/" ] }
6,275
I am currently collecting data for an experiment into psychosocial characteristics associated with the experience of pain. As part of this, I am collecting GSR and BP measurements electronically from my participants, along with various self-report and implicit measures. I have a psychological background and am comfortable with factor analysis, linear models and experimental analysis. My question is: what good (preferably free) resources are available for learning about time series analysis? I am a total newb when it comes to this area, so any help would be greatly appreciated. I have some pilot data to practice on, but would like to have my analysis plan worked out in detail before I finish collecting data. If the provided references were also R related, that would be wonderful. Edited: to change grammar and to add 'self report and implicit measures'
This is a very large subject and there are many good books that cover it. These are both good, but Cryer is my favorite of the two: Cryer. " Time Series Analysis: With Applications in R " is a classic on the subject, updated to include R code. Shumway and Stoffer. " Time Series Analysis and Its Applications: With R Examples ". A good free resource is Zoonekynd's ebook, especially the time series section . My first suggestion for seeing the R packages would be the free ebook "A Discussion of Time Series Objects for R in Finance" from Rmetrics. It gives lots of examples comparing the different time series packages and discusses some of the considerations, but it doesn't provide any theory. Eric Zivot's "Modeling financial time series with S-PLUS" and Ruey Tsay's " Analysis of Financial Time Series " (available in the TSA package on CRAN) are directed and financial time series but both provide good general references. I strongly recommend looking at Ruey Tsay's homepage because it covers all these topics, and provides the necessary R code. In particular, look at the "Analysis of Financial Time Series" , and "Multivariate Time Series Analysis" courses.
{ "source": [ "https://stats.stackexchange.com/questions/6275", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/656/" ] }
6,279
I am a complete beginner in statistics. Recently a project required me to analyse data using logistic regression and SPSS within a specific time frame. Although I have read a few books, I am still very unclear on how to start. Can someone guide me through it? What is the first step and what comes next? Anyway, I have made a start. Once I entered the data into SPSS, I ran crosstabs (categorical IVs), descriptives (continuous IVs) and Spearman correlations. Then I proceeded to test for nonlinearity by transforming to Ln, which gave me some problems. I re-coded all zero cells to a small value (0.0001) to enable the Ln transformation, then re-tested the nonlinearity. Questions: 1) Is the only solution for a violation to transform the variable from continuous to categorical? I got one violation. 2) One Exp(B) is extremely large (15203.835). What does this mean? Why? 3) One interaction has Exp(B) = 0.00. Why? Many thanks.
{ "source": [ "https://stats.stackexchange.com/questions/6279", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2793/" ] }
6,330
I have previously used forecast pro to forecast univariate time series, but am switching my workflow over to R. The forecast package for R contains a lot of useful functions, but one thing it doesn't do is any kind of data transformation before running auto.arima(). In some cases forecast pro decides to log transform data before doing forecasts, but I haven't yet figured out why. So my question is: when should I log-transform my time series before trying ARIMA methods on it? /edit: after reading your answers, I'm going to use something like this, where x is my time series: library(lmtest) if (gqtest(x ~ 1)$p.value < 0.10) { x <- log(x) } Does this make sense?
Some caveats before proceeding. As I often suggest to my students, treat auto.arima() results only as a first approximation to your final model, or use them if you want a parsimonious benchmark to check whether your rival theory-based model does better. Data You clearly have to start from a description of the time series data you are working with. In macro-econometrics you usually work with aggregated data, and geometric means (surprisingly) have more empirical support for macro time series, probably because most of them can be decomposed into an exponentially growing trend. By the way, Rob's suggestion to decide "visually" works well for time series with a clear seasonal part; for slowly varying annual data the increase in variation is less obvious. Luckily an exponentially growing trend is usually easy to see (if it looks linear, then there is no need for logs). Model If your analysis is based on some theory stating that a weighted geometric mean $Y(t) = X_1^{\alpha_1}(t)...X_k^{\alpha_k}(t)\varepsilon(t)$, better known as the multiplicative regression model, is the one you have to work with, then you usually move to a log-log regression model, which is linear in parameters; most of your variables, except some growth rates, are transformed. In financial econometrics logs are a common thing due to the popularity of log-returns, because... Log transformations have nice properties In a log-log regression model the estimated parameter $\alpha_i$ is interpreted as the elasticity of $Y(t)$ with respect to $X_i(t)$. In error-correction models we have the empirically stronger assumption that proportions are more stable (stationary) than the absolute differences. In financial econometrics it is easy to aggregate log-returns over time. There are many other reasons not mentioned here. Finally Note that the log transformation is usually applied to non-negative (level) variables. If you observe the difference of two time series (net exports, for instance) it is not even possible to take the log; you either have to search for the original data in levels or assume the form of the common trend that was subtracted. [addition after edit] If you still want a statistical criterion for when to apply a log transformation, a simple solution would be any test for heteroscedasticity. In the case of increasing variance I would recommend the Goldfeld-Quandt test or something similar to it. In R it is available in library(lmtest) as the gqtest(y ~ 1) function. Simply regress on an intercept term if you don't have any regression model; y is your dependent variable.
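A quick illustration of that rule on a built-in series whose variance grows with its level (my own addition; the Box-Cox estimate from the forecast package is another way to make the same call, with a value near 0 pointing towards logs):
library(lmtest)
library(forecast)
x <- AirPassengers
gqtest(x ~ 1)$p.value              # very small: variance increases with the level, so try log(x)
BoxCox.lambda(x)                   # close to 0, also suggesting a log transformation
fit <- auto.arima(x, lambda = 0)   # equivalent to modelling log(x)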
{ "source": [ "https://stats.stackexchange.com/questions/6330", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2817/" ] }
6,350
The Wikipedia page on ANOVA lists three assumptions , namely: Independence of cases – this is an assumption of the model that simplifies the statistical analysis. Normality – the distributions of the residuals are normal. Equality (or "homogeneity") of variances, called homoscedasticity... Point of interest here is the second assumption. Several sources list the assumption differently. Some say normality of the raw data, some claim of residuals. Several questions pop up: are normality and normal distribution of residuals the same person (based on Wikipedia entry, I would claim normality is a property, and does not pertain residuals directly (but can be a property of residuals (deeply nested text within brackets, freaky)))? if not, which assumption should hold? One? Both? if the assumption of normally distributed residuals is the right one, are we making a grave mistake by checking only the histogram of raw values for normality?
Let's assume this is a fixed effects model. (The advice doesn't really change for random-effects models, it just gets a little more complicated.) First let us distinguish the "residuals" from the "errors:" the former are the differences between the responses and their predicted values, while the latter are random variables in the model. With sufficiently large amounts of data and a good fitting procedure, the distributions of the residuals will approximately look like the residuals were drawn randomly from the error distribution (and will therefore give you good information about the properties of that distribution). The assumptions, therefore, are about the errors, not the residuals. No, normality (of the responses) and normal distribution of errors are not the same. Suppose you measured yield from a crop with and without a fertilizer application. In plots without fertilizer the yield ranged from 70 to 130. In two plots with fertilizer the yield ranged from 470 to 530. The distribution of results is strongly non-normal: it's clustered at two locations related to the fertilizer application. Suppose further the average yields are 100 and 500, respectively. Then all residuals range from -30 to +30, and so the errors will be expected to have a comparable distribution. The errors might (or might not) be normally distributed, but obviously this is a completely different distribution. The distribution of the residuals matters, because those reflect the errors, which are the random part of the model. Note also that the p-values are computed from F (or t) statistics and those depend on residuals, not on the original values. If there are significant and important effects in the data (as in this example), then you might be making a "grave" mistake. You could, by luck, make the correct determination: that is, by looking at the raw data you will see a mixture of distributions and this can look normal (or not). The point is that what you're looking at is not relevant. ANOVA residuals don't have to be anywhere close to normal in order to fit the model. However, unless you have an enormous amount of data, near-normality of the residuals is essential for p-values computed from the F-distribution to be meaningful.
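The fertilizer illustration is easy to simulate in R (numbers of my own choosing, matching the ranges quoted above): the raw response is strongly bimodal while the residuals behave perfectly well.
set.seed(1)
yield <- c(rnorm(50, mean = 100, sd = 10), rnorm(50, mean = 500, sd = 10))
fert  <- factor(rep(c("no", "yes"), each = 50))
hist(yield)                      # two well-separated clusters, nothing like a normal
fit <- aov(yield ~ fert)
shapiro.test(residuals(fit))     # residuals are consistent with normal errors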
{ "source": [ "https://stats.stackexchange.com/questions/6350", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/144/" ] }
6,469
How do I fit a linear model with autocorrelated errors in R? In stata I would use the prais command, but I can't find an R equivalent...
Have a look at gls (generalized least squares) from the package nlme . You can set a correlation profile for the errors in the regression, e.g. ARMA, etc: gls(Y ~ X, correlation=corARMA(p=1,q=1)) for ARMA(1,1) errors.
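A slightly fuller sketch, in case it helps (the data here are simulated purely for illustration, and gls with a corAR1 structure plays the role of Stata's prais here; it is not the identical Prais-Winsten estimator, since gls fits by REML/ML):

library(nlme)
set.seed(42)
x <- 1:100
e <- as.numeric(arima.sim(n = 100, model = list(ar = 0.6)))   # AR(1) errors
y <- 2 + 0.5 * x + e
fit <- gls(y ~ x, correlation = corAR1(form = ~ 1))
summary(fit)
intervals(fit)    # confidence intervals, including the AR(1) parameter Phi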
{ "source": [ "https://stats.stackexchange.com/questions/6469", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2817/" ] }
6,478
When building a CART model (specifically classification tree) using rpart (in R), it is often interesting to know what is the importance of the various variables introduced to the model. Thus, my question is: What common measures exist for ranking/measuring variable importance of participating variables in a CART model? And how can this be computed using R (for example, when using the rpart package)? For example, here is some dummy code, created so you might show your solutions on it. This example is structured so that it is clear that variables x1 and x2 are "important" while (in some sense) x1 is more important than x2 (since x1 should apply to more cases, and thus have more influence on the structure of the data, than x2). set.seed(31431) n <- 400 x1 <- rnorm(n) x2 <- rnorm(n) x3 <- rnorm(n) x4 <- rnorm(n) x5 <- rnorm(n) X <- data.frame(x1,x2,x3,x4,x5) y <- sample(letters[1:4], n, T) y <- ifelse(X[,2] < -1 , "b", y) y <- ifelse(X[,1] < 0 , "a", y) require(rpart) fit <- rpart(y~., X) plot(fit); text(fit) info.gain.rpart(fit) # your function - telling us on each variable how important it is (references are always welcomed)
Variable importance might generally be computed based on the corresponding reduction of predictive accuracy when the predictor of interest is removed (with a permutation technique, like in Random Forest) or some measure of decrease of node impurity, but see (1) for an overview of available methods. An obvious alternative to CART is RF of course ( randomForest , but see also party ). With RF, the Gini importance index is defined as the averaged Gini decrease in node impurities over all trees in the forest (it follows from the fact that the Gini impurity index for a given parent node is larger than the value of that measure for its two daughter nodes, see e.g. (2)). I know that Carolin Strobl and coll. have contributed a lot of simulation and experimental studies on (conditional) variable importance in RFs and CARTs (e.g., (3-4), but there are many other ones, or her thesis, Statistical Issues in Machine Learning – Towards Reliable Split Selection and Variable Importance Measures ). To my knowledge, the caret package (5) only considers a loss function for the regression case (i.e., mean squared error). Maybe it will be added in the near future (anyway, an example with a classification case by k-NN is available in the on-line help for dotPlot ). However, Noel M O'Boyle seems to have some R code for Variable importance in CART . References Sandri and Zuccolotto. A bias correction algorithm for the Gini variable importance measure in classification trees . 2008 Izenman. Modern Multivariate Statistical Techniques . Springer 2008 Strobl, Hothorn, and Zeilis. Party on! . R Journal 2009 1/2 Strobl, Boulesteix, Kneib, Augustin, and Zeilis. Conditional variable importance for random forests . BMC Bioinformatics 2008, 9:307 Kuhn. Building Predictive Models in R Using the caret Package . JSS 2008 28(5)
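For the dummy data in the question, two quick ways to put numbers on this (assuming a reasonably recent version of rpart; the variable.importance component may be missing in very old releases):

fit <- rpart(y ~ ., data = X)
fit$variable.importance        # summed goodness-of-split (incl. surrogates) per variable;
                               # x1 should come out well ahead of x2, the rest near zero

library(randomForest)          # permutation-based importance from a random forest
rf <- randomForest(factor(y) ~ ., data = X, importance = TRUE)
importance(rf)                 # MeanDecreaseAccuracy and MeanDecreaseGini columns
varImpPlot(rf)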
{ "source": [ "https://stats.stackexchange.com/questions/6478", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/253/" ] }
6,493
I have been using log normal distributions as prior distributions for scale parameters (for normal distributions, t distributions etc.) when I have a rough idea about what the scale should be, but want to err on the side of saying I don't know much about it. I use it because that use makes intuitive sense to me, but I haven't seen others use it. Are there any hidden dangers to this?
I would recommend using a "Beta distribution of the second kind" (Beta 2 for short) for a mildly informative distribution, and to use the conjugate inverse gamma distribution if you have strong prior beliefs. The reason I say this is that the conjugate prior is non-robust in the sense that, if the prior and data conflict, the prior has an unbounded influence on the posterior distribution. Such behaviour is what I would call "dogmatic", and not justified by mild prior information. The property which determines robustness is the tail-behaviour of the prior and of the likelihood. A very good article outlining the technical details is here . For example, a likelihood can be chosen (say a t-distribution) such that as an observation $y_i \rightarrow \infty$ (i.e. becomes arbitrarily large) it is discarded from the analysis of a location parameter (much in the same way that you would intuitively do with such an observation). The rate of "discarding" depends on how heavy the tails of the distribution are. Some slides which show an application in the hierarchical modelling context can be found here (shows the mathematical form of the Beta 2 distribution), with a paper here . If you are not in the hierarchical modeling context, then I would suggest comparing the posterior (or whatever results you are creating) but use the Jeffreys prior for a scale parameter, which is given by $p(\sigma)\propto\frac{1}{\sigma}$ . This can be created as a limit of the Beta 2 density as both its parameters converge to zero. For an approximation you could use small values. But I would try to work out the solution analytically if at all possible (and if not a complete analytical solution, get the analytical solution as far progressed as you possibly can), because you will not only save yourself some computational time, but you are also likely to understand what is happening in your model better. A further alternative is to specify your prior information in the form of constraints (mean equal to $M$ , variance equal to $V$ , IQR equal to $IQR$ , etc. with the values of $M,V,IQR$ specified by yourself), and then use the maximum entropy distribution (search any work by Edwin Jaynes or Larry Bretthorst for a good explanation of what Maximum Entropy is and what it is not) with respect to Jeffreys' "invariant measure" $m(\sigma)=\frac{1}{\sigma}$ . MaxEnt is the "Rolls Royce" version, while the Beta 2 is more a "sedan" version. The reason for this is that the MaxEnt distribution "assumes the least" subject to the constraints you have put into it (e.g., no constraints means you just get the Jeffreys prior), whereas the Beta 2 distribution may contain some "hidden" features which may or may not be desirable in your specific case (e.g., if the prior information is more reliable than the data, then Beta 2 is bad). The other nice property of MaxEnt distribution is that if there are no unspecified constraints operating in the data generating mechanism then the MaxEnt distribution is overwhelmingly the most likely distribution that you will see (we're talking odds way over billions and trillions to one). Therefore, if the distribution you see is not the MaxEnt one, then there is likely additional constraints which you have not specified operating on the true process, and the observed values can provide a clue as to what that constraint might be.
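To make the "Jeffreys prior as a limit" remark a bit more concrete, here is a small sketch using the standard beta-prime ("Beta 2") density written out by hand (I am assuming the unscaled two-parameter form; the scaled versions used in hierarchical models differ by a scale factor):

dbeta2 <- function(x, a, b) x^(a - 1) * (1 + x)^(-(a + b)) / beta(a, b)
x <- 10^seq(-2, 2, length.out = 200)
plot(x, dbeta2(x, a = 0.01, b = 0.01), type = "l", log = "xy", ylab = "density")
lines(x, 1 / x, lty = 2)   # the (improper) Jeffreys prior 1/sigma, for comparison
# with a and b near zero the two curves are essentially parallel on the log-log scale,
# i.e. the Beta 2 behaves like the Jeffreys prior up to a normalising constant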
{ "source": [ "https://stats.stackexchange.com/questions/6493", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/1146/" ] }
6,505
Suppose I am going to do a univariate logistic regression on several independent variables, like this: mod.a <- glm(x ~ a, data=z, family=binomial("logit")) mod.b <- glm(x ~ b, data=z, family=binomial("logit")) I did a model comparison (likelihood ratio test) to see if the model is better than the null model by this command 1-pchisq(mod.a$null.deviance-mod.a$deviance, mod.a$df.null-mod.a$df.residual) Then I built another model with all variables in it mod.c <- glm(x ~ a+b, data=z, family=binomial("logit")) In order to see if the variable is statistically significant in the multivariate model, I used the lrtest command from epicalc lrtest(mod.c,mod.a) ### see if variable b is statistically significant after adjustment of a lrtest(mod.c,mod.b) ### see if variable a is statistically significant after adjustment of b I wonder if the pchisq method and the lrtest method are equivalent for doing a log-likelihood test? As I dunno how to use lrtest for a univariate logistic model.
Basically, yes, provided you use the correct difference in log-likelihood: > library(epicalc) > model0 <- glm(case ~ induced + spontaneous, family=binomial, data=infert) > model1 <- glm(case ~ induced, family=binomial, data=infert) > lrtest (model0, model1) Likelihood ratio test for MLE method Chi-squared 1 d.f. = 36.48675 , P value = 0 > model1$deviance-model0$deviance [1] 36.48675 and not the deviance for the null model which is the same in both cases. The number of df is the number of parameters that differ between the two nested models, here df=1. BTW, you can look at the source code for lrtest() by just typing > lrtest at the R prompt.
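And to check the pchisq route against lrtest directly (same infert example, same pair of nested models):

1 - pchisq(model1$deviance - model0$deviance,
           df = model1$df.residual - model0$df.residual)
# a tiny p-value, matching lrtest()'s "P value = 0" up to its rounding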
{ "source": [ "https://stats.stackexchange.com/questions/6505", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/588/" ] }
6,534
So, I have a data set of percentages like so: 100 / 10000 = 1% (0.01) 2 / 5 = 40% (0.4) 4 / 3 = 133% (1.3) 1000 / 2000 = 50% (0.5) I want to find the standard deviation of the percentages, but weighted for their data volume. ie, the first and last data points should dominate the calculation. How do I do that? And is there a simple way to do it in Excel?
The formula for weighted standard deviation is: $$ \sqrt{ \frac{ \sum_{i=1}^N w_i (x_i - \bar{x}^*)^2 }{ \frac{(M-1)}{M} \sum_{i=1}^N w_i } },$$ where $N$ is the number of observations. $M$ is the number of nonzero weights. $w_i$ are the weights. $x_i$ are the observations. $\bar{x}^*$ is the weighted mean. Remember that the formula for weighted mean is: $$\bar{x}^* = \frac{\sum_{i=1}^N w_i x_i}{\sum_{i=1}^N w_i}.$$ Use the appropriate weights to get the desired result. In your case I would suggest using $\frac{\mbox{Number of cases in segment}}{\mbox{Total number of cases}}$. To do this in Excel, you need to calculate the weighted mean first. Then calculate the $(x_i - \bar{x}^*)^2$ in a separate column. The rest must be very easy.
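If it helps, here is a direct R translation of the formula (in Excel the same steps can be done with SUMPRODUCT for the weighted sums). The weights below are the case counts from your example; the formula is scale-invariant in the weights, so raw counts and proportions give the same answer.

weighted.sd <- function(x, w) {
  xbar <- sum(w * x) / sum(w)            # weighted mean
  M <- sum(w != 0)                       # number of nonzero weights
  sqrt(sum(w * (x - xbar)^2) / (((M - 1) / M) * sum(w)))
}
p <- c(100/10000, 2/5, 4/3, 1000/2000)   # the observed percentages
w <- c(10000, 5, 3, 2000)                # number of cases behind each percentage
weighted.sd(p, w)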
{ "source": [ "https://stats.stackexchange.com/questions/6534", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/142/" ] }
6,538
I know people love to close duplicates so I am not asking for a reference to start learning statistics (as here ). I have a doctorate in mathematics but never learned statistics. What is the shortest route to knowledge equivalent to a top-notch BS statistics degree, and how do I measure when I have achieved that? If a list of books would suffice (assuming I do the exercises, let's say), that's terrific. Yes, I expect working out problems to be an implicit part of learning it, but I want to fast-track as much as realistically possible. I am not looking for an insanely rigorous treatment unless that is part of what statistics majors generally learn.
(Very) short story Long story short, in some sense, statistics is like any other technical field: There is no fast track . Long story Bachelor's degree programs in statistics are relatively rare in the U.S. One reason I believe this is true is that it is quite hard to pack all that is necessary to learn statistics well into an undergraduate curriculum. This holds particularly true at universities that have significant general-education requirements. Developing the necessary skills (mathematical, computational, and intuitive) takes a lot of effort and time. Statistics can begin to be understood at a fairly decent "operational" level once the student has mastered calculus and a decent amount of linear and matrix algebra. However, any applied statistician knows that it is quite easy to find oneself in territory that doesn't conform to a cookie-cutter or recipe-based approach to statistics. To really understand what is going on beneath the surface requires as a prerequisite mathematical and, in today's world, computational maturity that are only really attainable in the later years of undergraduate training. This is one reason that true statistical training mostly starts at the M.S. level in the U.S. (India, with their dedicated ISI is a little different story. A similar argument might be made for some Canadian-based education. I'm not familiar enough with European-based or Russian-based undergraduate statistics education to have an informed opinion.) Nearly any (interesting) job would require an M.S. level education and the really interesting (in my opinion) jobs essentially require a doctorate-level education. Seeing as you have a doctorate in mathematics, though we don't know in what area, here are my suggestions for something closer to an M.S.-level education. I include some parenthetical remarks to explain the choices. D. Huff, How to Lie with Statistics . (Very quick, easy read. Shows many of the conceptual ideas and pitfalls, in particular, in presenting statistics to the layman.) Mood, Graybill, and Boes, Introduction to the Theory of Statistics , 3rd ed., 1974. (M.S.-level intro to theoretical statistics. You'll learn about sampling distributions, point estimation and hypothesis testing in a classical, frequentist framework. My opinion is that this is generally better, and a bit more advanced, than modern counterparts such as Casella & Berger or Rice.) Seber & Lee, Linear Regression Analysis , 2nd ed. (Lays out the theory behind point estimation and hypothesis testing for linear models, which is probably the most important topic to understand in applied statistics. Since you probably have a good linear algebra background, you should immediately be able to understand what is going on geometrically, which provides a lot of intuition. Also has good information related to assessment issues in model selection, departures from assumptions, prediction, and robust versions of linear models.) Hastie, Tibshirani, and Friedman, Elements of Statistical Learning , 2nd ed., 2009. (This book has a much more applied feeling than the last and broadly covers lots of modern machine-learning topics. The major contribution here is in providing statistical interpretations of many machine-learning ideas, which pays off particularly in quantifying uncertainty in such models. This is something that tends to go un(der)addressed in typical machine-learning books. Legally available for free here .) A. Agresti, Categorical Data Analysis , 2nd ed. 
(Good presentation of how to deal with discrete data in a statistical framework. Good theory and good practical examples. Perhaps on the traditional side in some respects.) Boyd & Vandenberghe, Convex Optimization . (Many of the most popular modern statistical estimation and hypothesis-testing problems can be formulated as convex optimization problems. This also goes for numerous machine-learning techniques, e.g., SVMs. Having a broader understanding and the ability to recognize such problems as convex programs is quite valuable, I think. Legally available for free here .) Efron & Tibshirani, An Introduction to the Bootstrap . (You ought to at least be familiar with the bootstrap and related techniques. For a textbook, it's a quick and easy read.) J. Liu, Monte Carlo Strategies in Scientific Computing or P. Glasserman, Monte Carlo Methods in Financial Engineering . (The latter sounds very directed to a particular application area, but I think it'll give a good overview and practical examples of all the most important techniques. Financial engineering applications have driven a fair amount of Monte Carlo research over the last decade or so.) E. Tufte, The Visual Display of Quantitative Information . (Good visualization and presentation of data is [highly] underrated, even by statisticians.) J. Tukey, Exploratory Data Analysis . (Standard. Oldie, but goodie. Some might say outdated, but still worth having a look at.) Complements Here are some other books, mostly of a little more advanced, theoretical and/or auxiliary nature, that are helpful. F. A. Graybill, Theory and Application of the Linear Model . (Old fashioned, terrible typesetting, but covers all the same ground of Seber & Lee, and more. I say old-fashioned because more modern treatments would probably tend to use the SVD to unify and simplify a lot of the techniques and proofs.) F. A. Graybill, Matrices with Applications in Statistics . (Companion text to the above. A wealth of good matrix algebra results useful to statistics here. Great desk reference.) Devroye, Gyorfi, and Lugosi, A Probabilistic Theory of Pattern Recognition . (Rigorous and theoretical text on quantifying performance in classification problems.) Brockwell & Davis, Time Series: Theory and Methods . (Classical time-series analysis. Theoretical treatment. For more applied ones, Box, Jenkins & Reinsel or Ruey Tsay's texts are decent.) Motwani and Raghavan, Randomized Algorithms . (Probabilistic methods and analysis for computational algorithms.) D. Williams, Probability and Martingales and/or R. Durrett, Probability: Theory and Examples . (In case you've seen measure theory, say, at the level of D. L. Cohn, but maybe not probability theory. Both are good for getting quickly up to speed if you already know measure theory.) F. Harrell, Regression Modeling Strategies . (Not as good as Elements of Statistical Learning [ESL], but has a different, and interesting, take on things. Covers more "traditional" applied statistics topics than does ESL and so worth knowing about, for sure.) More Advanced (Doctorate-Level) Texts Lehmann and Casella, Theory of Point Estimation . (PhD-level treatment of point estimation. Part of the challenge of this book is reading it and figuring out what is a typo and what is not. When you see yourself recognizing them quickly, you'll know you understand. There's plenty of practice of this type in there, especially if you dive into the problems.) Lehmann and Romano, Testing Statistical Hypotheses . (PhD-level treatment of hypothesis testing. 
Not as many typos as TPE above.) A. van der Vaart, Asymptotic Statistics . (A beautiful book on the asymptotic theory of statistics with good hints on application areas. Not an applied book though. My only quibble is that some rather bizarre notation is used and details are at times brushed under the rug.)
{ "source": [ "https://stats.stackexchange.com/questions/6538", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2912/" ] }
6,581
What is "Deviance," how is it calculated, and what are its uses in different fields in statistics? In particular, I'm personally interested in its uses in CART (and its implementation in rpart in R). I'm asking this since the wiki-article seems somewhat lacking and your insights will be most welcomed.
Deviance and GLM Formally, one can view deviance as a sort of distance between two probabilistic models; in GLM context, it amounts to two times the log ratio of likelihoods between two nested models $\ell_1/\ell_0$ where $\ell_0$ is the "smaller" model; that is, a linear restriction on model parameters (cf. the Neyman–Pearson lemma ), as @suncoolsu said. As such, it can be used to perform model comparison . It can also be seen as a generalization of the RSS used in OLS estimation (ANOVA, regression), for it provides a measure of goodness-of-fit of the model being evaluated when compared to the null model (intercept only). It works with LM too: > x <- rnorm(100) > y <- 0.8*x+rnorm(100) > lm.res <- lm(y ~ x) The residuals SS (RSS) is computed as $\hat\varepsilon^t\hat\varepsilon$ , which is readily obtained as: > t(residuals(lm.res))%*%residuals(lm.res) [,1] [1,] 98.66754 or from the (unadjusted) $R^2$ > summary(lm.res) Call: lm(formula = y ~ x) (...) Residual standard error: 1.003 on 98 degrees of freedom Multiple R-squared: 0.4234, Adjusted R-squared: 0.4175 F-statistic: 71.97 on 1 and 98 DF, p-value: 2.334e-13 since $R^2=1-\text{RSS}/\text{TSS}$ where $\text{TSS}$ is the total variance. Note that it is directly available in an ANOVA table, like > summary.aov(lm.res) Df Sum Sq Mean Sq F value Pr(>F) x 1 72.459 72.459 71.969 2.334e-13 *** Residuals 98 98.668 1.007 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Now, look at the deviance: > deviance(lm.res) [1] 98.66754 In fact, for linear models the deviance equals the RSS (you may recall that OLS and ML estimates coincide in such a case). Deviance and CART We can see CART as a way to allocate already $n$ labeled individuals into arbitrary classes (in a classification context). Trees can be viewed as providing a probability model for individuals class membership. So, at each node $i$ , we have a probability distribution $p_{ik}$ over the classes. What is important here is that the leaves of the tree give us a random sample $n_{ik}$ from a multinomial distribution specified by $p_{ik}$ . We can thus define the deviance of a tree, $D$ , as the sum over all leaves of $$D_i=-2\sum_kn_{ik}\log(p_{ik}),$$ following Venables and Ripley's notations ( MASS , Springer 2002, 4th ed.). If you have access to this essential reference for R users (IMHO), you can check by yourself how such an approach is used for splitting nodes and fitting a tree to observed data (p. 255 ff.); basically, the idea is to minimize, by pruning the tree, $D+\alpha \#(T)$ where $\#(T)$ is the number of nodes in the tree $T$ . Here we recognize the cost-complexity trade-off . Here, $D$ is equivalent to the concept of node impurity (i.e., the heterogeneity of the distribution at a given node) which are based on a measure of entropy or information gain, or the well-known Gini index, defined as $1-\sum_kp_{ik}^2$ (the unknown proportions are estimated from node proportions). With a regression tree, the idea is quite similar, and we can conceptualize the deviance as sum of squares defined for individuals $j$ by $$D_i=\sum_j(y_j-\mu_i)^2,$$ summed over all leaves. Here, the probability model that is considered within each leaf is a gaussian $\mathcal{N}(\mu_i,\sigma^2)$ . Quoting Venables and Ripley (p. 256), " $D$ is the usual scaled deviance for a gaussian GLM. However, the distribution at internal nodes of the tree is then a mixture of normal distributions, and so $D_i$ is only appropriate at the leaves. 
The tree-construction process has to be seen as a hierarchical refinement of probability models, very similar to forward variable selection in regression ." Section 9.2 provides further detailed information about rpart implementation, but you can already look at the residuals() function for rpart object, where "deviance residuals" are computed as the square root of minus twice the logarithm of the fitted model. An introduction to recursive partitioning using the rpart routines , by Atkinson and Therneau, is also a good start. For more general review (including bagging), I would recommend Moissen, G.G. (2008). Classification and Regression Trees . Ecological Informatics , pp. 582-588. Sutton, C.D. (2005). Classification and Regression Trees, Bagging, and Boosting , in Handbook of Statistics, Vol. 24 , pp. 303-329, Elsevier.
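As a small illustration of the classification-tree deviance $D=-2\sum_k n_{ik}\log(p_{ik})$, here is one way to compute it directly from the fitted leaf probabilities of an rpart object (using the kyphosis data shipped with rpart; the cross-check in the last comment is my reading of ?residuals.rpart, so verify it on your own fit):

library(rpart)
fit <- rpart(Kyphosis ~ Age + Number + Start, data = kyphosis)
p <- predict(fit, type = "prob")                       # leaf class probabilities p_ik
obs <- p[cbind(seq_len(nrow(kyphosis)), match(kyphosis$Kyphosis, colnames(p)))]
D <- -2 * sum(log(obs))                                # tree deviance, summed over leaves
D
# should agree with sum(residuals(fit, type = "deviance")^2)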
{ "source": [ "https://stats.stackexchange.com/questions/6581", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/253/" ] }
6,601
This is a similar question to the one here , but different enough I think to be worthwhile asking. I thought I'd put up, as a starter, what I think is one of the hardest to grasp. Mine is the difference between probability and frequency . One is at the level of "knowledge of reality" (probability), while the other is at the level of "reality itself" (frequency). This almost always makes me confused if I think about it too much. Edwin Jaynes coined a term called the "mind projection fallacy" to describe getting these things mixed up. Any thoughts on any other tough concepts to grasp?
for some reason, people have difficulty grasping what a p-value really is.
{ "source": [ "https://stats.stackexchange.com/questions/6601", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2392/" ] }
6,652
I know roughly and informally what a confidence interval is. However, I can't seem to wrap my head around one rather important detail: According to Wikipedia: A confidence interval does not predict that the true value of the parameter has a particular probability of being in the confidence interval given the data actually obtained. I've also seen similar points made in several places on this site. A more correct definition, also from Wikipedia, is: if confidence intervals are constructed across many separate data analyses of repeated (and possibly different) experiments, the proportion of such intervals that contain the true value of the parameter will approximately match the confidence level Again, I've seen similar points made in several places on this site. I don't get it. If, under repeated experiments, the fraction of computed confidence intervals that contain the true parameter $\theta$ is $(1 - \alpha)$, then how can the probability that $\theta$ is in the confidence interval computed for the actual experiment be anything other than $(1 - \alpha)$? I'm looking for the following in an answer: Clarification of the distinction between the incorrect and correct definitions above. A formal, precise definition of a confidence interval that clearly shows why the first definition is wrong. A concrete example of a case where the first definition is spectacularly wrong, even if the underlying model is correct.
There are many issues concerning confidence intervals, but let's focus on the quotations. The problem lies in possible misinterpretations rather than being a matter of correctness. When people say a "parameter has a particular probability of" something, they are thinking of the parameter as being a random variable. This is not the point of view of a (classical) confidence interval procedure, for which the random variable is the interval itself and the parameter is determined, not random, yet unknown. This is why such statements are frequently attacked. Mathematically, if we let $t$ be any procedure that maps data $\mathbf{x} = (x_i)$ to subsets of the parameter space and if (no matter what the value of the parameter $\theta$ may be) the assertion $\theta \in t(\mathbf{x})$ defines an event $A(\mathbf{x})$, then--by definition--it has a probability $\Pr_{\theta}\left( A(\mathbf{x}) \right)$ for any possible value of $\theta$. When $t$ is a confidence interval procedure with confidence $1-\alpha$ then this probability is supposed to have an infimum (over all parameter values) of $1-\alpha$. (Subject to this criterion, we usually select procedures that optimize some additional property, such as producing short confidence intervals or symmetric ones, but that's a separate matter.) The Weak Law of Large Numbers then justifies the second quotation. That, however, is not a definition of confidence intervals: it is merely a property they have. I think this analysis has answered question 1, shows that the premise of question 2 is incorrect, and makes question 3 moot.
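A quick simulation sketch of the property in the second quotation: repeat the experiment many times and the long-run proportion of intervals covering the fixed true mean is about $1-\alpha$, even though any single realised interval either covers it or it doesn't.

set.seed(123)
mu <- 10                                  # the fixed (in practice unknown) parameter
covers <- replicate(10000, {
  x <- rnorm(25, mean = mu, sd = 3)       # one "experiment"
  ci <- t.test(x)$conf.int                # the random interval t(x)
  ci[1] <= mu && mu <= ci[2]
})
mean(covers)                              # close to 0.95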
{ "source": [ "https://stats.stackexchange.com/questions/6652", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/1347/" ] }
6,728
I was trying to fit my data into various models and figured out that the fitdistr function from library MASS of R gives me Negative Binomial as the best-fit. Now from the wiki page, the definition is given as: NegBin(r,p) distribution describes the probability of k failures and r successes in k+r Bernoulli(p) trials with success on the last trial. Using R to perform model fitting gives me two parameters mean and dispersion parameter . I am not understanding how to interpret these because I cannot see these parameters on the wiki page. All I can see is the following formula: where k is the number of observations and r=0...n . Now how do I relate these with the parameters given by R ? The help file does not provide much information either. Also, just to say a few words about my experiment: In a social experiment that I was conducting, I was trying to count the number of people each user contacted in a period of 10 days. The population size was 100 for the experiment. Now, if the model fits the Negative Binomial, I can blindly say that it follows that distribution but I really want to understand the intuitive meaning behind this. What does it mean to say that the number of people contacted by my test subjects follows a negative binomial distribution? Can someone please help clarify this?
You should look further down the Wikipedia article on the NB , where it says "gamma-Poisson mixture". While the definition you cite (which I call the "coin-flipping" definition since I usually define it for classes as "suppose you want to flip a coin until you get $k$ heads") is easier to derive and makes more sense in an introductory probability or mathematical statistics context, the gamma-Poisson mixture is (in my experience) a much more generally useful way to think about the distribution in applied contexts. (In particular, this definition allows non-integer values of the dispersion/size parameter.) In this context, your dispersion parameter describes the distribution of a hypothetical Gamma distribution that underlies your data and describes unobserved variation among individuals in their intrinsic level of contact. In particular, it is the shape parameter of the Gamma, and it may be helpful in thinking about this to know that the coefficient of variation of a Gamma distribution with shape parameter $\theta$ is $1/\sqrt{\theta}$ ; as $\theta$ becomes large the latent variability disappears and the distribution approaches the Poisson.
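A small simulation may make the gamma-Poisson reading concrete: draw each individual's latent contact rate from a Gamma whose shape is the dispersion parameter $\theta$ and whose mean is $\mu$, then draw a Poisson count given that rate, and you recover the negative binomial R reports (I'm using R's size/mu parametrisation; the numbers are arbitrary):

set.seed(1)
mu <- 4; theta <- 1.5
lambda <- rgamma(1e5, shape = theta, rate = theta / mu)   # E(lambda) = mu, CV = 1/sqrt(theta)
y.mix <- rpois(1e5, lambda)                               # gamma-Poisson mixture
y.nb  <- rnbinom(1e5, size = theta, mu = mu)              # direct negative binomial draws
rbind(mixture = table(factor(y.mix, levels = 0:10)) / 1e5,
      negbin  = table(factor(y.nb,  levels = 0:10)) / 1e5) # near-identical frequencies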
{ "source": [ "https://stats.stackexchange.com/questions/6728", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2164/" ] }
6,759
How can I remove duplicate rows from this example data frame? A 1 A 1 A 2 B 4 B 1 B 1 C 2 C 2 I would like to remove the duplicates based on both the columns: A 1 A 2 B 4 B 1 C 2 Order is not important.
unique() indeed answers your question, but another related and interesting function to achieve the same end is duplicated() . It gives you the possibility to look up which rows are duplicated. a <- c(rep("A", 3), rep("B", 3), rep("C",2)) b <- c(1,1,2,4,1,1,2,2) df <-data.frame(a,b) duplicated(df) [1] FALSE TRUE FALSE FALSE FALSE TRUE FALSE TRUE > df[duplicated(df), ] a b 2 A 1 6 B 1 8 C 2 > df[!duplicated(df), ] a b 1 A 1 3 A 2 4 B 4 5 B 1 7 C 2
{ "source": [ "https://stats.stackexchange.com/questions/6759", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2725/" ] }
6,762
We have n actors. Each actor chooses from n*2 actions. How can I calculate the probability that at least one actor will choose a unique outcome? For example, say we have 5 pickup artists in a town with 10 bars. Each PUA chooses a bar at random. What are the odds that at least one PUA will have a bar to himself/herself? This is relevant to some work I'm doing on scheduling in a distributed system. I know that the relevant equation should probably start with $\binom{n + 2n -1}{n}$, but then what?
{ "source": [ "https://stats.stackexchange.com/questions/6762", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2995/" ] }
6,780
I have several query frequencies, and I need to estimate the coefficient of Zipf's law. These are the top frequencies: 26486 12053 5052 3033 2536 2391 1444 1220 1152 1039
Update I've updated the code with maximum likelihood estimator as per @whuber suggestion. Minimizing sum of squares of differences between log theoretical probabilities and log frequencies though gives an answer would be a statistical procedure if it could be shown that it is some kind of M-estimator. Unfortunately I could not think of any which could give the same results. Here is my attempt. I calculate logarithms of the frequencies and try to fit them to logarithms of theoretical probabilities given by this formula . The final result seems reasonable. Here is my code in R. fr <- c(26486, 12053, 5052, 3033, 2536, 2391, 1444, 1220, 1152, 1039) p <- fr/sum(fr) lzipf <- function(s,N) -s*log(1:N)-log(sum(1/(1:N)^s)) opt.f <- function(s) sum((log(p)-lzipf(s,length(p)))^2) opt <- optimize(opt.f,c(0.5,10)) > opt $minimum [1] 1.463946 $objective [1] 0.1346248 The best quadratic fit then is $s=1.47$. The maximum likelihood in R can be performed with mle function (from stats4 package), which helpfully calculates standard errors (if correct negative maximum likelihood function is supplied): ll <- function(s) sum(fr*(s*log(1:10)+log(sum(1/(1:10)^s)))) fit <- mle(ll,start=list(s=1)) > summary(fit) Maximum likelihood estimation Call: mle(minuslogl = ll, start = list(s = 1)) Coefficients: Estimate Std. Error s 1.451385 0.005715046 -2 log L: 188093.4 Here is the graph of the fit in log-log scale (again as @whuber suggested): s.sq <- opt$minimum s.ll <- coef(fit) plot(1:10,p,log="xy") lines(1:10,exp(lzipf(s.sq,10)),col=2) lines(1:10,exp(lzipf(s.ll,10)),col=3) Red line is sum of squares fit, green line is maximum-likelihood fit.
{ "source": [ "https://stats.stackexchange.com/questions/6780", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2998/" ] }
6,913
I recently read the advice that you should generally use median not mean to eliminate outliers. Example: The following article http://www.amazon.com/Forensic-Science-Introduction-Scientific-Investigative/product-reviews/1420064932/ has 16 reviews at the moment: review = c(5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 4, 4, 3, 2, 1, 1) summary(review) ## "ordinary" summary Min. 1st Qu. Median Mean 3rd Qu. Max. 1.000 3.750 5.000 4.062 5.000 5.000 Because they use Mean the article gets 4 stars but if they would use Median it would get 5 stars. Isn't the median a 'fairer' judge? An experiment shows that median's error is always bigger than mean. Is median worse? library(foreach) # the overall population of bookjudgments n <- 5 p <- 0.5 expected.value <- n*p peoplesbelieve <- rbinom(10^6,n, p) # 16 ratings made for 100 books ratings <- foreach(i=1:100, .combine=rbind) %do% sample(peoplesbelieve,16) stat <- foreach(i=1:100, .combine=rbind) %do% c(mean=mean(ratings[i,]), median=median(ratings[i,])) # which mean square error is bigger? Mean's or Median's? meansqrterror.mean <- mean((stat[,"mean"]-expected.value)^2) meansqrterror.median <- mean((stat[,"median"]-expected.value)^2) res <- paste("mean MSE",meansqrterror.mean) res <- paste(res, "| median MSE", meansqrterror.median) print(res)
The problem is that you haven't really defined what it means to have a good or fair rating. You suggest in a comment on @Kevin's answer that you don't like it if one bad review takes down an item. But comparing two items where one has a "perfect record" and the other has one bad review, maybe that difference should be reflected. There's a whole (high-dimensional) continuum between median and mean. You can order the votes by value, then take a weighted average with the weights depending on the position in that order. The mean corresponds to all weights being equal, the median corresponds to only one or two entries in the middle getting nonzero weight, a trimmed average corresponds to giving all except the first and last couple the same weight, but you could also decide to weight the $k$th out of $n$ samples with weight $\frac{1}{1 + (2 k - 1 - n)^2}$ or $\exp(-\frac{(2k - 1 - n)^2}{n^2})$, to throw something random in there. Maybe such a weighted average where the outliers get less weight, but still a nonzero amount, could combine good properties of median and mean?
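For what it's worth, here is how a couple of points on that continuum come out for the review data in the question (the two weighting functions are just the illustrative ones above, nothing canonical):

review <- c(5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 4, 4, 3, 2, 1, 1)
x <- sort(review); n <- length(x); k <- seq_len(n)
w1 <- 1 / (1 + (2 * k - 1 - n)^2)     # sharply peaked at the middle: close to the median
w2 <- exp(-(2 * k - 1 - n)^2 / n^2)   # gentler down-weighting of the extreme ratings
c(mean = mean(x), median = median(x),
  w1 = sum(w1 * x) / sum(w1), w2 = sum(w2 * x) / sum(w2))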
{ "source": [ "https://stats.stackexchange.com/questions/6913", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/778/" ] }
6,920
I'm analysing some data where I would like to perform ordinary linear regression, however this is not possible as I am dealing with an on-line setting with a continuous stream of input data (which will quickly get too large for memory) and need to update parameter estimates while this is being consumed. i.e. I cannot just load it all into memory and perform linear regression on the entire data set. I'm assuming a simple linear multivariate regression model, i.e. $$\mathbf y = \mathbf A\mathbf x + \mathbf b + \mathbf e$$ What's the best algorithm for creating a continuously updating estimate of the linear regression parameters $\mathbf A$ and $\mathbf b$? Ideally: I'd like an algorithm that is at most $\mathcal O(N\cdot M)$ space and time complexity per update, where $N$ is the dimensionality of the independent variable ($\mathbf x$) and $M$ is the dimensionality of the dependent variable ($\mathbf y$). I'd like to be able to specify some parameter to determine how much the parameters are updated by each new sample, e.g. 0.000001 would mean that the next sample would provide one millionth of the parameter estimate. This would give some kind of exponential decay for the effect of samples in the distant past.
Maindonald describes a sequential method based on Givens rotations . (A Givens rotation is an orthogonal transformation of two vectors that zeros out a given entry in one of the vectors.) At the previous step you have decomposed the design matrix $\mathbf{X}$ into a triangular matrix $\mathbf{T}$ via an orthogonal transformation $\mathbf{Q}$ so that $\mathbf{Q}\mathbf{X} = (\mathbf{T}, \mathbf{0})'$ . (It's fast and easy to get the regression results from a triangular matrix.) Upon adjoining a new row $v$ below $\mathbf{X}$ , you effectively extend $(\mathbf{T}, \mathbf{0})'$ by a nonzero row, too, say $t$ . The task is to zero out this row while keeping the entries in the position of $\mathbf{T}$ diagonal. A sequence of Givens rotations does this: the rotation with the first row of $\mathbf{T}$ zeros the first element of $t$ ; then the rotation with the second row of $\mathbf{T}$ zeros the second element, and so on. The effect is to premultiply $\mathbf{Q}$ by a series of rotations, which does not change its orthogonality. When the design matrix has $p+1$ columns (which is the case when regressing on $p$ variables plus a constant), the number of rotations needed does not exceed $p+1$ and each rotation changes two $p+1$ -vectors. The storage needed for $\mathbf{T}$ is $O((p+1)^2)$ . Thus this algorithm has a computational cost of $O((p+1)^2)$ in both time and space. A similar approach lets you determine the effect on regression of deleting a row. Maindonald gives formulas; so do Belsley, Kuh, & Welsh . Thus, if you are looking for a moving window for regression, you can retain data for the window within a circular buffer, adjoining the new datum and dropping the old one with each update. This doubles the update time and requires additional $O(k (p+1))$ storage for a window of width $k$ . It appears that $1/k$ would be the analog of the influence parameter. For exponential decay, I think (speculatively) that you could adapt this approach to weighted least squares, giving each new value a weight greater than 1. There shouldn't be any need to maintain a buffer of previous values or delete any old data. References J. H. Maindonald, Statistical Computation. J. Wiley & Sons, 1984. Chapter 4. D. A. Belsley, E. Kuh, R. E. Welsch, Regression Diagnostics: Identifying Influential Data and Sources of Collinearity. J. Wiley & Sons, 1980.
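In case a concrete starting point helps, below is a rough, unoptimised R sketch of this kind of row-wise Givens update for a single response (for an $M$-dimensional response you can carry z as a $(p+1)\times M$ matrix). It is my own reading of the scheme, not Maindonald's code, so treat it as illustrative only; the lambda argument is the speculative exponential-forgetting twist from the last paragraph (values slightly below 1 down-weight old data).

# state: R is a (p+1)x(p+1) upper-triangular factor, z holds the rotated responses
rls_update <- function(state, x, y, lambda = 1) {
  R <- sqrt(lambda) * state$R          # exponential down-weighting of the past
  z <- sqrt(lambda) * state$z
  t <- c(1, x)                         # new design row (intercept first)
  for (i in seq_along(t)) {
    r <- sqrt(R[i, i]^2 + t[i]^2)
    if (r == 0) next
    cs <- R[i, i] / r; sn <- t[i] / r  # Givens rotation zeroing t[i]
    Ri <- R[i, ]; zi <- z[i]
    R[i, ] <- cs * Ri + sn * t;  t <- -sn * Ri + cs * t
    z[i]   <- cs * zi + sn * y;  y <- -sn * zi + cs * y
  }
  list(R = R, z = z)
}
rls_coef <- function(state) backsolve(state$R, state$z)

# toy stream from y = 1 + 2*x1 - x2 + noise; tiny ridge-like start avoids a singular R
set.seed(1)
state <- list(R = diag(1e-6, 3), z = rep(0, 3))
for (j in 1:1000) {
  x <- rnorm(2); y <- 1 + 2 * x[1] - x[2] + rnorm(1, sd = 0.5)
  state <- rls_update(state, x, y, lambda = 0.999)
}
rls_coef(state)   # should be near (1, 2, -1)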
{ "source": [ "https://stats.stackexchange.com/questions/6920", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2942/" ] }
6,958
I have fitted ARIMA models to the original time series, and the best model is ARIMA(1,1,0). Now I want to simulate the series from that model. I wrote the simple AR(1) model, but I couldn't understand how to adjust for the difference within the model ARI(1,1,0). The following R code for the AR(1) series is: phi= -0.7048 z=rep(0,100) e=rnorm(n=100,0,0.345) cons=2.1 z[1]=4.1 for (i in 2:100) z[i]=cons+phi*z[i-1]+e[i] plot(ts(z)) How do I include the difference term of ARI(1,1) in the above code? Can anyone help me in this regard?
If you want to simulate ARIMA you can use arima.sim in R; there is no need to do it by hand. This will generate the series you want. e <- rnorm(100,0,0.345) arima.sim(n=100,model=list(ar=-0.7048,order=c(1,1,0)),start.innov=4.1,n.start=1,innov=2.1+e) You can look at the code of how this is achieved by typing arima.sim at the R command line. Alternatively, if you do it yourself, the function you are probably looking for is diffinv . It computes the inverse of lagged differences. For recursive sequences R has a nice function filter . So instead of using the loop z <- rep(NA,100) z[1] <- 4.1 for (i in 2:100) z[i]=cons+phi*z[i-1]+e[i] you can write filter(c(4.1,2.1+e),filter=-0.7048,method="recursive") This will give the identical result to the arima.sim example above: diffinv(filter(c(4.1,2.1+e),filter=-0.7048,method="recursive")[-1])
{ "source": [ "https://stats.stackexchange.com/questions/6958", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/3084/" ] }
6,966
Why continue to teach and use hypothesis testing (with all its difficult concepts, which are among the most common sources of statistical sins) for problems where there is an interval estimator (confidence, bootstrap, credibility or whatever)? What is the best explanation (if any) to be given to students? Only tradition? Your views will be very welcome.
This is my personal opinion, so I'm not sure it properly qualifies as an answer. Why should we teach hypothesis testing? One very big reason, in short, is that, in all likelihood, in the time it takes you to read this sentence, hundreds, if not thousands (or millions) of hypothesis tests have been conducted within a 10ft radius of where you sit. Your cell phone is definitely using a likelihood ratio test to decide whether or not it is within range of a base station. Your laptop's WiFi hardware is doing the same in communicating with your router. The microwave you used to auto-reheat that two-day old piece of pizza used a hypothesis test to decide when your pizza was hot enough. Your car's traction control system kicked in when you gave it too much gas on an icy road, or the tire-pressure warning system let you know that your rear passenger-side tire was abnormally low, and your headlights came on automatically at around 5:19pm as dusk was setting in. Your iPad is rendering this page in landscape format based on (noisy) accelerometer readings. Your credit card company shut off your card when "you" purchased a flat-screen TV at a Best Buy in Texas and a $2000 diamond ring at Zales in a Washington-state mall within a couple hours of buying lunch, gas, and a movie near your home in the Pittsburgh suburbs. The hundreds of thousands of bits that were sent to render this webpage in your browser each individually underwent a hypothesis test to determine whether they were most likely a 0 or a 1 (in addition to some amazing error-correction). Look to your right just a little bit at those "related" topics. All of these things "happened" due to hypothesis tests . For many of these things some interval estimate of some parameter could be calculated. But, especially for automated industrial processes, the use and understanding of hypothesis testing is crucial. On a more theoretical statistical level, the important concept of statistical power arises rather naturally from a decision-theoretic / hypothesis-testing framework. Plus, I believe "even" a pure mathematician can appreciate the beauty and simplicity of the Neyman–Pearson lemma and its proof. This is not to say that hypothesis testing is taught, or understood, well. By and large, it's not. And, while I would agree that—particularly in the medical sciences—reporting of interval estimates along with effect sizes and notions of practical vs. statistical significance are almost universally preferable to any formal hypothesis test, this does not mean that hypothesis testing and the related concepts are not important and interesting in their own right.
{ "source": [ "https://stats.stackexchange.com/questions/6966", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/523/" ] }
6,989
I want to fit a multilevel GLMM with a Poisson distribution (with over-dispersion) using R. At the moment I am using lme4 but I noticed that recently the quasipoisson family was removed. I've seen elsewhere that you can model additive over-dispersion for binomial distributions by adding a random intercept with one level per observation. Does this apply to the poisson distribution as well? Is there a better way to do it? Are there other packages that you would recommend?
You can fit a multilevel GLMM with a Poisson distribution (with over-dispersion) using R in multiple ways. A few R packages are: lme4 , MCMCglmm , arm , etc. A good reference to see is Gelman and Hill (2007). I will give an example of doing this using the rjags package in R . It is an interface between R and JAGS (like OpenBUGS or WinBUGS ). $$n_{ij} \sim \mathrm{Poisson}(\theta_{ij})$$ $$\log \theta_{ij} = \beta_0 + \beta_1 \mbox{ } \mathtt{Treatment}_{i} + \delta_{ij}$$ $$\delta_{ij} \sim N(0, \sigma^2_{\epsilon})$$ $$i=1 \ldots I, \quad j = 1\ldots J$$ $\mathtt{Treatment}_i = 0 \mbox{ or } 1, \ldots, J-1 \mbox{ if the } i^{th} \mbox{ observation belongs to treatment group } 1 \mbox{, or, } 2, \ldots, J$ The $\delta_{ij}$ part in the code above models overdispersion. But there is no one stopping you from modeling correlation between individuals (you don't believe that individuals are really independent) and within individuals (repeated measures). Also, the rate parameter may be scaled by some other constant as in rate models . Please see Gelman and Hill (2007) for more reference. Here is the JAGS code for the simple model: data{ for (i in 1:I){ nCount[i,1] <- obsTrt1[i] nCount[i,2] <- obsTrt2[i] ## notice I have only 2 treatments and I individuals } } model{ for (i in 1:I){ nCount[i, 1] ~ dpois( means[i, 1] ) nCount[i, 2] ~ dpois( means[i, 2] ) log( means[i, 1] ) <- mu + b * trt1[i] + disp[i, 1] log( means[i, 2] ) <- mu + b * trt2[i] + disp[i, 2] disp[i, 1] ~ dnorm( 0, tau) disp[i, 2] ~ dnorm( 0, tau) } mu ~ dnorm( 0, 0.001) b ~ dnorm(0, 0.001) tau ~ dgamma( 0.001, 0.001) } Here is the R code to use it (say the model file is named overdisp.bug ): dataFixedEffect <- list("I" = 10, "obsTrt1" = obsTrt1 , #vector of n_i1 "obsTrt2" = obsTrt2, #vector of n_i2 "trt1" = trt1, #vector of 0 "trt2" = trt2 #vector of 1 ) initFixedEffect <- list(mu = 0.0 , b = 0.0, tau = 0.01) simFixedEffect <- jags.model(file = "overdisp.bug", data = dataFixedEffect, inits = initFixedEffect, n.chains = 4, n.adapt = 1000) sampleFixedEffect <- coda.samples(model = simFixedEffect, variable.names = c("mu", "b", "means"), n.iter = 1000) meansTrt1 <- as.matrix(sampleFixedEffect[ , 2:11]) meansTrt2 <- as.matrix(sampleFixedEffect[ , 12:21]) You can play around with your parameters' posteriors and you can introduce more parameters to make your modeling more precise ( we like to think this ). Basically, you get the idea. For more details on using rjags and JAGS , please see John Myles White's page
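For the lme4 route mentioned at the top: the "random intercept with one level per observation" trick from the question carries over directly to the Poisson case. A minimal sketch (the data frame d and its variable names are made up for illustration):

library(lme4)
d$obs <- factor(seq_len(nrow(d)))    # one level per observation
m <- glmer(count ~ treatment + (1 | group) + (1 | obs),
           data = d, family = poisson)
summary(m)   # the obs-level intercept variance absorbs the over-dispersion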
{ "source": [ "https://stats.stackexchange.com/questions/6989", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/1871/" ] }
7,155
People often talk about dealing with outliers in statistics. The thing that bothers me about this is that, as far as I can tell, the definition of an outlier is completely subjective. For example, if the true distribution of some random variable is very heavy-tailed or bimodal, any standard visualization or summary statistic for detecting outliers will incorrectly remove parts of the distribution you want to sample from. What is a rigorous definition of an outlier, if one exists, and how can outliers be dealt with without introducing unreasonable amounts of subjectivity into an analysis?
As long as your data comes from a known distribution with known properties, you can rigorously define an outlier as an event that is too unlikely to have been generated by the observed process (if you consider "too unlikely" to be non-rigorous, then all hypothesis testing is). However, this approach is problematic on two levels: It assumes that the data comes from a known distribution with known properties, and it brings the risk that outliers are looked at as data points that were smuggled into your data set by some magical faeries. In the absence of magical data faeries, all data comes from your experiment, and thus it is actually not possible to have outliers, just weird results. These can come from recording errors (e.g. a 400000 bedroom house for 4 dollars), systematic measurement issues (the image analysis algorithm reports huge areas if the object is too close to the border) experimental problems (sometimes, crystals precipitate out of the solution, which give very high signal), or features of your system (a cell can sometimes divide in three instead of two), but they can also be the result of a mechanism that no one's ever considered because it's rare and you're doing research, which means that some of the stuff you do is simply not known yet. Ideally, you take the time to investigate every outlier, and only remove it from your data set once you understand why it doesn't fit your model. This is time-consuming and subjective in that the reasons are highly dependent on the experiment, but the alternative is worse: If you don't understand where the outliers came from, you have the choice between letting outliers "mess up" your results, or defining some "mathematically rigorous" approach to hide your lack of understanding. In other words, by pursuing "mathematical rigorousness" you choose between not getting a significant effect and not getting into heaven. EDIT If all you have is a list of numbers without knowing where they come from, you have no way of telling whether some data point is an outlier, because you can always assume a distribution where all data are inliers.
{ "source": [ "https://stats.stackexchange.com/questions/7155", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/1347/" ] }
7,200
I know that an easy to handle formula for the CDF of a normal distribution is somewhat missing, due to the complicated error function in it. However, I wonder if there is a a nice formula for $N(c_{-} \leq x < c_{+}| \mu, \sigma^2)$. Or what the "state of the art" approximation for this problem might be.
It depends on exactly what you are looking for . Below are some brief details and references. Much of the literature for approximations centers around the function $$ Q(x) = \int_x^\infty \frac{1}{\sqrt{2\pi}} e^{-\frac{u^2}{2}} \, \mathrm{d}u $$ for $x > 0$ . This is because the function you provided can be decomposed as a simple difference of the function above (possibly adjusted by a constant). This function is referred to by many names, including "upper-tail of the normal distribution", "right normal integral", and "Gaussian $Q$ -function", to name a few. You'll also see approximations to Mills' ratio , which is $$ R(x) = \frac{Q(x)}{\varphi(x)} $$ where $\varphi(x) = (2\pi)^{-1/2} e^{-x^2 / 2}$ is the Gaussian pdf. Here I list some references for various purposes that you might be interested in. Computational The de-facto standard for computing the $Q$ -function or the related complementary error function is W. J. Cody, Rational Chebyshev Approximations for the Error Function , Math. Comp. , 1969, pp. 631--637. Every (self-respecting) implementation uses this paper. (MATLAB, R, etc.) "Simple" Approximations Abramowitz and Stegun have one based on a polynomial expansion of a transformation of the input. Some people use it as a "high-precision" approximation. I don't like it for that purpose since it behaves badly around zero. For example, their approximation does not yield $\hat{Q}(0) = 1/2$ , which I think is a big no-no. Sometimes bad things happen because of this. Borjesson and Sundberg give a simple approximation which works pretty well for most applications where one only requires a few digits of precision. The absolute relative error is never worse than 1%, which is quite good considering its simplicity. The basic approximation is $$ \hat{Q}(x) = \frac{1}{(1-a) x + a \sqrt{x^2 + b}} \varphi(x) $$ and their preferred choices of the constants are $a = 0.339$ and $b = 5.51$ . That reference is P. O. Borjesson and C. E. Sundberg. Simple approximations of the error function Q(x) for communications applications . IEEE Trans. Commun. , COM-27(3):639–643, March 1979. Here is a plot of its absolute relative error. The electrical-engineering literature is awash with various such approximations and seem to take an overly intense interest in them. Many of them are poor though or expand to very strange and convoluted expressions. You might also look at W. Bryc. A uniform approximation to the right normal integral . Applied Mathematics and Computation , 127(2-3):365–374, April 2002. Laplace's continued fraction Laplace has a beautiful continued fraction which yields successive upper and lower bounds for every value of $x > 0$ . It is, in terms of Mills' ratio, $$ R(x) = \frac{1}{x+}\frac{1}{x+}\frac{2}{x+}\frac{3}{x+}\cdots , $$ where the notation I've used is fairly standard for a continued fraction , i.e., $1/(x+1/(x+2/(x+3/(x+\cdots))))$ . This expression doesn't converge very fast for small $x$ , though, and it diverges at $x = 0$ . This continued fraction actually yields many of the "simple" bounds on $Q(x)$ that were "rediscovered" in the mid-to-late 1900s. It's easy to see that for a continued fraction in "standard" form (i.e., composed of positive integer coefficients), truncating the fraction at odd (even) terms gives an upper (lower) bound. Hence, Laplace tells us immediately that $$ \frac{x}{x^2 + 1} < R(x) < \frac{1}{x} \>, $$ both of which are bounds that were "rediscovered" in the mid-1900's. 
In terms of the $Q$ -function, this is equivalent to $$ \frac{x}{x^2 + 1} \varphi(x) < Q(x) < \frac{1}{x} \varphi(x) . $$ An alternative proof of this using simple integration by parts can be found in S. Resnick, Adventures in Stochastic Processes , Birkhauser, 1992, in Chapter 6 (Brownian motion). The absolute relative error of these bounds is no worse than $x^{-2}$ , as shown in this related answer . Notice, in particular, that the inequalities above immediately imply that $Q(x) \sim \varphi(x)/x$ . This fact can be established using L'Hopital's rule as well. This also helps explain the choice of the functional form of the Borjesson-Sundberg approximation. Any choice of $a \in [0,1]$ maintains the asymptotic equivalence as $x \to \infty$ . The parameter $b$ serves as a "continuity correction" near zero. Here is a plot of the $Q$ -function and the two Laplace bounds. C-I. C. Lee has a paper from the early 1990's that does a "correction" for small values of $x$ . See C-I. C. Lee. On Laplace continued fraction for the normal integral . Ann. Inst. Statist. Math. , 44(1):107–120, March 1992. Durrett's Probability: Theory and Examples provides the classical upper and lower bounds on $Q(x)$ on pages 6–7 of the 3rd edition. They're meant for larger values of $x$ (say, $x > 3$ ) and are asymptotically tight. Hopefully this will get you started. If you have a more specific interest, I might be able to point you somewhere.
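If you want to see how a couple of these behave numerically, here is a short check of the Borjesson-Sundberg approximation and the two Laplace bounds against R's pnorm:

x   <- seq(0.1, 8, by = 0.1)
phi <- dnorm(x)
Q   <- pnorm(x, lower.tail = FALSE)
a <- 0.339; b <- 5.51
Q.bs <- phi / ((1 - a) * x + a * sqrt(x^2 + b))  # Borjesson-Sundberg approximation
Q.lo <- x / (x^2 + 1) * phi                      # Laplace lower bound
Q.hi <- phi / x                                  # Laplace upper bound
max(abs(Q.bs - Q) / Q)          # worst relative error over this grid, on the order of 1%
all(Q.lo <= Q & Q <= Q.hi)      # TRUE: the bounds really do bracket Q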
{ "source": [ "https://stats.stackexchange.com/questions/7200", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2860/" ] }
7,207
I understand the formal differences between them, what I want to know is when it is more relevant to use one vs. the other. Do they always provide complementary insight about the performance of a given classification/detection system? When is it reasonable to provide them both, say, in a paper? instead of just one? Are there any alternative (maybe more modern) descriptors that capture the relevant aspects of both ROC and precision recall for a classification system? I am interested in arguments for both binary and multi-class (e.g. as one-vs-all) cases.
The key difference is that ROC curves will be the same no matter what the baseline probability is, but PR curves may be more useful in practice for needle-in-haystack type problems or problems where the "positive" class is more interesting than the negative class. To show this, first let's start with a very nice way to define precision, recall and specificity. Assume you have a "positive" class called 1 and a "negative" class called 0. $\hat{Y}$ is your estimate of the true class label $Y$. Then: $$ \begin{aligned} &\text{Precision} &= P(Y = 1 | \hat{Y} = 1) \\ &\text{Recall} = \text{Sensitivity} &= P(\hat{Y} = 1 | Y = 1) \\ &\text{Specificity} &= P(\hat{Y} = 0 | Y = 0) \end{aligned} $$ The key thing to note is that sensitivity/recall and specificity, which make up the ROC curve, are probabilities conditioned on the true class label . Therefore, they will be the same regardless of what $P(Y = 1)$ is. Precision is a probability conditioned on your estimate of the class label and will thus vary if you try your classifier in different populations with different baseline $P(Y = 1)$. However, it may be more useful in practice if you only care about one population with known background probability and the "positive" class is much more interesting than the "negative" class. (IIRC precision is popular in the document retrieval field, where this is the case.) This is because it directly answers the question, "What is the probability that this is a real hit given my classifier says it is?". Interestingly, by Bayes' theorem you can work out cases where specificity can be very high and precision very low simultaneously. All you have to do is assume $P(Y = 1)$ is very close to zero. In practice I've developed several classifiers with this performance characteristic when searching for needles in DNA sequence haystacks. IMHO when writing a paper you should provide whichever curve answers the question you want answered (or whichever one is more favorable to your method, if you're cynical). If your question is: "How meaningful is a positive result from my classifier given the baseline probabilities of my problem ?", use a PR curve. If your question is, "How well can this classifier be expected to perform in general, at a variety of different baseline probabilities ?", go with a ROC curve.
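A tiny numeric illustration of the Bayes'-theorem point (the numbers are invented): a classifier with 99% sensitivity and 99% specificity still has dismal precision when the positive class occurs at a rate of one in ten thousand.

prevalence  <- 1e-4
sensitivity <- 0.99
specificity <- 0.99
precision <- sensitivity * prevalence /
  (sensitivity * prevalence + (1 - specificity) * (1 - prevalence))
precision   # about 0.0098: fewer than 1% of the flagged cases are real hits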
{ "source": [ "https://stats.stackexchange.com/questions/7207", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2798/" ] }
7,208
I am using Cohen's Kappa to calculate the inter-agreement between two judges. It is calculated as: $ \frac{P(A) - P(E)}{1 - P(E)} $ where $P(A)$ is the proportion of agreement and $P(E)$ the probability of agreement by chance. Now for the following dataset, I get the expected results: User A judgements: - 1, true - 2, false User B judgements: - 1, false - 2, false Proportion agreed: 0.5 Agreement by chance: 0.625 Kappa for User A and B: -0.3333333333333333 We can see that both judges have not agreed very well. However in the following case where both judges evaluate one criterion, kappa evaluates to zero: User A judgements: - 1, false User B judgements: - 1, false Proportion agreed: 1.0 Agreement by chance: 1.0 Kappa for User A and B: 0 Now I can see that the agreement by chance is obviously 1, which leads to kappa being zero, but does this count as a reliable result? The problem is that I normally don't have more than two judgements per criterion, so none of these will ever evaluate to a kappa greater than 0, which I think is not very representative. Am I right with my calculations? Can I use a different method to calculate inter-agreement? Here we can see that kappa works fine for multiple judgements: User A judgements: - 1, false - 2, true - 3, false - 4, false - 5, true User B judgements: - 1, true - 2, true - 3, false - 4, true - 5, false Proportion agreed: 0.4 Agreement by chance: 0.5 Kappa for User A and B: -0.19999999999999996
{ "source": [ "https://stats.stackexchange.com/questions/7208", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/1205/" ] }
7,224
I'm working on a little project involving the faces of twitter users via their profile pictures. A problem I've encountered is that after I filter out all but the images that are clear portrait photos, a small but significant percentage of twitter users use a picture of Justin Bieber as their profile picture. In order to filter them out, how can I tell programmatically whether a picture is that of Justin Bieber?
A better idea might be to trash all images that appear in the feed of more than one user - no recognition needed.
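A minimal R sketch of that idea, assuming the pictures sit in a local folder (the folder name is hypothetical, and exact hashing only catches byte-identical copies, not resized or re-encoded ones):

# Keep only profile pictures whose file hash occurs exactly once across users
files <- list.files("profile_pics", full.names = TRUE)
hashes <- tools::md5sum(files)
counts <- table(hashes)
keep <- files[hashes %in% names(counts)[counts == 1]]
length(keep)   # images unique to a single user survive the filter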
{ "source": [ "https://stats.stackexchange.com/questions/7224", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/21466/" ] }
7,261
My question relates mostly around the practical differences between General Linear Modeling (GLM) and Generalized Linear Modelling (GZLM). In my case it would be a few continuous variables as covariates and a few factors in an ANCOVA, versus GZLM. I want to examine the main effects of each variable, as well as one three-way interaction that I will outline in the model. I can see this hypothesis being tested in an ANCOVA, or using GZLM. To some extent I understand the math processes and reasoning behind running a General Linear Model like an ANCOVA, and I somewhat understand that GZLMs allow for a link function connecting the linear model and the dependent variable (ok, I lied, maybe I don't really understand the math). What I really don't understand are the practical differences or reasons for running one analysis and not the other when the probability distribution used in the GZLM is normal (i.e., identity link function?). I get very different results when I run one over the other. Could I run either? My data is somewhat non-normal, but works to some extent both in the ANCOVA and the GZLM. In both cases my hypothesis is supported, but in the GZLM the p value is "better". My thought was that an ANCOVA is a linear model with a normally distributed dependent variable using an identity link function, which is exactly what I can input in a GZLM, but these are still different. Please shed some light on these questions for me, if you can! Based on the first answer I have the additional question: If they are identical except for the significance test that it utilized (i.e., F test vs. Wald Chi Square), which would be most appropriate to use? ANCOVA is the "go-to method", but I am unsure why the F test would be preferable. Can someone shed some light on this question for me?
A generalized linear model specifying an identity link function and a normal family distribution is exactly equivalent to a (general) linear model. If you're getting noticeably different results from each, you're doing something wrong. Note that specifying an identity link is not the same thing as specifying a normal distribution. The distribution and the link function are two different components of the generalized linear model, and each can be chosen independently of the other (although certain links work better with certain distributions, so most software packages specify the choice of links allowed for each distribution). Some software packages may report noticeably different $p$-values when the residual degrees of freedom are small if they calculate these using the asymptotic normal and chi-square distributions for all generalized linear models. All software will report $p$-values based on Student's $t$- and Fisher's $F$-distributions for general linear models, as these are more accurate for small residual degrees of freedom because they do not rely on asymptotics. Student's $t$- and Fisher's $F$-distributions are strictly valid for the normal family only, although some other software for generalized linear models may also use these as approximations when fitting other families with a scale parameter that is estimated from the data.
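You can check the equivalence yourself in R with a couple of lines; this is just a sketch on simulated data:

set.seed(1)
d <- data.frame(x = rnorm(100), g = gl(2, 50))
d$y <- 1 + 2 * d$x + rnorm(100)
fit.lm  <- lm(y ~ x + g, data = d)
fit.glm <- glm(y ~ x + g, data = d, family = gaussian(link = "identity"))
all.equal(coef(fit.lm), coef(fit.glm))   # identical point estimates
summary(fit.lm)$coefficients             # t-based p-values
summary(fit.glm)$coefficients            # same t-based p-values for the gaussian family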
{ "source": [ "https://stats.stackexchange.com/questions/7261", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/3262/" ] }
7,307
Can somebody clearly explain to me the mathematical logic that would link two statements (a) and (b) together? Let us have a set of values (some distribution). Now, a) Median does not depend on every value [it just depends on one or two middle values]; b) Median is the locus of minimal sum-of-absolute-deviations from it. And likewise, and in contrast, a) (Arithmetic) mean depends on every value; b) Mean is the locus of minimal sum-of-squared-deviations from it. My grasp of it is intuitive so far.
This is two questions: one about how the mean and median minimize loss functions and another about the sensitivities of these estimates to the data. The two questions are connected, as we will see. Minimizing Loss A summary (or estimator) of the center of a batch of numbers can be created by letting the summary value change and imagining that each number in the batch exerts a restoring force on that value. When the force never pushes the value away from a number, then arguably any point at which the forces balance is a "center" of the batch. Quadratic ( $L_2$ ) Loss For instance, if we were to attach a classical spring (following Hooke's Law ) between the summary and each number, the force would be proportional to the distance to each spring. The springs would pull the summary this way and that, eventually settling to a unique stable location of minimal energy. I would like to draw notice to a little sleight-of-hand that just occurred: the energy is proportional to the sum of squared distances. Newtonian mechanics teaches us that force is the rate of change of energy. Achieving an equilibrium--minimizing the energy--results in balancing the forces. The net rate of change in the energy is zero. Let's call this the " $L_2$ summary," or "squared loss summary." Absolute ( $L_1$ ) Loss Another summary can be created by supposing the sizes of the restoring forces are constant , regardless of the distances between the value and the data. The forces themselves are not constant, however, because they must always pull the value towards each data point. Thus, when the value is less than the data point the force is directed positively, but when the value is greater than the data point the force is directed negatively. Now the energy is proportional to the distances between the value and the data. There typically will be an entire region in which the energy is constant and the net force is zero. Any value in this region we might call the " $L_1$ summary" or "absolute loss summary." These physical analogies provide useful intuition about the two summaries. For instance, what happens to the summary if we move one of the data points? In the $L_2$ case with springs attached, moving one data point either stretches or relaxes its spring. The result is a change in force on the summary, so it must change in response. But in the $L_1$ case, most of the time a change in a data point does nothing to the summary, because the force is locally constant. The only way the force can change is for the data point to move across the summary. (In fact, it should be evident that the net force on a value is given by the number of points greater than it--which pull it upwards--minus the number of points less than it--which pull it downwards. Thus, the $L_1$ summary must occur at any location where the number of data values exceeding it exactly equals the number of data values less than it.) Depicting Losses Since both forces and energies add up, in either case we can decompose the net energy into individual contributions from the data points. By graphing the energy or force as a function of the summary value, this provides a detailed picture of what is happening. The summary will be a location at which the energy (or "loss" in statistical parlance) is smallest. Equivalently, it will be a location at which forces balance: the center of the data occurs where the net change in loss is zero. This figure shows energies and forces for a small dataset of six values (marked by faint vertical lines in each plot). 
The dashed black curves are the totals of the colored curves showing the contributions from the individual values. The x-axis indicates possible values of the summary. The arithmetic mean is a point where squared loss is minimized: it will be located at the vertex (bottom) of the black parabola in the upper left plot. It is always unique. The median is a point where absolute loss is minimized. As noted above, it must occur in the middle of the data. It is not necessarily unique. It will be located at the bottom of the broken black curve in the upper right. (The bottom actually consists of a short flat section between $-0.23$ and $-0.17$ ; any value in this interval is a median.) Analyzing Sensitivity Earlier I described what can happen to the summary when a data point is varied. It is instructive to plot how the summary changes in response to changing any single data point. (These plots are essentially the empirical influence functions . They differ from the usual definition in that they show the actual values of the estimates rather than how much those values are changed.) The value of the summary is labeled by "Estimate" on the y-axes to remind us that this summary is estimating where the middle of the dataset lies. The new (changed) values of each data point are shown on their x-axes. This figure presents the results of varying each of the data values in the batch $-1.02, -0.82, -0.23, -0.17, -0.08, 0.77$ (the same one analyzed in the first figure). There is one plot for each data value, which is highlighted on its plot with a long black tick along the bottom axis. (The remaining data values are shown with short gray ticks.) The blue curve traces the $L_2$ summary--the arithmetic mean--and the red curve traces the $L_1$ summary--the median. (Since often the median is a range of values, the convention of plotting the middle of that range is followed here.) Notice: The sensitivity of the mean is unbounded: those blue lines extend infinitely far up and down. The sensitivity of the median is bounded: there are upper and lower limits to the red curves. Where the median does change, though, it changes much more rapidly than the mean. The slope of each blue line is $1/6$ (generally it is $1/n$ for a dataset with $n$ values), whereas the slopes of the tilted parts of the red lines are all $1/2$ . The mean is sensitive to every data point and this sensitivity has no bounds (as the nonzero slopes of all the colored lines in the bottom left plot of the first figure indicate). Although the median is sensitive to every data point, the sensitivity is bounded (which is why the colored curves in the bottom right plot of the first figure are located within a narrow vertical range around zero). These, of course, are merely visual reiterations of the basic force (loss) law: quadratic for the mean, linear for the median. The interval over which the median can be made to change can vary among the data points. It is always bounded by two of the near-middle values among the data which are not varying . (These boundaries are marked by faint vertical dashed lines.) Because the rate of change of the median is always $1/2$ , the amount by which it might vary therefore is determined by the length of this gap between near-middle values of the dataset. Although only the first point is commonly noted, all the points are important. In particular, It is definitely false that the "median does not depend on every value." This figure provides a counterexample. 
Nevertheless, the median does not depend "materially" on every value in the sense that although changing individual values can change the median, the amount of change is limited by the gaps among near-middle values in the dataset. In particular, the amount of change is bounded . We say that the median is a "resistant" summary. Although the mean is not resistant , and will change whenever any data value is changed, the rate of change is relatively small. The larger the dataset, the smaller the rate of change. Equivalently, in order to produce a material change in the mean of a large dataset, at least one value must undergo a relatively large variation. This suggests the non-resistance of the mean is of concern only for (a) small datasets or (b) datasets where one or more data might have values extremely far from the middle of the batch. These remarks--which I hope the figures make evident--reveal a deep connection between the loss function and the sensitivity (or resistance) of the estimator. For more about this, begin with one of the Wikipedia articles on M-estimators and then pursue those ideas as far as you like. Code This R code produced the figures and can readily be modified to study any other dataset in the same way: simply replace the randomly-created vector y with any vector of numbers. # # Create a small dataset. # set.seed(17) y <- sort(rnorm(6)) # Some data # # Study how a statistic varies when the first element of a dataset # is modified. # statistic.vary <- function(t, x, statistic) { sapply(t, function(e) statistic(c(e, x[-1]))) } # # Prepare for plotting. # darken <- function(c, x=0.8) { apply(col2rgb(c)/255 * x, 2, function(s) rgb(s[1], s[2], s[3])) } colors <- darken(c("Blue", "Red")) statistics <- c(mean, median); names(statistics) <- c("mean", "median") x.limits <- range(y) + c(-1, 1) y.limits <- range(sapply(statistics, function(f) statistic.vary(x.limits + c(-1,1), c(0,y), f))) # # Make the plots. # par(mfrow=c(2,3)) for (i in 1:length(y)) { # # Create a standard, consistent plot region. # plot(x.limits, y.limits, type="n", xlab=paste("Value of y[", i, "]", sep=""), ylab="Estimate", main=paste("Sensitivity to y[", i, "]", sep="")) #legend("topleft", legend=names(statistics), col=colors, lwd=1) # # Mark the limits of the possible medians. # n <- length(y)/2 bars <- sort(y[-1])[ceiling(n-1):floor(n+1)] abline(v=range(bars), lty=2, col="Gray") rug(y, col="Gray", ticksize=0.05); # # Show which value is being varied. # rug(y[1], col="Black", ticksize=0.075, lwd=2) # # Plot the statistics as the value is varied between x.limits. # invisible(mapply(function(f,c) curve(statistic.vary(x, y, f), col=c, lwd=2, add=TRUE, n=501), statistics, colors)) y <- c(y[-1], y[1]) # Move the next data value to the front } #------------------------------------------------------------------------------# # # Study loss functions. # loss <- function(x, y, f) sapply(x, function(t) sum(f(y-t))) square <- function(t) t^2 square.d <- function(t) 2*t abs.d <- sign losses <- c(square, abs, square.d, abs.d) names(losses) <- c("Squared Loss", "Absolute Loss", "Change in Squared Loss", "Change in Absolute Loss") loss.types <- c(rep("Loss (energy)", 2), rep("Change in loss (force)", 2)) # # Prepare for plotting. # colors <- darken(rainbow(length(y))) x.limits <- range(y) + c(-1, 1)/2 # # Make the plots. # par(mfrow=c(2,2)) for (j in 1:length(losses)) { f <- losses[[j]] y.range <- range(c(0, 1.1*loss(y, y, f))) # # Plot the loss (or its rate of change). 
# curve(loss(x, y, f), from=min(x.limits), to=max(x.limits), n=1001, lty=3, ylim=y.range, xlab="Value", ylab=loss.types[j], main=names(losses)[j]) # # Draw the x-axis if needed. # if (sign(prod(y.range))==-1) abline(h=0, col="Gray") # # Faintly mark the data values. # abline(v=y, col="#00000010") # # Plot contributions to the loss (or its rate of change). # for (i in 1:length(y)) { curve(loss(x, y[i], f), add=TRUE, lty=1, col=colors[i], n=1001) } rug(y, side=3) }
{ "source": [ "https://stats.stackexchange.com/questions/7307", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/3277/" ] }
7,348
I find R can take a long time to generate plots when millions of points are present - unsurprising given that points are plotted individually. Furthermore, such plots are often too cluttered and dense to be useful. Many of the points overlap and form a black mass and a lot of time is spent plotting more points into that mass. Are there any statistical alternatives to representing large $n$ data in a standard scatterplot? I have considered a density plot, but what other alternatives are there?
Look at the hexbin package, which implements a paper/method by Dan Carr. The pdf vignette has more details, which I quote below: 1 Overview Hexagon binning is a form of bivariate histogram useful for visualizing the structure in datasets with large n. The underlying concept of hexagon binning is extremely simple; the xy plane over the set (range(x), range(y)) is tessellated by a regular grid of hexagons. The number of points falling in each hexagon is counted and stored in a data structure. The hexagons with count > 0 are plotted using a color ramp or by varying the radius of the hexagon in proportion to the counts. The underlying algorithm is extremely fast and effective for displaying the structure of datasets with $n \ge 10^6$. If the size of the grid and the cuts in the color ramp are chosen in a clever fashion then the structure inherent in the data should emerge in the binned plots. The same caveats apply to hexagon binning as apply to histograms, and care should be exercised in choosing the binning parameters.
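A minimal usage sketch in R (variable names arbitrary):

library(hexbin)
set.seed(1)
x <- rnorm(1e6)
y <- x + rnorm(1e6)
bin <- hexbin(x, y, xbins = 60)   # bin a million points into hexagon counts
plot(bin)                         # counts rendered with a grey/colour ramp
# ggplot2 users can get much the same with geom_hex(bins = 60)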
{ "source": [ "https://stats.stackexchange.com/questions/7348", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2726/" ] }
7,376
Inter-market analysis is a method of modeling market behavior by means of finding relationships between different markets. Oftentimes, a correlation is computed between two markets, say the S&P 500 and 30-Year US treasuries. These computations are more often than not based on price data, which obviously does not fit the definition of a stationary time series. Possible solutions aside (using returns instead), is the computation of correlation whose data is non-stationary even a valid statistical calculation? Would you say that such a correlation calculation is somewhat unreliable, or just plain nonsense?
The correlation measures linear relationship. In informal context relationship means something stable. When we calculate the sample correlation for stationary variables and increase the number of available data points this sample correlation tends to true correlation. It can be shown that for prices, which usually are random walks, the sample correlation tends to random variable. This means that no matter how much data we have, the result will always be different. Note I tried expressing mathematical intuition without the mathematics. From mathematical point of view the explanation is very clear: Sample moments of stationary processes converge in probability to constants. Sample moments of random walks converge to integrals of brownian motion which are random variables. Since relationship is usually expressed as a number and not a random variable, the reason for not calculating the correlation for non-stationary variables becomes evident. Update Since we are interested in correlation between two variables assume first that they come from stationary process $Z_t=(X_t,Y_t)$. Stationarity implies that $EZ_t$ and $cov(Z_t,Z_{t-h})$ do not depend on $t$. So correlation $$corr(X_t,Y_t)=\frac{cov(X_t,Y_t)}{\sqrt{DX_tDY_t}}$$ also does not depend on $t$, since all the quantities in the formula come from matrix $cov(Z_t)$, which does not depend on $t$. So the calculation of sample correlation $$\hat{\rho}=\frac{\frac{1}{T}\sum_{t=1}^T(X_t-\bar{X})(Y_t-\bar{Y})}{\sqrt{\frac{1}{T^2}\sum_{t=1}^T(X_t-\bar{X})^2\sum_{t=1}^T(Y_t-\bar{Y})^2}}$$ makes sense, since we may have reasonable hope that sample correlation will estimate $\rho=corr(X_t,Y_t)$. It turns out that this hope is not unfounded, since for stationary processes satisfying certain conditions we have that $\hat{\rho}\to\rho$, as $T\to\infty$ in probability. Furthermore $\sqrt{T}(\hat{\rho}-\rho)\to N(0,\sigma_{\rho}^2)$ in distribution, so we can test the hypotheses about $\rho$. Now suppose that $Z_t$ is not stationary. Then $corr(X_t,Y_t)$ may depend on $t$. So when we observe a sample of size $T$ we potentialy need to estimate $T$ different correlations $\rho_t$. This is of course infeasible, so in best case scenario we only can estimate some functional of $\rho_t$ such as mean or variance. But the result may not have sensible interpretation. Now let us examine what happens with correlation of probably most studied non-stationary process random walk. We call process $Z_t=(X_t,Y_t)$ a random walk if $Z_t=\sum_{s=1}^t(U_t,V_t)$, where $C_t=(U_t,V_t)$ is a stationary process. For simplicity assume that $EC_t=0$. Then \begin{align} corr(X_tY_t)=\frac{EX_tY_t}{\sqrt{DX_tDY_t}}=\frac{E\sum_{s=1}^tU_t\sum_{s=1}^tV_t}{\sqrt{D\sum_{s=1}^tU_tD\sum_{s=1}^tV_t}} \end{align} To simplify matters further, assume that $C_t=(U_t,V_t)$ is a white noise. This means that all correlations $E(C_tC_{t+h})$ are zero for $h>0$. Note that this does not restrict $corr(U_t,V_t)$ to zero. Then \begin{align} corr(X_t,Y_t)=\frac{tEU_tV_t}{\sqrt{t^2DU_tDV_t}}=corr(U_0,V_0). \end{align} So far so good, though the process is not stationary, correlation makes sense, although we had to make same restrictive assumptions. 
Now to see what happens to sample correlation we will need to use the following fact about random walks, called functional central limit theorem: \begin{align} \frac{1}{\sqrt{T}}Z_{[Ts]}=\frac{1}{\sqrt{T}}\sum_{t=1}^{[Ts]}C_t\to (cov(C_0))^{-1/2}W_s, \end{align} in distribution, where $s\in[0,1]$ and $W_s=(W_{1s},W_{2s})$ is bivariate Brownian motion (two-dimensional Wiener process). For convenience introduce definition $M_s=(M_{1s},M_{2s})=(cov(C_0))^{-1/2}W_s$. Again for simplicity let us define sample correlation as \begin{align} \hat{\rho}=\frac{\frac{1}{T}\sum_{t=1}^TX_{t}Y_t}{\sqrt{\frac{1}{T}\sum_{t=1}^TX_t^2\frac{1}{T}\sum_{t=1}^TY_t^2}} \end{align} Let us start with the variances. We have \begin{align} E\frac{1}{T}\sum_{t=1}^TX_t^2=\frac{1}{T}E\sum_{t=1}^T\left(\sum_{s=1}^tU_t\right)^2=\frac{1}{T}\sum_{t=1}^Tt\sigma_U^2=\sigma_U\frac{T+1}{2}. \end{align} This goes to infinity as $T$ increases, so we hit the first problem, sample variance does not converge. On the other hand continuous mapping theorem in conjunction with functional central limit theorem gives us \begin{align} \frac{1}{T^2}\sum_{t=1}^TX_t^2=\sum_{t=1}^T\frac{1}{T}\left(\frac{1}{\sqrt{T}}\sum_{s=1}^tU_t\right)^2\to \int_0^1M_{1s}^2ds \end{align} where convergence is convergence in distribution, as $T\to \infty$. Similarly we get \begin{align} \frac{1}{T^2}\sum_{t=1}^TY_t^2\to \int_0^1M_{2s}^2ds \end{align} and \begin{align} \frac{1}{T^2}\sum_{t=1}^TX_tY_t\to \int_0^1M_{1s}M_{2s}ds \end{align} So finally for sample correlation of our random walk we get \begin{align} \hat{\rho}\to \frac{\int_0^1M_{1s}M_{2s}ds}{\sqrt{\int_0^1M_{1s}^2ds\int_0^1M_{2s}^2ds}} \end{align} in distribution as $T\to \infty$. So although correlation is well defined, sample correlation does not converge towards it, as in stationary process case. Instead it converges to a certain random variable.
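A small simulation sketch in R illustrates the conclusion: the sample correlation of two independent random walks does not settle down as $T$ grows, whereas for their (stationary) increments it does:

set.seed(123)
sample.cor <- function(T) {
  x <- cumsum(rnorm(T))   # two independent random walks
  y <- cumsum(rnorm(T))
  c(walks = cor(x, y), increments = cor(diff(x), diff(y)))
}
# spread of the sample correlation over 500 replications, for increasing T
t(sapply(c(100, 1000, 10000), function(T) {
  r <- replicate(500, sample.cor(T))
  c(T = T, sd.walks = sd(r["walks", ]), sd.increments = sd(r["increments", ]))
}))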
{ "source": [ "https://stats.stackexchange.com/questions/7376", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/3306/" ] }
7,400
Given two histograms, how do we assess whether they are similar or not? Is it sufficient to simply look at the two histograms? The simple one to one mapping has the problem that if a histogram is slightly different and slightly shifted then we'll not get the desired result. Any suggestions?
There are plenty of distance measures between two histograms. You can read a good categorization of these measures in: K. Meshgi, and S. Ishii, “Expanding Histogram of Colors with Gridding to Improve Tracking Accuracy,” in Proc. of MVA’15, Tokyo, Japan, May 2015. The most popular distance functions are listed here for your convenience: $L_0$ or Hellinger Distance $D_{L0} = \sum\limits_{i} h_1(i) \neq h_2(i) $ $L_1$ , Manhattan, or City Block Distance $D_{L1} = \sum_{i}\lvert h_1(i) - h_2(i) \rvert $ $L=2$ or Euclidean Distance $D_{L2} = \sqrt{\sum_{i}\left( h_1(i) - h_2(i) \right) ^2 }$ L $_{\infty}$ or Chybyshev Distance $D_{L\infty} = \max_{i}\lvert h_1(i) - h_2(i) \rvert $ L $_p$ or Fractional Distance (part of Minkowski distance family) $D_{Lp} = \left(\sum\limits_{i}\lvert h_1(i) - h_2(i) \rvert ^p \right)^{1/p}$ and $0<p<1$ Histogram Intersection $D_{\cap} = 1 - \frac{\sum_{i} \left(\min(h_1(i),h_2(i) \right)}{\min\left(\vert h_1(i)\vert,\vert h_2(i) \vert \right)}$ Cosine Distance $D_{CO} = 1 - \sum_i h_1(i)h2_(i)$ Canberra Distance $D_{CB} = \sum_i \frac{\lvert h_1(i)-h_2(i) \rvert}{\min\left( \lvert h_1(i)\rvert,\lvert h_2(i)\rvert \right)}$ Pearson's Correlation Coefficient $ D_{CR} = \frac{\sum_i \left(h_1(i)- \frac{1}{n} \right)\left(h_2(i)- \frac{1}{n} \right)}{\sqrt{\sum_i \left(h_1(i)- \frac{1}{n} \right)^2\sum_i \left(h_2(i)- \frac{1}{n} \right)^2}} $ Kolmogorov-Smirnov Divergance $ D_{KS} = \max_{i}\lvert h_1(i) - h_2(i) \rvert $ Match Distance $D_{MA} = \sum\limits_{i}\lvert h_1(i) - h_2(i) \rvert $ Cramer-von Mises Distance $D_{CM} = \sum\limits_{i}\left( h_1(i) - h_2(i) \right)^2$ $\chi^2$ Statistics $D_{\chi^2} = \sum_i \frac{\left(h_1(i) - h_2(i)\right)^2}{h_1(i) + h_2(i)}$ Bhattacharyya Distance $ D_{BH} = \sqrt{1-\sum_i \sqrt{h_1(i)h_2(i)}} $ & hellinger Squared Chord $ D_{SC} = \sum_i\left(\sqrt{h_1(i)}-\sqrt{h_2(i)}\right)^2 $ Kullback-Liebler Divergance $D_{KL} = \sum_i h_1(i)\log\frac{h_1(i)}{m(i)}$ Jefferey Divergence $D_{JD} = \sum_i \left(h_1(i)\log\frac{h_1(i)}{m(i)}+h_2(i)\log\frac{h_2(i)}{m(i)}\right)$ Earth Mover's Distance (this is the first member of Transportation distances that embed binning information $A$ into the distance, for more information please refer to the above mentioned paper or Wikipedia entry. $ D_{EM} = \frac{\min_{f_{ij}}\sum_{i,j}f_{ij}A_{ij}}{sum_{i,j}f_{ij}}$ $ \sum_j f_{ij} \leq h_1(i) , \sum_j f_{ij} \leq h_2(j) , \sum_{i,j} f_{ij} = \min\left( \sum_i h_1(i) \sum_j h_2(j) \right) $ and $f_{ij}$ represents the flow from $i$ to $j$ Quadratic Distance $D_{QU} = \sqrt{\sum_{i,j} A_{ij}\left(h_1(i) - h_2(j)\right)^2}$ Quadratic-Chi Distance $D_{QC} = \sqrt{\sum_{i,j} A_{ij}\left(\frac{h_1(i) - h_2(i)}{\left(\sum_c A_{ci}\left(h_1(c)+h_2(c)\right)\right)^m}\right)\left(\frac{h_1(j) - h_2(j)}{\left(\sum_c A_{cj}\left(h_1(c)+h_2(c)\right)\right)^m}\right)}$ and $\frac{0}{0} \equiv 0$ A Matlab implementation of some of these distances is available from my GitHub repository . Also, you can search for people like Yossi Rubner, Ofir Pele, Marco Cuturi, and Haibin Ling for more state-of-the-art distances. Update: Alternative explanation for the distances appears here and there in the literature, so I list them here for sake of completeness. 
Canberra distance (another version) $D_{CB}=\sum_i \frac{|h_1(i)-h_2(i)|}{|h_1(i)|+|h_2(i)|}$ Bray-Curtis Dissimilarity, Sorensen Distance (since the sum of histograms are equal to one, it equals to $D_{L0}$ ) $D_{BC} = 1 - \frac{2 \sum_i h_1(i) = h_2(i)}{\sum_i h_1(i) + \sum_i h_2(i)}$ Jaccard Distance (i.e. intersection over union, another version) $D_{IOU} = 1 - \frac{\sum_i \min(h_1(i),h_2(i))}{\sum_i \max(h_1(i),h_2(i))}$
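For the most common of these, the computation is a one-liner in R; a sketch on two histograms rescaled to sum to one (the data and bin grid are arbitrary):

set.seed(1)
h1 <- hist(rnorm(1000, 0, 1),   breaks = seq(-8, 8, 0.5), plot = FALSE)$counts
h2 <- hist(rnorm(1000, 0.5, 1), breaks = seq(-8, 8, 0.5), plot = FALSE)$counts
h1 <- h1 / sum(h1); h2 <- h2 / sum(h2)
c(L1            = sum(abs(h1 - h2)),
  intersection  = 1 - sum(pmin(h1, h2)),   # denominator is 1 for normalized histograms
  chi.squared   = sum(ifelse(h1 + h2 > 0, (h1 - h2)^2 / (h1 + h2), 0)),
  bhattacharyya = sqrt(1 - sum(sqrt(h1 * h2))))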
{ "source": [ "https://stats.stackexchange.com/questions/7400", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/3325/" ] }
7,402
I know that a Type II error is where H1 is true, but H0 is not rejected. Question How do I calculate the probability of a Type II error involving a normal distribution, where the standard deviation is known?
In addition to specifying $\alpha$ (probability of a type I error), you need a fully specified hypothesis pair, i.e., $\mu_{0}$ , $\mu_{1}$ and $\sigma$ need to be known. $\beta$ (probability of type II error) is $1 - \textrm{power}$ . I assume a one-sided $H_{1}: \mu_{1} > \mu_{0}$ . In R: > sigma <- 15 # theoretical standard deviation > mu0 <- 100 # expected value under H0 > mu1 <- 130 # expected value under H1 > alpha <- 0.05 # probability of type I error # critical value for a level alpha test > crit <- qnorm(1-alpha, mu0, sigma) # power: probability for values > critical value under H1 > (pow <- pnorm(crit, mu1, sigma, lower.tail=FALSE)) [1] 0.63876 # probability for type II error: 1 - power > (beta <- 1-pow) [1] 0.36124 Edit: visualization xLims <- c(50, 180) left <- seq(xLims[1], crit, length.out=100) right <- seq(crit, xLims[2], length.out=100) yH0r <- dnorm(right, mu0, sigma) yH1l <- dnorm(left, mu1, sigma) yH1r <- dnorm(right, mu1, sigma) curve(dnorm(x, mu0, sigma), xlim=xLims, lwd=2, col="red", xlab="x", ylab="density", main="Normal distribution under H0 and H1", ylim=c(0, 0.03), xaxs="i") curve(dnorm(x, mu1, sigma), lwd=2, col="blue", add=TRUE) polygon(c(right, rev(right)), c(yH0r, numeric(length(right))), border=NA, col=rgb(1, 0.3, 0.3, 0.6)) polygon(c(left, rev(left)), c(yH1l, numeric(length(left))), border=NA, col=rgb(0.3, 0.3, 1, 0.6)) polygon(c(right, rev(right)), c(yH1r, numeric(length(right))), border=NA, density=5, lty=2, lwd=2, angle=45, col="darkgray") abline(v=crit, lty=1, lwd=3, col="red") text(crit+1, 0.03, adj=0, label="critical value") text(mu0-10, 0.025, adj=1, label="distribution under H0") text(mu1+10, 0.025, adj=0, label="distribution under H1") text(crit+8, 0.01, adj=0, label="power", cex=1.3) text(crit-12, 0.004, expression(beta), cex=1.3) text(crit+5, 0.0015, expression(alpha), cex=1.3)
{ "source": [ "https://stats.stackexchange.com/questions/7402", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/-1/" ] }
7,439
You can have data in wide format or in long format. This is quite an important thing, as the useable methods are different, depending on the format. I know you have to work with melt() and cast() from the reshape package, but there seem to be some things that I don't get. Can someone give me a short overview of how you do this?
There are several resources on Hadley Wickham's website for the package (now called reshape2 ), including a link to a paper on the package in the Journal of Statistical Software. Here is a brief example from the paper: > require(reshape2) Loading required package: reshape2 > data(smiths) > smiths subject time age weight height 1 John Smith 1 33 90 1.87 2 Mary Smith 1 NA NA 1.54 We note that the data are in the wide form. To go to the long form, we make the smiths data frame molten : > melt(smiths) Using subject as id variables subject variable value 1 John Smith time 1.00 2 Mary Smith time 1.00 3 John Smith age 33.00 4 Mary Smith age NA 5 John Smith weight 90.00 6 Mary Smith weight NA 7 John Smith height 1.87 8 Mary Smith height 1.54 Notice how melt() chose one of the variables as the id, but we can state explicitly which to use via argument 'id' : > melt(smiths, id = "subject") subject variable value 1 John Smith time 1.00 2 Mary Smith time 1.00 3 John Smith age 33.00 4 Mary Smith age NA 5 John Smith weight 90.00 6 Mary Smith weight NA 7 John Smith height 1.87 8 Mary Smith height 1.54 Here is another example from ?cast : #Air quality example names(airquality) <- tolower(names(airquality)) aqm <- melt(airquality, id=c("month", "day"), na.rm=TRUE) If we store the molten data frame, we can cast into other forms. In the new version of reshape (called reshape2 ) there are functions acast() and dcast() returning an array-like (array, matrix, vector) result or a data frame respectively. These functions also take an aggregating function (eg mean() ) to provide summaries of data in molten form. For example, following on from the Air Quality example above, we can generate, in wide form, monthly mean values for the variables in the data set: > dcast(aqm, month ~ variable, mean) month ozone solar.r wind temp 1 5 23.61538 181.2963 11.622581 65.54839 2 6 29.44444 190.1667 10.266667 79.10000 3 7 59.11538 216.4839 8.941935 83.90323 4 8 59.96154 171.8571 8.793548 83.96774 5 9 31.44828 167.4333 10.180000 76.90000 There are really only two main functions in reshape2 : melt() and the acast() and dcast() pairing. Look at the examples in the help pages for these two functions, see Hadley's website (link above) and look at the paper I mentioned. That should get you started. You might also look into Hadley's plyr package which does similar things to reshape2 but is designed to do a whole lot more besides.
{ "source": [ "https://stats.stackexchange.com/questions/7439", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/3140/" ] }
7,440
I need to determine the KL-divergence between two Gaussians. I am comparing my results to these , but I can't reproduce their result. My result is obviously wrong, because the KL is not 0 for KL(p, p). I wonder where I am doing a mistake and ask if anyone can spot it. Let $p(x) = N(\mu_1, \sigma_1)$ and $q(x) = N(\mu_2, \sigma_2)$ . From Bishop's PRML I know that $$KL(p, q) = - \int p(x) \log q(x) dx + \int p(x) \log p(x) dx$$ where integration is done over all real line, and that $$\int p(x) \log p(x) dx = -\frac{1}{2} (1 + \log 2 \pi \sigma_1^2),$$ so I restrict myself to $\int p(x) \log q(x) dx$ , which I can write out as $$-\int p(x) \log \frac{1}{(2 \pi \sigma_2^2)^{(1/2)}} e^{-\frac{(x-\mu_2)^2}{2 \sigma_2^2}} dx,$$ which can be separated into $$\frac{1}{2} \log (2 \pi \sigma_2^2) - \int p(x) \log e^{-\frac{(x-\mu_2)^2}{2 \sigma_2^2}} dx.$$ Taking the log I get $$\frac{1}{2} \log (2 \pi \sigma_2^2) - \int p(x) \bigg(-\frac{(x-\mu_2)^2}{2 \sigma_2^2} \bigg) dx,$$ where I separate the sums and get $\sigma_2^2$ out of the integral. $$\frac{1}{2} \log (2 \pi \sigma^2_2) + \frac{\int p(x) x^2 dx - \int p(x) 2x\mu_2 dx + \int p(x) \mu_2^2 dx}{2 \sigma_2^2}$$ Letting $\langle \rangle$ denote the expectation operator under $p$ , I can rewrite this as $$\frac{1}{2} \log (2 \pi \sigma_2^2) + \frac{\langle x^2 \rangle - 2 \langle x \rangle \mu_2 + \mu_2^2}{2 \sigma_2^2}.$$ We know that $var(x) = \langle x^2 \rangle - \langle x \rangle ^2$ . Thus $$\langle x^2 \rangle = \sigma_1^2 + \mu_1^2$$ and therefore $$\frac{1}{2} \log (2 \pi \sigma_2^2) + \frac{\sigma_1^2 + \mu_1^2 - 2 \mu_1 \mu_2 + \mu_2^2}{2 \sigma_2^2},$$ which I can put as $$\frac{1}{2} \log (2 \pi \sigma_2^2) + \frac{\sigma_1^2 + (\mu_1 - \mu_2)^2}{2 \sigma_2^2}.$$ Putting everything together, I get to \begin{align*} KL(p, q) &= - \int p(x) \log q(x) dx + \int p(x) \log p(x) dx\\\\ &= \frac{1}{2} \log (2 \pi \sigma_2^2) + \frac{\sigma_1^2 + (\mu_1 - \mu_2)^2}{2 \sigma_2^2} - \frac{1}{2} (1 + \log 2 \pi \sigma_1^2)\\\\ &= \log \frac{\sigma_2}{\sigma_1} + \frac{\sigma_1^2 + (\mu_1 - \mu_2)^2}{2 \sigma_2^2}. \end{align*} Which is wrong since it equals $1$ for two identical Gaussians. Can anyone spot my error? Update Thanks to mpiktas for clearing things up. The correct answer is: $KL(p, q) = \log \frac{\sigma_2}{\sigma_1} + \frac{\sigma_1^2 + (\mu_1 - \mu_2)^2}{2 \sigma_2^2} - \frac{1}{2}$
OK, my bad. The error is in the last equation: \begin{align} KL(p, q) &= - \int p(x) \log q(x) dx + \int p(x) \log p(x) dx\\\\ &=\frac{1}{2} \log (2 \pi \sigma_2^2) + \frac{\sigma_1^2 + (\mu_1 - \mu_2)^2}{2 \sigma_2^2} - \frac{1}{2} (1 + \log 2 \pi \sigma_1^2)\\\\ &= \log \frac{\sigma_2}{\sigma_1} + \frac{\sigma_1^2 + (\mu_1 - \mu_2)^2}{2 \sigma_2^2} - \frac{1}{2} \end{align} Note the missing $-\frac{1}{2}$ . The last line becomes zero when $\mu_1=\mu_2$ and $\sigma_1=\sigma_2$ .
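A quick numerical sanity check of the corrected formula in R (parameter values arbitrary; the integral is evaluated with integrate):

kl.gauss <- function(mu1, s1, mu2, s2)
  log(s2 / s1) + (s1^2 + (mu1 - mu2)^2) / (2 * s2^2) - 0.5
mu1 <- 1; s1 <- 2; mu2 <- 0; s2 <- 1
f <- function(x) dnorm(x, mu1, s1) *
  (dnorm(x, mu1, s1, log = TRUE) - dnorm(x, mu2, s2, log = TRUE))
c(closed.form = kl.gauss(mu1, s1, mu2, s2),
  numerical   = integrate(f, -Inf, Inf)$value,
  kl.p.p      = kl.gauss(mu1, s1, mu1, s1))   # 0 for identical Gaussians, as it should be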
{ "source": [ "https://stats.stackexchange.com/questions/7440", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2860/" ] }
7,467
Background I am overseeing the input of data from primary literature into a database. The data entry process is error prone, particularly because users must interpret experimental design, extract data from graphics and tables, and transform results to standardized units. Data are input into a MySQL database through a web interface. Over 10k data points from > 20 variables, > 100 species, and > 500 citations have been included so far. I need to run checks of the quality of not only the variable data, but also the data contained in lookup tables, such as the species associated with each data point, the location of the study, etc. Data entry is ongoing, so QA/QC will need to be run intermittently. The data have not yet been publicly released, but we are planning to release them in the next few months. Currently, my QA/QC involves three steps: a second user checks each data point. visually inspect a histogram of each variable for outliers. users report questionable data after spurious results are obtained. Questions Are there guidelines that I can use for developing a robust QA/QC procedure for this database? The first step is the most time consuming; is there anything that I can do to make this more efficient?
This response focuses on the second question, but in the process a partial answer to the first question (guidelines for a QA/QC procedure) will emerge. By far the best thing you can do is check data quality at the time entry is attempted. The user checks and reports are labor-intensive and so should be reserved for later in the process, as late as is practicable. Here are some principles, guidelines, and suggestions, derived from extensive experience (with the design and creation of many databases comparable to and much larger than yours). They are not rules; you do not have to follow them to be successful and efficient; but they are all here for excellent reasons and you should think hard about deviating from them. Separate data entry from all intellectually demanding activities . Do not ask data entry operators simultaneously to check anything, count anything, etc. Restrict their work to creating a computer-readable facsimile of the data, nothing more. In particular, this principle implies the data-entry forms should reflect the format in which you originally obtain the data, not the format in which you plan to store the data. It is relatively easy to transform one format to another later, but it's an error-prone process to attempt the transformation on the fly while entering data. Create a data audit trail : whenever anything is done to the data, starting at the data entry stage, document this and record the procedure in a way that makes it easy to go back and check what went wrong (because things will go wrong). Consider filling out fields for time stamps, identifiers of data entry operators, identifiers of sources for the original data (such as reports and their page numbers), etc. Storage is cheap, but the time to track down an error is expensive. Automate everything. Assume any step will have to be redone (at the worst possible time, according to Murphy's Law), and plan accordingly. Don't try to save time now by doing a few "simple steps" by hand. In particular, create support for data entry : make a front end for each table (even a spreadsheet can do nicely) that provides a clear, simple, uniform way to get data in. At the same time the front end should enforce your "business rules:" that is, it should perform as many simple validity checks as it can. (E.g., pH must be between 0 and 14; counts must be positive.) Ideally, use a DBMS to enforce relational integrity checks (e.g., every species associated with a measurement really exists in the database). Constantly count things and check that counts exactly agree. E.g., if a study is supposed to measure attributes of 10 species, make sure (as soon as data entry is complete) that 10 species really are reported. Although checking counts is simple and uninformative, it's great at detecting duplicated and omitted data. If the data are valuable and important, consider independently double-entering the entire dataset . This means that each item will be entered at separate times by two different non-interacting people. This is a great way to catch typos, missing data, and so on. The cross-checking can be completely automated. This is faster, better at catching errors, and more efficient than 100% manual double checking. (The data entry "people" can include devices such as scanners with OCR.) Use a DBMS to store and manage the data. Spreadsheets are great for supporting data entry, but get your data out of the spreadsheets or text files and into a real database as soon as possible. 
This prevents all kinds of insidious errors while adding lots of support for automatic data integrity checks. If you must, use your statistical software for data storage and management, but seriously consider using a dedicated DBMS: it will do a better job. After all data are entered and automatically checked, draw pictures : make sorted tables, histograms, scatterplots, etc., and look at them all. These are easily automated with any full-fledged statistical package. Do not ask people to do repetitive tasks that the computer can do . The computer is much faster and more reliable at these. Get into the habit of writing (and documenting) little scripts and small programs to do any task that cannot be completed immediately. These will become part of your audit trail and they will enable work to be redone easily. Use whatever platform you're comfortable with and that is suitable to the task. (Over the years, depending on what was available, I have used a wide range of such platforms and all have been effective in their way, ranging from C and Fortran programs through AWK and SED scripts, VBA scripts for Excel and Word, and custom programs written for relational database systems, GIS, and statistical analysis platforms like R and Stata.) If you follow most of these guidelines, approximately 50%-80% of the work in getting data into the database will be database design and writing the supporting scripts. It is not unusual to get 90% through such a project and be less than 50% complete, yet still finish on time: once everything is set up and has been tested, the data entry and checking can be amazingly efficient.
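For instance, the double-entry cross-check can be automated in a few lines of R (file and column layouts here are placeholders for whatever your project uses):

# List every cell where two independently keyed versions of a table disagree
entry1 <- read.csv("entry_operator_A.csv", stringsAsFactors = FALSE)
entry2 <- read.csv("entry_operator_B.csv", stringsAsFactors = FALSE)
stopifnot(identical(dim(entry1), dim(entry2)), identical(names(entry1), names(entry2)))
bad <- which((entry1 != entry2) | xor(is.na(entry1), is.na(entry2)), arr.ind = TRUE)
data.frame(row    = bad[, 1],
           field  = names(entry1)[bad[, 2]],
           first  = as.matrix(entry1)[bad],
           second = as.matrix(entry2)[bad])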
{ "source": [ "https://stats.stackexchange.com/questions/7467", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/1381/" ] }
7,515
What are some techniques for sampling two correlated random variables: if their probability distributions are parameterized (e.g., log-normal) if they have non-parametric distributions. The data are two time series for which we can compute non-zero correlation coefficients. We wish to simulate these data in the future, assuming the historical correlation and time series CDF is constant. For case (2), the 1-D analogue would be to construct the CDF and sample from it. So I guess, I could construct a 2-D CDF and do the same thing. However, I wonder if there is a way to come close by using the individual 1-D CDFs and somehow linking the picks. Thanks!
I think what you're looking for is a copula. You've got two marginal distributions (specified by either parametric or empirical cdfs) and now you want to specify the dependence between the two. For the bivariate case there are all kinds of choices, but the basic recipe is the same. I'll use a Gaussian copula for ease of interpretation. To draw from the Gaussian copula with correlation matrix $C$: Draw $Z=(Z_1, Z_2)\sim N(0, C)$ Set $U_i = \Phi(Z_i)$ for $i=1, 2$ (with $\Phi$ the standard normal cdf). Now $U_1, U_2\sim U[0,1]$, but they're dependent. Set $Y_i = F_i^{-1}(U_i)$ where $F_i^{-1}$ is the (pseudo) inverse of the marginal cdf for variable $i$. This implies that $Y_i$ follow the desired distribution (this step is just inverse transform sampling). Voila! Try it for some simple cases, and look at marginal histograms and scatterplots, it's fun. No guarantee that this is appropriate for your particular application though (in particular, you might need to replace the Gaussian copula with a t copula) but this should get you started. A good reference on copula modeling is Nelsen (1999), An Introduction to Copulas , but there are some pretty good introductions online too.
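The recipe translates almost line for line into R; here is a sketch with log-normal margins (swap in empirical quantile functions for the non-parametric case):

library(MASS)
set.seed(42)
rho <- 0.7
C <- matrix(c(1, rho, rho, 1), 2, 2)
Z <- mvrnorm(10000, mu = c(0, 0), Sigma = C)    # step 1: correlated standard normals
U <- pnorm(Z)                                   # step 2: dependent uniforms
Y1 <- qlnorm(U[, 1], meanlog = 0, sdlog = 0.5)  # step 3: invert the target margins
Y2 <- qlnorm(U[, 2], meanlog = 1, sdlog = 0.3)
# non-parametric margin: quantile(historical.series, probs = U[, 2], type = 8)
cor(Y1, Y2, method = "spearman")                # the rank dependence carries through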
{ "source": [ "https://stats.stackexchange.com/questions/7515", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2260/" ] }
7,757
I am trying to predict the outcome of a complex system using neural networks (ANN's). The outcome (dependent) values range between 0 and 10,000. The different input variables have different ranges. All the variables have roughly normal distributions. I consider different options to scale the data before training. One option is to scale the input (independent) and output (dependent) variables to [0, 1] by computing the cumulative distribution function using the mean and standard deviation values of each variable, independently. The problem with this method is that if I use the sigmoid activation function at the output, I will very likely miss extreme data, especially those not seen in the training set. Another option is to use a z-score. In that case I don't have the extreme data problem; however, I'm limited to a linear activation function at the output. What are other accepted normalization techniques that are in use with ANN's? I tried to look for reviews on this topic, but failed to find anything useful.
A standard approach is to scale the inputs to have mean 0 and a variance of 1. Also linear decorrelation/whitening/pca helps a lot. If you are interested in the tricks of the trade, I can recommend LeCun's efficient backprop paper.
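In R terms, a minimal sketch of both steps, assuming your inputs are in a numeric matrix X:

X <- matrix(rnorm(1000 * 5, mean = 10, sd = 3), ncol = 5)   # stand-in for your inputs
Xs <- scale(X)                        # mean 0, variance 1 per column
pca <- prcomp(Xs)
Xw <- pca$x %*% diag(1 / pca$sdev)    # decorrelated and whitened scores
round(colMeans(Xw), 3)                # ~ 0
round(cov(Xw), 3)                     # ~ identity matrix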
{ "source": [ "https://stats.stackexchange.com/questions/7757", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/1496/" ] }
7,815
Many statistical jobs ask for experience with large scale data. What are the sorts of statistical and computational skills that would be need for working with large data sets. For example, how about building regression models given a data set with 10 million samples?
Good answers have already appeared. I will therefore just share some thoughts based on personal experience: adapt the relevant ones to your own situation as needed. For background and context --so you can account for any personal biases that might creep in to this message--much of my work has been in helping people make important decisions based on relatively small datasets. They are small because the data can be expensive to collect (10K dollars for the first sample of a groundwater monitoring well, for instance, or several thousand dollars for analyses of unusual chemicals). I'm used to getting as much as possible out of any data that are available, to exploring them to death, and to inventing new methods to analyze them if necessary. However, in the last few years I have been engaged to work on some fairly large databases, such as one of socioeconomic and engineering data covering the entire US at the Census block level (8.5 million records, 300 fields) and various large GIS databases (which nowadays can run from gigabytes to hundreds of gigabytes in size). With very large datasets one's entire approach and mindset change . There are now too much data to analyze. Some of the immediate (and, in retrospect) obvious implications (with emphasis on regression modeling) include Any analysis you think about doing can take a lot of time and computation. You will need to develop methods of subsampling and working on partial datasets so you can plan your workflow when computing with the entire dataset. (Subsampling can be complicated, because you need a representative subset of the data that is as rich as the entire dataset. And don't forget about cross-validating your models with the held-out data.) Because of this, you will spend more time documenting what you do and scripting everything (so that it can be repeated). As @dsimcha has just noted, good programming skills are useful. Actually, you don't need much in the way of experience with programming environments, but you need a willingness to program, the ability to recognize when programming will help (at just about every step, really) and a good understanding of basic elements of computer science, such as design of appropriate data structures and how to analyze computational complexity of algorithms. That's useful for knowing in advance whether code you plan to write will scale up to the full dataset. Some datasets are large because they have many variables (thousands or tens of thousands, all of them different). Expect to spend a great deal of time just summarizing and understanding the data . A codebook or data dictionary , and other forms of metadata , become essential. Much of your time is spent simply moving data around and reformatting them. You need skills with processing large databases and skills with summarizing and graphing large amounts of data. ( Tufte's Small Multiple comes to the fore here.) Some of your favorite software tools will fail. Forget spreadsheets, for instance. A lot of open source and academic software will just not be up to handling large datasets: the processing will take forever or the software will crash. Expect this and make sure you have multiple ways to accomplish your key tasks. Almost any statistical test you run will be so powerful that it's almost sure to identify a "significant" effect. You have to focus much more on statistical importance , such as effect size, rather than significance. 
Similarly, model selection is troublesome because almost any variable and any interaction you might contemplate is going to look significant. You have to focus more on the meaningfulness of the variables you choose to analyze. There will be more than enough information to identify appropriate nonlinear transformations of the variables. Know how to do this. You will have enough data to detect nonlinear relationships, changes in trends, nonstationarity, heteroscedasticity , etc. You will never be finished . There are so much data you could study them forever. It's important, therefore, to establish your analytical objectives at the outset and constantly keep them in mind. I'll end with a short anecdote which illustrates one unexpected difference between regression modeling with a large dataset compared to a smaller one. At the end of that project with the Census data, a regression model I had developed needed to be implemented in the client's computing system, which meant writing SQL code in a relational database. This is a routine step but the code generated by the database programmers involved thousands of lines of SQL. This made it almost impossible to guarantee it was bug free--although we could detect the bugs (it gave different results on test data), finding them was another matter. (All you need is one typographical error in a coefficient...) Part of the solution was to write a program that generated the SQL commands directly from the model estimates . This assured that what came out of the statistics package was exactly what went into the RDBMS. As a bonus, a few hours spent on writing this script replaced possibly several weeks of SQL coding and testing. This is a small part of what it means for the statistician to be able to communicate their results.
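As a toy illustration of that last point, generating a SQL scoring expression directly from the fitted object takes only a few lines in R (the table and column names are hypothetical):

fit <- lm(mpg ~ wt + hp, data = mtcars)   # stand-in for the real model
b <- coef(fit)
expr <- paste(c(sprintf("%.10g", b[1]),
                sprintf("%.10g * %s", b[-1], names(b)[-1])),
              collapse = " + ")
cat(sprintf("SELECT id, %s AS prediction FROM scoring_table;\n", expr))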
{ "source": [ "https://stats.stackexchange.com/questions/7815", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/3026/" ] }
7,853
I understand two-tailed hypothesis testing. You have $H_0 : \theta = \theta_0$ (vs. $H_1 = \neg H_0 : \theta \ne \theta_0$). The $p$-value is the probability that $\theta$ generates data at least as extreme as what was observed. I don't understand one-tailed hypothesis testing. Here, $H_0 : \theta\le\theta_0$ (vs. $H_1 = \neg H_0 : \theta > \theta_0$). The definition of p-value shouldn't have changed from above: it should still be the probability that $\theta$ generates data at least as extreme as what was observed. But we don't know $\theta$, only that it's upper-bounded by $\theta_0$. So instead, I see texts telling us to assume that $\theta = \theta_0$ (not $\theta \le \theta_0$ as per $H_0$) and calculate the probability that this generates data at least as extreme as what was observed, but only on one end. This seems to have nothing to do with the hypotheses, technically. Now, I understand that this is frequentist hypothesis testing, and that frequentists place no priors on their $\theta$s. But shouldn't that just mean the hypotheses are then impossible to accept or reject, rather than shoehorning the above calculation into the picture?
That's a thoughtful question. Many texts (perhaps for pedagogical reasons) paper over this issue. What's really going on is that $H_0$ is a composite "hypothesis" in your one-sided situation: it's actually a set of hypotheses, not a single one. It is necessary that for every possible hypothesis in $H_0$, the chance of the test statistic falling in the critical region must be less than or equal to the test size. Moreover, if the test is actually to achieve its nominal size (which is a good thing for achieving high power), then the supremum of these chances (taken over all the null hypotheses) should equal the nominal size. In practice, for simple one-parameter tests of location involving certain "nice" families of distributions, this supremum is attained for the hypothesis with parameter $\theta_0$. Thus, as a practical matter, all computation focuses on this one distribution. But we mustn't forget about the rest of the set $H_0$: that is a crucial distinction between two-sided and one-sided tests (and between "simple" and "composite" tests in general). This subtly influences the interpretation of results of one-sided tests. When the null is rejected, we can say the evidence points against the true state of nature being any of the distributions in $H_0$. When the null is not rejected, we can only say there exists a distribution in $H_0$ which is "consistent" with the observed data. We are not saying that all distributions in $H_0$ are consistent with the data: far from it! Many of them may yield extremely low likelihoods.
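You can see the "supremum over the null set" point numerically; a sketch for a one-sided $z$-test of $H_0: \theta \le 0$ with known $\sigma = 1$, $n = 25$, $\alpha = 0.05$:

alpha <- 0.05; n <- 25; sigma <- 1
crit <- qnorm(1 - alpha)                 # reject when sqrt(n) * xbar / sigma > crit
theta <- seq(-0.5, 0, by = 0.1)          # parameters inside the composite null
reject.prob <- pnorm(crit - sqrt(n) * theta / sigma, lower.tail = FALSE)
round(cbind(theta, reject.prob), 4)      # increases with theta and hits alpha exactly at theta = 0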
{ "source": [ "https://stats.stackexchange.com/questions/7853", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/1720/" ] }
7,860
Is it possible to visualize the output of Principal Component Analysis in ways that give more insight than just summary tables? Is it possible to do it when the number of observations is large, say ~1e4? And is it possible to do it in R [other environments welcome]?
The biplot is a useful tool for visualizing the results of PCA. It allows you to visualize the principal component scores and directions simultaneously. With 10,000 observations you’ll probably run into a problem with over-plotting. Alpha blending could help there. Here is a PC biplot of the wine data from the UCI ML repository : The points correspond to the PC1 and PC2 scores of each observation. The arrows represent the correlation of the variables with PC1 and PC2. The white circle indicates the theoretical maximum extent of the arrows. The ellipses are 68% data ellipses for each of the 3 wine varieties in the data. I have made the code for generating this plot available here .
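If you want to reproduce something similar on your own data, here is a minimal sketch in base R (using the built-in iris data as a stand-in, since the wine data are not shipped with R):
pc <- prcomp(iris[, 1:4], scale. = TRUE)   # PCA on standardized variables
biplot(pc, cex = 0.6)                      # scores and variable arrows in one display
# with ~1e4 observations, plot the scores yourself and use alpha blending
plot(pc$x[, 1:2], pch = 16, col = rgb(0, 0, 0, 0.2), xlab = "PC1", ylab = "PC2")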
{ "source": [ "https://stats.stackexchange.com/questions/7860", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/30/" ] }
7,899
I need to draw a complex graphic for visual data analysis. I have 2 variables and a big number of cases (>1000). For example (the sample size is 100 here to make the dispersion less "normal"): x <- rnorm(100,mean=95,sd=50) y <- rnorm(100,mean=35,sd=20) d <- data.frame(x=x,y=y) 1) I need to plot the raw data with point size corresponding to the relative frequency of coincidences, so plot(x,y) is not an option - I need point sizes. What should be done to achieve this? 2) On the same plot I need to plot a 95% confidence interval ellipse and a line representing the change of correlation (I do not know how to name it correctly) - something like this: library(corrgram) corrgram(d, order=TRUE, lower.panel=panel.ellipse, upper.panel=panel.pts) but with both graphs in one plot. 3) Finally, I need to draw a resulting linear regression model on top of this all: r<-lm(y~x, data=d) abline(r,col=2,lwd=2) but with error range... something like on a QQ-plot: but for fitting errors, if it is possible. So the question is: How to achieve all of this in one graph?
Does the picture below look like what you want to achieve? Here's the updated R code, following your comments: do.it <- function(df, type="confidence", ...) { require(ellipse) lm0 <- lm(y ~ x, data=df) xc <- with(df, xyTable(x, y)) df.new <- data.frame(x=seq(min(df$x), max(df$x), 0.1)) pred.ulb <- predict(lm0, df.new, interval=type) pred.lo <- predict(loess(y ~ x, data=df), df.new) plot(xc$x, xc$y, cex=xc$number*2/3, xlab="x", ylab="y", ...) abline(lm0, col="red") lines(df.new$x, pred.lo, col="green", lwd=1.5) lines(df.new$x, pred.ulb[,"lwr"], lty=2, col="red") lines(df.new$x, pred.ulb[,"upr"], lty=2, col="red") lines(ellipse(cor(df$x, df$y), scale=c(sd(df$x),sd(df$y)), centre=c(mean(df$x),mean(df$y))), lwd=1.5, col="green") invisible(lm0) } set.seed(101) n <- 1000 x <- rnorm(n, mean=2) y <- 1.5 + 0.4*x + rnorm(n) df <- data.frame(x=x, y=y) # take a bootstrap sample df <- df[sample(nrow(df), nrow(df), rep=TRUE),] do.it(df, pch=19, col=rgb(0,0,.7,.5)) And here is the ggplotized version produced with the following piece of code: xc <- with(df, xyTable(x, y)) df2 <- cbind.data.frame(x=xc$x, y=xc$y, n=xc$number) df.ell <- as.data.frame(with(df, ellipse(cor(x, y), scale=c(sd(x),sd(y)), centre=c(mean(x),mean(y))))) library(ggplot2) ggplot(data=df2, aes(x=x, y=y)) + geom_point(aes(size=n), alpha=.6) + stat_smooth(data=df, method="loess", se=FALSE, color="green") + stat_smooth(data=df, method="lm") + geom_path(data=df.ell, colour="green", size=1.2) It could be customized a little bit more by adding model fit indices, like Cook's distance, with a color shading effect.
{ "source": [ "https://stats.stackexchange.com/questions/7899", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/3376/" ] }
7,935
From what I know, using lasso for variable selection handles the problem of correlated inputs. Also, since it is equivalent to Least Angle Regression, it is not slow computationally. However, many people (for example people I know doing bio-statistics) still seem to favour stepwise or stagewise variable selection. Are there any practical disadvantages of using the lasso that makes it unfavourable?
There is NO reason to do stepwise selection. It's just wrong. LASSO/LAR are the best automatic methods. But they are automatic methods. They let the analyst not think. In many analyses, some variables should be in the model REGARDLESS of any measure of significance. Sometimes they are necessary control variables. Other times, finding a small effect can be substantively important.
{ "source": [ "https://stats.stackexchange.com/questions/7935", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2973/" ] }
7,948
I am running linear regression models and wondering what the conditions are for removing the intercept term. In comparing results from two different regressions where one has the intercept and the other does not, I notice that the $R^2$ of the function without the intercept is much higher. Are there certain conditions or assumptions I should be following to make sure the removal of the intercept term is valid?
The shortest answer: never, unless you are sure that your linear approximation of the data generating process (the linear regression model), for theoretical or any other reasons, is forced to go through the origin. If it is not, the other regression parameters will be biased even if the intercept is statistically insignificant (strange but true; consult Brooks' Introductory Econometrics, for instance). Finally, as I often explain to my students, by keeping the intercept term you ensure that the residual term is zero-mean. For your two-models case we need more context. It may happen that a linear model is not suitable here. For example, you need to log-transform first if the model is multiplicative. With exponentially growing processes it may occasionally happen that $R^2$ for the model without the intercept is "much" higher. Screen the data, test the model with the RESET test or any other linear specification test; this may help to see whether my guess is true. And when building models, the highest $R^2$ is one of the last statistical properties I really care about, but it is nice to present to people who are not so familiar with econometrics (there are many dirty tricks to make the coefficient of determination close to 1 :)).
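A small simulation (my own toy example, not tied to your data) makes the bias visible: the true slope is 2 and the true intercept is 5; forcing the fit through the origin pulls the slope estimate well away from 2, even though the no-intercept fit reports a very high $R^2$.
set.seed(1)
x <- runif(200, 1, 10)
y <- 5 + 2 * x + rnorm(200)       # true intercept 5, true slope 2
coef(lm(y ~ x))                   # slope estimate close to 2
coef(lm(y ~ x - 1))               # intercept suppressed: slope biased upward (around 2.7)
summary(lm(y ~ x - 1))$r.squared  # and the uncentered R^2 is misleadingly close to 1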
{ "source": [ "https://stats.stackexchange.com/questions/7948", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/1422/" ] }
7,975
Having worked mostly with cross-sectional data so far, and very recently browsing, scanning and stumbling through a bunch of introductory time series literature, I wonder what role explanatory variables play in time series analysis. I would like to explain a trend instead of de-trending. Most of what I read as an introduction assumes that the series stems from some stochastic process. I read about AR(p) and MA processes as well as ARIMA modelling. Wanting to deal with more information than only autoregressive processes, I found VAR / VECM and ran some examples, but still I wonder if there is some case that is more closely related to what explanatory variables do in cross sections. The motivation behind this is that decomposition of my series shows that the trend is the major contributor, while the remainder and the seasonal effect hardly play a role. I would like to explain this trend. Can / should I regress my series on multiple different series? Intuitively I would use gls because of serial correlation (I am not so sure about the correlation structure). I have heard about spurious regression and understand that this is a pitfall; nevertheless I am looking for a way to explain a trend. Is this completely wrong or uncommon? Or have I just missed the right chapter so far?
Based upon the comments that you've offered to the responses, you need to be aware of spurious causation . Any variable with a time trend is going to be correlated with another variable that also has a time trend. For example, my weight from birth to age 27 is going to be highly correlated with your weight from birth to age 27. Obviously, my weight isn't caused by your weight. If it was, I'd ask that you go to the gym more frequently, please. As you are familiar with cross-section data, I'll give you an omitted variables explanation. Let my weight be $x_t$ and your weight be $y_t$, where $$\begin{align*}x_t &= \alpha_0 + \alpha_1 t + \epsilon_t \text{ and} \\ y_t &= \beta_0 + \beta_1 t + \eta_t.\end{align*}$$ Then the regression $$\begin{equation*}y_t = \gamma_0 + \gamma_1 x_t + \nu_t\end{equation*}$$ has an omitted variable---the time trend---that is correlated with the included variable, $x_t$. Hence, the coefficient $\gamma_1$ will be biased (in this case, it will be positive, as our weights grow over time). When you are performing time series analysis, you need to be sure that your variables are stationary or you'll get these spurious causation results. An exception would be integrated series, but I'd refer you to time series texts to hear more about that.
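Here is a quick R illustration of that omitted-trend problem (my own toy example): two series that share nothing but a deterministic time trend. Regressing one on the other alone gives a highly "significant" slope; once the trend is included, the effect vanishes.
set.seed(1)
t <- 1:200
x <- 1 + 0.5 * t + rnorm(200, sd = 5)   # "my weight": pure trend plus noise
y <- 2 + 0.3 * t + rnorm(200, sd = 5)   # "your weight": unrelated to x, same kind of trend
summary(lm(y ~ x))$coefficients         # x looks strongly significant (spurious)
summary(lm(y ~ x + t))$coefficients     # controlling for the trend, x no longer does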
{ "source": [ "https://stats.stackexchange.com/questions/7975", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/704/" ] }
7,977
I am wondering how to generate uniformly distributed points on the surface of the 3-d unit sphere? Also after generating those points, what is the best way to visualize and check whether they are truly uniform on the surface $x^2+y^2+z^2=1$?
A standard method is to generate three standard normals and construct a unit vector from them. That is, when $X_i \sim N(0,1)$ and $\lambda^2 = X_1^2 + X_2^2 + X_3^2$, then $(X_1/\lambda, X_2/\lambda, X_3/\lambda)$ is uniformly distributed on the sphere. This method works well for $d$-dimensional spheres, too. In 3D you can use rejection sampling: draw $X_i$ from a uniform$[-1,1]$ distribution until the length of $(X_1, X_2, X_3)$ is less than or equal to 1, then--just as with the preceding method--normalize the vector to unit length. The expected number of trials per spherical point equals $2^3/(4 \pi / 3)$ = 1.91. In higher dimensions the expected number of trials gets so large this rapidly becomes impracticable. There are many ways to check uniformity . A neat way, although somewhat computationally intensive, is with Ripley's K function . The expected number of points within (3D Euclidean) distance $\rho$ of any location on the sphere is proportional to the area of the sphere within distance $\rho$, which equals $\pi\rho^2$. By computing all interpoint distances you can compare the data to this ideal. General principles of constructing statistical graphics suggest a good way to make the comparison is to plot variance-stabilized residuals $e_i(d_{[i]} - e_i)$ against $i = 1, 2, \ldots, n(n-1)/2=m$ where $d_{[i]}$ is the $i^\text{th}$ smallest of the mutual distances and $e_i = 2\sqrt{i/m}$. The plot should be close to zero. (This approach is unconventional.) Here is a picture of 100 independent draws from a uniform spherical distribution obtained with the first method: Here is the diagnostic plot of the distances: The y scale suggests these values are all close to zero. Here is the accumulation of 100 such plots to suggest what size deviations might actually be significant indicators of non-uniformity: (These plots look an awful lot like Brownian bridges ...there may be some interesting theoretical discoveries lurking here.) Finally, here is the diagnostic plot for a set of 100 uniform random points plus another 41 points uniformly distributed in the upper hemisphere only: Relative to the uniform distribution, it shows a significant decrease in average interpoint distances out to a range of one hemisphere. That in itself is meaningless, but the useful information here is that something is non-uniform on the scale of one hemisphere. In effect, this plot readily detects that one hemisphere has a different density than the other. (A simpler chi-square test would do this with more power if you knew in advance which hemisphere to test out of the infinitely many possible ones.)
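In R, the first (normalization) method is only a couple of lines; this sketch is mine, not part of the original answer:
set.seed(1)
n <- 100
x <- matrix(rnorm(3 * n), ncol = 3)   # three standard normals per point
s <- x / sqrt(rowSums(x^2))           # divide each row by its length
range(rowSums(s^2))                   # every point lies on the unit sphere (up to rounding)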
{ "source": [ "https://stats.stackexchange.com/questions/7977", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/3552/" ] }
8,000
Recurrent neural networks differ from "regular" ones by the fact that they have a "memory" layer. Due to this layer, recurrent NNs are supposed to be useful in time series modelling. However, I'm not sure I understand correctly how to use them. Let's say I have the following time series (from left to right): [0, 1, 2, 3, 4, 5, 6, 7], my goal is to predict the i-th point using points i-1 and i-2 as an input (for each i>2). In a "regular", non-recurrent ANN, I would process the data as follows:
target | input
     2 | 1 0
     3 | 2 1
     4 | 3 2
     5 | 4 3
     6 | 5 4
     7 | 6 5
I would then create a net with two input nodes and one output node and train it with the data above. How does one need to alter this process (if at all) in the case of recurrent networks?
What you describe is in fact a "sliding time window" approach and is different from recurrent networks. You can use this technique with any regression algorithm. There is a huge limitation to this approach: events in the inputs can only be correlated with other inputs/outputs which lie at most t timesteps apart, where t is the size of the window. E.g. you can think of a Markov chain of order t. RNNs don't suffer from this in theory, however in practice learning is difficult. It is best to illustrate an RNN in contrast to a feedforward network. Consider the (very) simple feedforward network $y = Wx$ where $y$ is the output, $W$ is the weight matrix, and $x$ is the input. Now, we use a recurrent network. Now we have a sequence of inputs, so we will denote the inputs by $x^{i}$ for the ith input. The corresponding ith output is then calculated via $y^{i} = Wx^i + W_ry^{i-1}$. Thus, we have another weight matrix $W_r$ which incorporates the output at the previous step linearly into the current output. This is of course a simple architecture. Most common is an architecture where you have a hidden layer which is recurrently connected to itself. Let $h^i$ denote the hidden layer at timestep i. The formulas are then: $$h^0 = 0$$ $$h^i = \sigma(W_1x^i + W_rh^{i-1})$$ $$y^i = W_2h^i$$ Where $\sigma$ is a suitable non-linearity/transfer function like the sigmoid. $W_1$ and $W_2$ are the connecting weights between the input and the hidden and the hidden and the output layer. $W_r$ represents the recurrent weights. Here is a diagram of the structure:
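To see the recursion in code, here is a minimal forward pass in R with toy sizes and random (untrained) weights; it is only meant to mirror the three formulas above, not to be a usable RNN implementation.
set.seed(1)
n_in <- 2; n_hid <- 3; n_out <- 1
W1 <- matrix(rnorm(n_hid * n_in),  n_hid, n_in)    # input  -> hidden
Wr <- matrix(rnorm(n_hid * n_hid), n_hid, n_hid)   # hidden -> hidden (recurrent)
W2 <- matrix(rnorm(n_out * n_hid), n_out, n_hid)   # hidden -> output
sigma <- function(a) 1 / (1 + exp(-a))             # logistic transfer function

xs <- list(c(1, 0), c(2, 1), c(3, 2))              # a short input sequence (lagged pairs)
h  <- rep(0, n_hid)                                # h^0 = 0
y  <- numeric(length(xs))
for (i in seq_along(xs)) {
  h    <- sigma(W1 %*% xs[[i]] + Wr %*% h)         # h^i = sigma(W1 x^i + Wr h^(i-1))
  y[i] <- W2 %*% h                                 # y^i = W2 h^i
}
y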
{ "source": [ "https://stats.stackexchange.com/questions/8000", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/1496/" ] }
8,071
How do I know when to choose between Spearman's $\rho$ and Pearson's $r$? My variable includes satisfaction and the scores were interpreted using the sum of the scores. However, these scores could also be ranked.
If you want to explore your data it is best to compute both, since the relation between the Spearman (S) and Pearson (P) correlations will give some information. Briefly, S is computed on ranks and so depicts monotonic relationships while P is on true values and depicts linear relationships. As an example, if you set: x=(1:100); y=exp(x); % then, corr(x,y,'type','Spearman'); % will equal 1, and corr(x,y,'type','Pearson'); % will be about equal to 0.25 This is because $y$ increases monotonically with $x$ so the Spearman correlation is perfect, but not linearly, so the Pearson correlation is imperfect. corr(x,log(y),'type','Pearson'); % will equal 1 Doing both is interesting because if you have S > P, that means that you have a correlation that is monotonic but not linear. Since it is good to have linearity in statistics (it is easier) you can try to apply a transformation on $y$ (such a log). I hope this helps to make the differences between the types of correlations easier to understand.
{ "source": [ "https://stats.stackexchange.com/questions/8071", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/-1/" ] }
8,104
Background One of the most commonly used weak prior on variance is the inverse-gamma with parameters $\alpha =0.001, \beta=0.001$ (Gelman 2006) . However, this distribution has a 90%CI of approximately $[3\times10^{19},\infty]$. library(pscl) sapply(c(0.05, 0.95), function(x) qigamma(x, 0.001, 0.001)) [1] 3.362941e+19 Inf From this, I interpret that the $IG(0.001, 0.001)$ gives a low probability that variance will be very high, and the very low probability that variance will be less than 1 $P(\sigma<1|\alpha=0.001, \beta=0.001)=0.006$. pigamma(1, 0.001, 0.001) [1] 0.006312353 Question Am I missing something or is this actually an informative prior? update to clarify, the reason that I was considering this 'informative' is because it claims very strongly that the variance is enormous and well beyond the scale of almost any variance ever measured. follow-up would a meta-analysis of a large number of variance estimates provide a more reasonable prior? Reference Gelman 2006. Prior distributions for variance parameters in hierarchical models . Bayesian Analysis 1(3):515–533
Using the inverse gamma distribution, we get: $$p(\sigma^2|\alpha,\beta) \propto (\sigma^2)^{-\alpha-1} \exp\left(-\frac{\beta}{\sigma^2}\right)$$ You can see easily that if $\beta \rightarrow 0$ and $\alpha \rightarrow 0$ then the inverse gamma will approach the Jeffreys prior. This distribution is called "uninformative" because it is a proper approximation to the Jeffreys prior $$p(\sigma^2) \propto \frac{1}{\sigma^2},$$ which is uninformative for scale parameters (see page 18 here for example), because this prior is the only one which remains invariant under a change of scale (note that the approximation is not invariant). The Jeffreys prior has an indefinite integral of $\log(\sigma^2)$, which shows that it is improper if the range of $\sigma^2$ includes either $0$ or $\infty$. But these cases are only problems in the maths, not in the real world: you never actually observe an infinite value for the variance, and if the observed variance is zero, you have perfect data! For you can set a lower limit equal to $L>0$ and an upper limit equal to $U<\infty$, and your distribution is proper. While it may seem strange that this prior is "uninformative" in that it prefers small variances to large ones, this is only on one scale. You can show that $\log(\sigma^2)$ has an improper uniform distribution, so this prior does not favor any one scale over any other. Although not directly related to your question, I would suggest a "better" non-informative distribution by choosing the upper and lower limits $L$ and $U$ in the Jeffreys prior rather than $\alpha$ and $\beta$. Usually the limits can be set fairly easily with a bit of thought about what $\sigma^2$ actually means in the real world. If it is the error in some kind of physical quantity, $L$ cannot be smaller than the size of an atom, or the smallest size you can observe in your experiment. Further, $U$ could not be bigger than the earth (or the sun if you wanted to be really conservative). This way you keep your invariance properties, and it is an easier prior to sample from: take $q_{(b)} \sim \mathrm{Uniform}(\log(L),\log(U))$, and then the simulated value is $\sigma^{2}_{(b)}=\exp(q_{(b)})$.
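In R, sampling from that bounded Jeffreys prior is immediate (a sketch; the limits L and U here are arbitrary placeholders you would replace with physically meaningful values):
L <- 1e-6; U <- 1e6                      # user-chosen bounds for sigma^2
q <- runif(10000, log(L), log(U))        # uniform on the log scale
sigma2 <- exp(q)                         # draws from the truncated Jeffreys prior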
{ "source": [ "https://stats.stackexchange.com/questions/8104", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/1381/" ] }
8,106
I am currently reading a paper concerning voting location and voting preference in the 2000 and 2004 election. In it, there is a chart which displays logistic regression coefficients. From courses years back and a little reading up, I understand logistic regression to be a way of describing the relationship between multiple independent variables and a binary response variable. What I'm confused about is, given the table below, because the South has a logistic regression coefficient of .903, does that mean that 90.3% of Southerners vote republican? I assume that, because of the logistic nature of the metric, this direct correlation does not exist. Instead, I assume that you can only say that the South, with .903, votes Republican more than the Mountains/plains, with the coefficient of .506. Given the latter to be the case, how do I know what is significant and what is not, and is it possible to extrapolate a percentage of Republican votes given this logistic regression coefficient? As a side note, please edit my post if anything is stated incorrectly
That the author has forced someone as thoughtful as you to have ask a question like this is compelling illustration of why the practice -- still way too common -- of confining reporting of regression model results to a table like this is so unacceptable. You can, as pointed out, try to transform the logit coefficient into some meaningful indication of the effect being estimated for the predictor in question but that's cumbersome and doesn't convey information about the precision of the prediction, which is usually pretty important in a logistic regression model (on voting in particular). Also, the use of multiple asterisks to report "levels" of significance reinforces the misconception that p-values are some meaningful index of effect size ("wow--that one has 3 asterisks!!"); for crying out loud, w/ N's of 10,000 to 20,000, completely trivial differences will be "significant" at p < .001 blah blah. There is absolutely no need to mystify in this way. The logistic regression model is an equation that can be used (through determinate calculation or better still simulation) to predict probability of an outcome conditional on specified values for predictors, subject to measurement error. So the researcher should report what the impact of predictors of interest are on the probability of the outcome variable of interest, & associated CI, as measured in units the practical importance of which can readily be grasped. To assure ready grasping, the results should be graphically displayed. Here, for example, the researcher could report that being a rural as opposed to an urban voter increases the likelihood of voting Republican, all else equal, by X pct points (I'm guessing around 17 in 2000; "divide by 4" is a reasonable heuristic) +/- x% at 0.95 level of confidence-- if that's something that is useful to know. The reporting of pseudo R^2 is also a sign that the modeler is engaged in statistical ritual rather than any attempt to illuminate. There are scores of ways to compute "pseudo R^2"; one might complain that the one used here is not specified, but why bother? All are next to meaningless. The only reason anyone uses pseudo R^2 is that they or the reviewer who is torturing them learned (likely 25 or more yrs ago) that OLS linear regression is the holy grail of statistics & thinks the only thing one is ever trying to figure out is "variance explained." There are plenty of defensible ways to assess the adequacy of overall model fit for logistic analysis, and likelihood ratio conveys meaningful information for comparing models that reflect alternative hypotheses. King, G. How Not to Lie with Statistics . Am. J. Pol. Sci. 30, 666-687 (1986). If you read a paper in which reporting is more or less confined to a table like this don't be confused, don't be intimidated, & definitely don't be impressed; instead be angry & tell the researcher he or she is doing a lousy job (particularly if he or she is polluting your local intellectual environment w/ mysticism & awe--amazing how many completely mediocre thinkers trick smart people into thinking they know something just b/c they can produce a table that the latter can't understand). For smart, & temperate, expositions of these ideas, see King, G., Tomz, M. & Wittenberg., J. Making the Most of Statistical Analyses: Improving Interpretation and Presentation . Am. J. Pol. Sci. 44, 347-361 (2000); and Gelman, A., Pasarica, C. & Dodhia, R. Let's Practice What We Preach: Turning Tables into Graphs . Am. Stat. 56, 121-130 (2002).
{ "source": [ "https://stats.stackexchange.com/questions/8106", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/3652/" ] }
8,225
I have R data frame like this: age group 1 23.0883 1 2 25.8344 1 3 29.4648 1 4 32.7858 2 5 33.6372 1 6 34.9350 1 7 35.2115 2 8 35.2115 2 9 35.2115 2 10 36.7803 1 ... I need to get data frame in the following form: group mean sd 1 34.5 5.6 2 32.3 4.2 ... Group number may vary, but their names and quantity could be obtained by calling levels(factor(data$group)) What manipulations should be done with the data to get the result?
Here is the plyr one line variant using ddply : dt <- data.frame(age=rchisq(20,10),group=sample(1:2,20,rep=T)) ddply(dt,~group,summarise,mean=mean(age),sd=sd(age)) Here is another one line variant using new package data.table . dtf <- data.frame(age=rchisq(100000,10),group=factor(sample(1:10,100000,rep=T))) dt <- data.table(dtf) dt[,list(mean=mean(age),sd=sd(age)),by=group] This one is faster, though this is noticeable only on table with 100k rows. Timings on my Macbook Pro with 2.53 Ghz Core 2 Duo processor and R 2.11.1: > system.time(aa <- ddply(dtf,~group,summarise,mean=mean(age),sd=sd(age))) utilisateur système écoulé 0.513 0.180 0.692 > system.time(aa <- dt[,list(mean=mean(age),sd=sd(age)),by=group]) utilisateur système écoulé 0.087 0.018 0.103 Further savings are possible if we use setkey : > setkey(dt,group) > system.time(dt[,list(mean=mean(age),sd=sd(age)),by=group]) utilisateur système écoulé 0.040 0.007 0.048
{ "source": [ "https://stats.stackexchange.com/questions/8225", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/3376/" ] }
8,511
Christopher Manning's writeup on logistic regression in R shows a logistic regression in R as follows: ced.logr <- glm(ced.del ~ cat + follows + factor(class), family=binomial) Some output: > summary(ced.logr) Call: glm(formula = ced.del ~ cat + follows + factor(class), family = binomial("logit")) Deviance Residuals: Min 1Q Median 3Q Max -3.24384 -1.34325 0.04954 1.01488 6.40094 Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) -1.31827 0.12221 -10.787 < 2e-16 catd -0.16931 0.10032 -1.688 0.091459 catm 0.17858 0.08952 1.995 0.046053 catn 0.66672 0.09651 6.908 4.91e-12 catv -0.76754 0.21844 -3.514 0.000442 followsP 0.95255 0.07400 12.872 < 2e-16 followsV 0.53408 0.05660 9.436 < 2e-16 factor(class)2 1.27045 0.10320 12.310 < 2e-16 factor(class)3 1.04805 0.10355 10.122 < 2e-16 factor(class)4 1.37425 0.10155 13.532 < 2e-16 (Dispersion parameter for binomial family taken to be 1) Null deviance: 958.66 on 51 degrees of freedom Residual deviance: 198.63 on 42 degrees of freedom AIC: 446.10 Number of Fisher Scoring iterations: 4 He then goes into some detail about how to interpret coefficients, compare different models, and so on. Quite useful. However, how much variance does the model account for? A Stata page on logistic regression says: Technically, $R^2$ cannot be computed the same way in logistic regression as it is in OLS regression. The pseudo-$R^2$, in logistic regression, is defined as $1 - \frac{L1}{L0}$, where $L0$ represents the log likelihood for the "constant-only" model and $L1$ is the log likelihood for the full model with constant and predictors. I understand this at the high level. The constant-only model would be without any of the parameters (only the intercept term). Log likelihood is a measure of how closely the parameters fit the data. In fact, Manning sort of hints that the deviance might be $-2 \log L$. Perhaps null deviance is constant-only and residual deviance is $-2 \log L$ of the model? However, I'm not crystal clear on it. Can someone verify how one actually computes the pseudo-$R^2$ in R using this example?
Don't forget the rms package, by Frank Harrell. You'll find everything you need for fitting and validating GLMs. Here is a toy example (with only one predictor): set.seed(101) n <- 200 x <- rnorm(n) a <- 1 b <- -2 p <- exp(a+b*x)/(1+exp(a+b*x)) y <- factor(ifelse(runif(n)<p, 1, 0), levels=0:1) mod1 <- glm(y ~ x, family=binomial) summary(mod1) This yields: Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) 0.8959 0.1969 4.55 5.36e-06 *** x -1.8720 0.2807 -6.67 2.56e-11 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 (Dispersion parameter for binomial family taken to be 1) Null deviance: 258.98 on 199 degrees of freedom Residual deviance: 181.02 on 198 degrees of freedom AIC: 185.02 Now, using the lrm function, require(rms) mod1b <- lrm(y ~ x) You soon get a lot of model fit indices, including Nagelkerke $R^2$, with print(mod1b) : Logistic Regression Model lrm(formula = y ~ x) Model Likelihood Discrimination Rank Discrim. Ratio Test Indexes Indexes Obs 200 LR chi2 77.96 R2 0.445 C 0.852 0 70 d.f. 1 g 2.054 Dxy 0.705 1 130 Pr(> chi2) <0.0001 gr 7.801 gamma 0.705 max |deriv| 2e-08 gp 0.319 tau-a 0.322 Brier 0.150 Coef S.E. Wald Z Pr(>|Z|) Intercept 0.8959 0.1969 4.55 <0.0001 x -1.8720 0.2807 -6.67 <0.0001 Here, $R^2=0.445$ and it is computed as $\left(1-\exp(-\text{LR}/n)\right)/\left(1-\exp(-(-2L_0)/n)\right)$, where LR is the $\chi^2$ stat (comparing the two nested models you described), whereas the denominator is just the max value for $R^2$. For a perfect model, we would expect $\text{LR}=2L_0$, that is $R^2=1$. By hand, > mod0 <- update(mod1, .~.-x) > lr.stat <- lrtest(mod0, mod1) > (1-exp(-as.numeric(lr.stat$stats[1])/n))/(1-exp(2*as.numeric(logLik(mod0)/n))) [1] 0.4445742 > mod1b$stats["R2"] R2 0.4445742 Ewout W. Steyerberg discussed the use of $R^2$ with GLM, in his book Clinical Prediction Models (Springer, 2009, § 4.2.2 pp. 58-60). Basically, the relationship between the LR statistic and Nagelkerke's $R^2$ is approximately linear (it will be more linear with low incidence). Now, as discussed on the earlier thread I linked to in my comment, you can use other measures like the $c$ statistic which is equivalent to the AUC statistic (there's also a nice illustration in the above reference, see Figure 4.6).
{ "source": [ "https://stats.stackexchange.com/questions/8511", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2849/" ] }
8,605
I would like to perform column-wise normalization of a matrix in R. Given a matrix m , I want to normalize each column by dividing each element by the sum of the column. One (hackish) way to do this is as follows: m / t(replicate(nrow(m), colSums(m))) Is there a more succinct/elegant/efficient way to achieve the same task?
This is what sweep and scale are for. sweep(m, 2, colSums(m), FUN="/") scale(m, center=FALSE, scale=colSums(m)) Alternatively, you could use recycling, but you have to transpose it twice. t(t(m)/colSums(m)) Or you could construct the full matrix you want to divide by, like you did in your question. Here's another way you might do that. m/colSums(m)[col(m)] And notice also caracal's addition from the comments: m %*% diag(1/colSums(m))
{ "source": [ "https://stats.stackexchange.com/questions/8605", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/1537/" ] }
8,661
I'm trying to undertake a logistic regression analysis in R. I have attended courses covering this material using STATA. I am finding it very difficult to replicate functionality in R. Is it mature in this area? There seems to be little documentation or guidance available. Producing odds ratio output seems to require installing epicalc and/or epitools and/or others, none of which I can get to work, are outdated or lack documentation. I've used glm to do the logistic regression. Any suggestions would be welcome. I'd better make this a real question. How do I run a logistic regression and produce odds ratios in R? Here's what I've done for a univariate analysis: x = glm(Outcome ~ Age, family=binomial(link="logit")) And for multivariate: y = glm(Outcome ~ Age + B + C, family=binomial(link="logit")) I've then looked at x, y, summary(x) and summary(y). Is x$coefficients of any value?
If you want to interpret the estimated effects as relative odds ratios, just do exp(coef(x)) (this gives you $e^\beta$, the multiplicative change in the odds of $y=1$ if the covariate associated with $\beta$ increases by 1, i.e. the odds ratio). For profile likelihood intervals for this quantity, you can do require(MASS) exp(cbind(coef(x), confint(x))) EDIT: @caracal was quicker...
{ "source": [ "https://stats.stackexchange.com/questions/8661", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2824/" ] }
8,662
I have the sample population of a certain signal's registered amplitude maxima. Population is about 15 million samples. I produced a histogram of the population, but cannot guess the distribution with such a histogram. EDIT1: File with raw sample values is here: raw data Can anyone help estimate the distribution with the following histogram:
Use fitdistrplus: Here's the CRAN link to fitdistrplus. Here's the old vignette link for fitdistrplus. If the vignette link doesn't work, do a search for "Use of the library fitdistrplus to specify a distribution from data". The vignette does a good job of explaining how to use the package. You can look at how various distributions fit in a short period of time. It also produces a Cullen/Frey Diagram. #Example from the vignette library(fitdistrplus) x1 <- c(6.4, 13.3, 4.1, 1.3, 14.1, 10.6, 9.9, 9.6, 15.3, 22.1, 13.4, 13.2, 8.4, 6.3, 8.9, 5.2, 10.9, 14.4) plotdist(x1) descdist(x1) f1g <- fitdist(x1, "gamma") plot(f1g) summary(f1g)
{ "source": [ "https://stats.stackexchange.com/questions/8662", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2820/" ] }
8,817
I am considering using Python libraries for doing my Machine Learning experiments. Thus far, I had been relying on WEKA, but have been pretty dissatisfied on the whole. This is primarily because I have found WEKA to be not so well supported (very few examples, documentation is sparse and community support is less than desirable in my experience), and have found myself in sticky situations with no help forthcoming. Another reason I am contemplating this move is because I am really liking Python (I am new to Python), and don't want to go back to coding in Java. So my question is, what are the more comprehensive scalable (100k features, 10k examples) and well supported libraries for doing ML in Python out there? I am particularly interested in doing text classification, and so would like to use a library that has a good collection of classifiers, feature selection methods (Information Gain, Chi-Sqaured etc.), and text pre-processing capabilities (stemming, stopword removal, tf-idf etc.). Based on the past e-mail threads here and elsewhere, I have been looking at PyML, scikits-learn and Orange so far. How have people's experiences been with respect to the above 3 metrics that I mention? Any other suggestions?
About the scikit-learn option: 100k (sparse) features and 10k samples is reasonably small enough to fit in memory hence perfectly doable with scikit-learn (same size as the 20 newsgroups dataset). Here is a tutorial I gave at PyCon 2011 with a chapter on text classification with exercises and solutions: http://scikit-learn.github.com/scikit-learn-tutorial/ (online HTML version) https://github.com/downloads/scikit-learn/scikit-learn-tutorial/scikit_learn_tutorial.pdf (PDF version) https://github.com/scikit-learn/scikit-learn-tutorial (source code + exercises) I also gave a talk on the topic which is an updated version of the version I gave at PyCon FR. Here are the slides (and the embedded video in the comments): http://www.slideshare.net/ogrisel/statistical-machine-learning-for-text-classification-with-scikitlearn-and-nltk As for feature selection, have a look at this answer on quora where all the examples are based on the scikit-learn documentation: http://www.quora.com/What-are-some-feature-selection-methods/answer/Olivier-Grisel We don't have collocation feature extraction in scikit-learn yet. Use nltk and nltk-trainer to do this in the mean time: https://github.com/japerk/nltk-trainer
{ "source": [ "https://stats.stackexchange.com/questions/8817", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/3301/" ] }
8,974
There seems to be in increasing discussion about pie charts. The main arguments against it seem to be: Area is perceived with less power than length. Pie charts have very low data-point-to-pixel ratio However, I think they can be somehow useful when portraying proportions. I agree to use a table in most cases but when you are writing a business report and you've just included hundreds of tables why not having a pie chart? I'm curious about what the community thinks about this topic. Further references are welcome. I include a couple of links: http://www.juiceanalytics.com/writing/the-problem-with-pie-charts/ http://www.usf.uni-osnabrueck.de/~breiter/tools/piechart/warning.en.html In order to conclude this question I decided to build an example of pie-chart vs waffle-chart.
I wouldn't say there's an increasing interest or debate about the use of pie charts. They are just found everywhere on the web and in so-called "predictive analytic" solutions. I guess you know Tufte's work (he also discussed the use of multiple pie charts ), but more funny is the fact that the second chapter of Wilkinson's Grammar of Graphics starts with "How to make a pie chart?". You're probably also aware that Cleveland's dotplot , or even a barchart, will convey much more precise information. The problem seems to really stem from the way our visual system is able to deal with spatial information. It is even quoted in the R software; from the on-line help for pie , Cleveland (1985), page 264: “Data that can be shown by pie charts always can be shown by a dot chart. This means that judgements of position along a common scale can be made instead of the less accurate angle judgements.” This statement is based on the empirical investigations of Cleveland and McGill as well as investigations by perceptual psychologists. Cleveland, W. S. (1985) The elements of graphing data . Wadsworth: Monterey, CA, USA. There are variations of pie charts (e.g., donut-like charts) that all raise the same problems: We are not good at evaluating angle and area. Even the ones used in "corrgram", as described in Friendly, Corrgrams: Exploratory displays for correlation matrices , American Statistician (2002) 56:316, are hard to read, IMHO. At some point, however, I wondered whether they might still be useful, for example (1) displaying two classes is fine but increasing the number of categories generally worsen the reading (especially with strong imbalance between %), (2) relative judgments are better than absolute ones, that is displaying two pie charts side by side should favor a better appreciation of the results than a simple estimate from, say a pie chart mixing all results (e.g. a two-way cross-classification table). Incidentally, I asked a similar question to Hadley Wickham who kindly pointed me to the following articles: Spence, I. (2005). No Humble Pie: The Origins and Usage of a Statistical Chart . Journal of Educational and Behavioral Statistics , 30(4), 353–368. Heer, J. and Bostock, M. (2010). Crowdsourcing Graphical Perception: Using Mechanical Turk to Assess Visualization Design . CHI 2010 , April 10–15, 2010, Atlanta, Georgia, USA. In sum, I think they are just good for grossly depicting the distribution of 2 to 3 classes (I use them, from time to time, to show the distribution of males and females in a sample on top of an histogram of ages), but they must be accompanied by relative frequencies or counts for being really informative. A table would still do a better job since you can add margins, and go beyond 2-way classifications. Finally, there are alternative displays that are built upon the idea of pie chart. I can think of square pie or waffle chart , described by Robert Kosara in Understanding Pie Charts .
{ "source": [ "https://stats.stackexchange.com/questions/8974", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2902/" ] }
9,001
Are there well known formulas for the order statistics of certain random distributions? Particularly the first and last order statistics of a normal random variable, but a more general answer would also be appreciated. Edit: To clarify, I am looking for approximating formulas that can be more-or-less explicitly evaluated, not the exact integral expression. For example, I have seen the following two approximations for the first order statistic (ie the minimum) of a normal rv: $e_{1:n} \geq \mu - \frac{n-1}{\sqrt{2n-1}}\sigma$ and $e_{1:n} \approx \mu + \Phi^{-1} \left( \frac{1}{n+1} \right)\sigma$ The first of these, for $n=200$, gives approximately $e_{1:200} \geq \mu - 10\sigma$ which seems like a wildly loose bound. The second gives $e_{1:200} \approx \mu - 2.58\sigma$ whereas a quick Monte Carlo gives $e_{1:200} \approx \mu - 2.75\sigma$, so it's not a bad approximation but not great either, and more importantly I don't have any intuition about where it comes from. Any help?
The classic reference is Royston (1982)[1] which has algorithms going beyond explicit formulas. It also quotes a well-known formula by Blom (1958): $E(r:n) \approx \mu + \Phi^{-1}(\frac{r-\alpha}{n-2\alpha+1})\sigma$ with $\alpha=0.375$. This formula gives a multiplier of -2.73 for $n=200, r=1$. [1]: Algorithm AS 177: Expected Normal Order Statistics (Exact and Approximate) J. P. Royston. Journal of the Royal Statistical Society. Series C (Applied Statistics) Vol. 31, No. 2 (1982), pp. 161-165
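In R, Blom's approximation and a quick Monte Carlo check (my own sketch) look like this:
blom <- function(r, n, alpha = 0.375) qnorm((r - alpha) / (n - 2 * alpha + 1))
blom(1, 200)                               # approximately -2.73
set.seed(1)
mean(replicate(5000, min(rnorm(200))))     # simulated E[minimum of 200 N(0,1)], about -2.75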
{ "source": [ "https://stats.stackexchange.com/questions/9001", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2425/" ] }
9,053
Why does a cross-validation procedure overcome the problem of overfitting a model?
I can't think of a sufficiently clear explanation just at the moment, so I'll leave that to someone else; however cross-validation does not completely overcome the over-fitting problem in model selection, it just reduces it. The cross-validation error does not have a negligible variance, especially if the size of the dataset is small; in other words you get a slightly different value depending on the particular sample of data you use. This means that if you have many degrees of freedom in model selection (e.g. lots of features from which to select a small subset, many hyper-parameters to tune, many models from which to choose) you can over-fit the cross-validation criterion as the model is tuned in ways that exploit this random variation rather than in ways that really do improve performance, and you can end up with a model that performs poorly. For a discussion of this, see Cawley and Talbot "On Over-fitting in Model Selection and Subsequent Selection Bias in Performance Evaluation", JMLR, vol. 11, pp. 2079−2107, 2010 Sadly cross-validation is most likely to let you down when you have a small dataset, which is exactly when you need cross-validation the most. Note that k-fold cross-validation is generally more reliable than leave-one-out cross-validation as it has a lower variance, but may be more expensive to compute for some models (which is why LOOCV is sometimes used for model selection, even though it has a high variance).
{ "source": [ "https://stats.stackexchange.com/questions/9053", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/3269/" ] }
9,131
Let's take the following example: set.seed(342) x1 <- runif(100) x2 <- runif(100) y <- x1+x2 + 2*x1*x2 + rnorm(100) fit <- lm(y~x1*x2) This creates a model of y based on x1 and x2, using a OLS regression. If we wish to predict y for a given x_vec we could simply use the formula we get from the summary(fit) . However, what if we want to predict the lower and upper predictions of y? (for a given confidence level). How then would we build the formula?
You will need matrix arithmetic. I'm not sure how Excel will go with that. Anyway, here are the details. Suppose your regression is written as $\mathbf{y} = \mathbf{X}\mathbf{\beta} + \mathbf{e}$. Let $\mathbf{X}^*$ be a row vector containing the values of the predictors for the forecasts (in the same format as $\mathbf{X}$). Then the forecast is given by $$ \hat{y} = \mathbf{X}^*\hat{\mathbf{\beta}} = \mathbf{X}^*(\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\mathbf{Y} $$ with an associated variance $$ \sigma^2 \left[1 + \mathbf{X}^* (\mathbf{X}'\mathbf{X})^{-1} (\mathbf{X}^*)'\right]. $$ Then a 95% prediction interval can be calculated (assuming normally distributed errors) as $$ \hat{y} \pm 1.96 \hat{\sigma} \sqrt{1 + \mathbf{X}^* (\mathbf{X}'\mathbf{X})^{-1} (\mathbf{X}^*)'}. $$ This takes account of the uncertainty due to the error term $e$ and the uncertainty in the coefficient estimates. However, it ignores any errors in $\mathbf{X}^*$. So if the future values of the predictors are uncertain, then the prediction interval calculated using this expression will be too narrow.
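Translating this into R for the model in the question (a sketch of my own; I pick the arbitrary prediction point x1 = x2 = 0.5 and use the t quantile, which is what predict() uses, in place of 1.96):
X     <- model.matrix(fit)                            # columns: intercept, x1, x2, x1:x2
Xstar <- c(1, 0.5, 0.5, 0.5 * 0.5)                    # same layout for the new point
s2    <- summary(fit)$sigma^2
yhat  <- drop(Xstar %*% coef(fit))
se    <- drop(sqrt(s2 * (1 + t(Xstar) %*% solve(crossprod(X)) %*% Xstar)))
yhat + c(-1, 1) * qt(0.975, df.residual(fit)) * se
# agrees with
predict(fit, newdata = data.frame(x1 = 0.5, x2 = 0.5), interval = "prediction")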
{ "source": [ "https://stats.stackexchange.com/questions/9131", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/253/" ] }
9,171
I'm brand new to this R thing but am unsure which model to select. I did a stepwise forward regression selecting each variable based on the lowest AIC. I came up with 3 models and I'm unsure which is the "best". Model 1: Var1 (p=0.03) AIC=14.978 Model 2: Var1 (p=0.09) + Var2 (p=0.199) AIC = 12.543 Model 3: Var1 (p=0.04) + Var2 (p=0.04) + Var3 (p=0.06) AIC= -17.09 I'm inclined to go with Model #3 because it has the lowest AIC (I heard negative is ok) and the p-values are still rather low. I've run 8 variables as predictors of Hatchling Mass and found that these three variables are the best predictors. In my next forward stepwise run I chose Model 2 because even though the AIC was slightly larger the p values were all smaller. Do you agree this is the best? Model 1: Var1 (p=0.321) + Var2 (p=0.162) + Var3 (p=0.163) + Var4 (p=0.222) AIC = 25.63 Model 2: Var1 (p=0.131) + Var2 (p=0.009) + Var3 (p=0.0056) AIC = 26.518 Model 3: Var1 (p=0.258) + Var2 (p=0.0254) AIC = 36.905 thanks!
AIC is a goodness-of-fit measure that favours smaller residual error in the model but penalises the inclusion of further predictors, which helps avoid overfitting. In your second set of models, model 1 (the one with the lowest AIC) may perform best when used for prediction outside your dataset. A possible explanation why adding Var4 to model 2 results in a lower AIC, but higher p values, is that Var4 is somewhat correlated with Var1, 2 and 3. The interpretation of model 2 is thus easier.
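If it helps to see the penalty at work, here is a small sketch (unrelated to your data): only x1 matters, and adding the irrelevant x2 and x3 barely improves the fit while each costs a penalized parameter.
set.seed(1)
d   <- data.frame(x1 = rnorm(50), x2 = rnorm(50), x3 = rnorm(50))
d$y <- 1 + 2 * d$x1 + rnorm(50)
AIC(lm(y ~ x1, data = d),
    lm(y ~ x1 + x2, data = d),
    lm(y ~ x1 + x2 + x3, data = d))   # lower AIC = better fit/complexity trade-off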
{ "source": [ "https://stats.stackexchange.com/questions/9171", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/4027/" ] }
9,202
I am about to try out a BUGS style environment for estimating Bayesian models. Are there any important advantages to consider in choosing between OpenBugs or JAGS? Is one likely to replace the other in the foreseeable future? I will be using the chosen Gibbs Sampler with R. I don't have a specific application yet, but rather I am deciding which to install and learn.
BUGS/OpenBugs has a peculiar build system which made compiling the code difficult to impossible on some systems — such as Linux (and IIRC OS X) where people had to resort to Windows emulation etc. Jags, on the other hand, is a completely new project written with standard GNU tools and hence portable to just about anywhere — and therefore usable anywhere. So in short, if your system is Windows then you do have a choice, and a potential cost of being stuck to Bugs if you ever move. If you are not on Windows, then Jags is likely to be the better choice.
{ "source": [ "https://stats.stackexchange.com/questions/9202", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/3700/" ] }
9,220
Assume you are given two objects whose exact locations are unknown, but are distributed according to normal distributions with known parameters (e.g. $a \sim N(m, s)$ and $b \sim N(v, t))$. We can assume these are both bivariate normals, such that the positions are described by a distribution over $(x,y)$ coordinates (i.e. $m$ and $v$ are vectors containing the expected $(x,y)$ coordinates for $a$ and $b$ respectively). We will also assume the objects are independent. Does anyone know if the distribution of the squared Euclidean distance between these two objects is a known parametric distribution? Or how to derive the PDF / CDF for this function analytically?
The answer to this question can be found in the book Quadratic forms in random variables by Mathai and Provost (1992, Marcel Dekker, Inc.). As the comments clarify, you need to find the distribution of $Q = z_1^2 + z_2^2$ where $z = a - b$ follows a bivariate normal distribution with mean $\mu$ and covariance matrix $\Sigma$. This is a quadratic form in the bivariate random variable $z$. Briefly, one nice general result for the $p$-dimensional case where $z \sim N_p(\mu, \Sigma)$ and $$Q = \sum_{j=1}^p z_j^2$$ is that the moment generating function is $$E(e^{tQ}) = e^{t \sum_{j=1}^p \frac{b_j^2 \lambda_j}{1-2t\lambda_j}}\prod_{j=1}^p (1-2t\lambda_j)^{-1/2}$$ where $\lambda_1, \ldots, \lambda_p$ are the eigenvalues of $\Sigma$ and $b$ is a linear function of $\mu$. See Theorem 3.2a.2 (page 42) in the book cited above (we assume here that $\Sigma$ is non-singular). Another useful representation is 3.1a.1 (page 29) $$Q = \sum_{j=1}^p \lambda_j(u_j + b_j)^2$$ where $u_1, \ldots, u_p$ are i.i.d. $N(0, 1)$. The entire Chapter 4 in the book is devoted to the representation and computation of densities and distribution functions, which is not at all trivial. I am only superficially familiar with the book, but my impression is that all the general representations are in terms of infinite series expansions. So in a certain way the answer to the question is, yes, the distribution of the squared euclidean distance between two bivariate normal vectors belongs to a known (and well studied) class of distributions parametrized by the four parameters $\lambda_1, \lambda_2 > 0$ and $b_1, b_2 \in \mathbb{R}$. However, I am pretty sure you won't find this distribution in your standard textbooks. Note, moreover, that $a$ and $b$ do not need to be independent. Joint normality is enough (which is automatic if they are independent and each normal), then the difference $a-b$ follows a normal distribution.
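A quick numerical check of the representation $Q = \sum_j \lambda_j (u_j + b_j)^2$ is easy in R; this is my own sketch with an arbitrary mean and covariance for $z = a - b$, using $b = \Lambda^{-1/2}P'\mu$ (with $\Sigma = P\Lambda P'$) as one explicit choice that makes the representation hold.
library(MASS)                                      # for mvrnorm
set.seed(1)
mu    <- c(1, 2)
Sigma <- matrix(c(2, 0.5, 0.5, 1), 2, 2)
B     <- 1e5

z  <- mvrnorm(B, mu, Sigma)
Q1 <- rowSums(z^2)                                 # squared distance, simulated directly

e  <- eigen(Sigma)
b  <- drop(t(e$vectors) %*% mu) / sqrt(e$values)
u  <- matrix(rnorm(2 * B), ncol = 2)
Q2 <- drop((u + rep(b, each = B))^2 %*% e$values)  # same law via the quadratic-form representation

c(mean(Q1), mean(Q2))   # both close to tr(Sigma) + t(mu) %*% mu = 8, up to Monte Carlo error
c(var(Q1), var(Q2))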
{ "source": [ "https://stats.stackexchange.com/questions/9220", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/1913/" ] }
9,311
I can see that there are a lot of formal differences between Kullback–Leibler vs Kolmogorov-Smirnov distance measures. However, both are used to measure the distance between distributions. Is there a typical situation where one should be used instead of the other? What is the rationale to do so?
The KL-divergence is typically used in information-theoretic settings, or even Bayesian settings, to measure the information change between distributions before and after applying some inference, for example. It's not a distance in the typical (metric) sense, because of lack of symmetry and triangle inequality, and so it's used in places where the directionality is meaningful. The KS-distance is typically used in the context of a non-parametric test. In fact, I've rarely seen it used as a generic "distance between distributions", where the $\ell_1$ distance, the Jensen-Shannon distance, and other distances are more common.
{ "source": [ "https://stats.stackexchange.com/questions/9311", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/3592/" ] }
9,312
SPSS returns lower and upper bounds for Reliability. While calculating the Standard Error of Measurement, should we use the Lower and Upper bounds or continue using the Reliability estimate. I am using the formula : $$\text{SEM}\% =\left(\text{SD}\times\sqrt{1-R_1} \times 1/\text{mean}\right) × 100$$ where SD is the standard deviation, $R_1$ is the intraclass correlation for a single measure (one-way ICC).
{ "source": [ "https://stats.stackexchange.com/questions/9312", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/-1/" ] }
9,357
When you are trying to fit models to a large dataset, the common advice is to partition the data into three parts: the training, validation, and test dataset. This is because the models usually have three "levels" of parameters: the first "parameter" is the model class (e.g. SVM, neural network, random forest), the second set of parameters are the "regularization" parameters or "hyperparameters" (e.g. lasso penalty coefficient, choice of kernel, neural network structure) and the third set are what are usually considered the "parameters" (e.g. coefficients for the covariates.) Given a model class and a choice of hyperparameters, one selects the parameters by choosing the parameters which minimize error on the training set. Given a model class, one tunes the hyperparameters by minimizing error on the validation set. One selects the model class by performance on the test set. But why not more partitions? Often one can split the hyperparameters into two groups, and use a "validation 1" to fit the first and "validation 2" to fit the second. Or one could even treat the size of the training data/validation data split as a hyperparameter to be tuned. Is this already a common practice in some applications? Is there any theoretical work on the optimal partitioning of data?
First, I think you're mistaken about what the three partitions do. You don't make any choices based on the test data. Your algorithms adjust their parameters based on the training data. You then run them on the validation data to compare your algorithms (and their trained parameters) and decide on a winner. You then run the winner on your test data to give you a forecast of how well it will do in the real world. You don't validate on the training data because that would overfit your models. You don't stop at the validation step's winner's score because you've iteratively been adjusting things to get a winner in the validation step, and so you need an independent test (that you haven't specifically been adjusting towards) to give you an idea of how well you'll do outside of the current arena. Second, I would think that one limiting factor here is how much data you have. Most of the time, we don't even want to split the data into fixed partitions at all, hence CV.
{ "source": [ "https://stats.stackexchange.com/questions/9357", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/3567/" ] }
9,358
I have four numeric variables. All of them are measures of soil quality. The higher the variable, the higher the quality. The range for all of them is different: Var1 from 1 to 10 Var2 from 1000 to 2000 Var3 from 150 to 300 Var4 from 0 to 5 I need to combine the four variables into a single soil quality score which will successfully rank order. My idea is very simple. Standardize all four variables, sum them up and whatever you get is the score which should rank-order. Do you see any problem with applying this approach? Is there any other (better) approach that you would recommend? Thanks Edit: Thanks guys. A lot of discussion went into "domain expertise"... Agriculture stuff... Whereas I expected more stats-talk. In terms of technique that I will be using... It will probably be simple z-score summation + logistic regression as an experiment. Because the vast majority of samples (90%) has poor quality, I'm going to combine 3 quality categories into one and basically have a binary problem (some-quality vs no-quality). I kill two birds with one stone. I increase my sample in terms of event rate and I make use of experts by getting them to classify my samples. Expert-classified samples will then be used to fit a log-reg model to maximize the level of concordance / discordance with the experts.... How does that sound to you?
The proposed approach may give a reasonable result, but only by accident. At this distance--that is, taking the question at face value, with the meanings of the variables disguised--some problems are apparent: It is not even evident that each variable is positively related to "quality." For example, what if a 10 for 'Var1' means the "quality" is worse than the quality when Var1 is 1? Then adding it to the sum is about as wrong a thing as one can do; it needs to be subtracted. Standardization implies that "quality" depends on the data set itself. Thus the definition will change with different data sets or with additions and deletions to these data. This can make the "quality" into an arbitrary, transient, non-objective construct and preclude comparisons between datasets. There is no definition of "quality". What is it supposed to mean? Ability to block migration of contaminated water? Ability to support organic processes? Ability to promote certain chemical reactions? Soils good for one of these purposes may be especially poor for others. The problem as stated has no purpose: why does "quality" need to be ranked? What will the ranking be used for--input to more analysis, selecting the "best" soil, deciding a scientific hypothesis, developing a theory, promoting a product? The consequences of the ranking are not apparent. If the ranking is incorrect or inferior, what will happen? Will the world be hungrier, the environment more contaminated, scientists more misled, gardeners more disappointed? Why should a linear combination of variables be appropriate? Why shouldn't they be multiplied or exponentiated or combined as a posynomial or something even more esoteric? Raw soil quality measures are commonly re-expressed. For example, log permeability is usually more useful than the permeability itself and log hydrogen ion activity (pH) is much more useful than the activity. What are the appropriate re-expressions of the variables for determining "quality"? One would hope that soils science would answer most of these questions and indicate what the appropriate combination of the variables might be for any objective sense of "quality." If not, then you face a multi-attribute valuation problem . The Wikipedia article lists dozens of methods for addressing this. IMHO, most of them are inappropriate for addressing a scientific question. One of the few with a solid theory and potential applicability to empirical matters is Keeney & Raiffa's multiple attribute valuation theory (MAVT). It requires you to be able to determine, for any two specific combinations of the variables, which of the two should rank higher. A structured sequence of such comparisons reveals (a) appropriate ways to re-express the values; (b) whether or not a linear combination of the re-expressed values will produce the correct ranking; and (c) if a linear combination is possible, it will let you compute the coefficients. In short, MAVT provides algorithms for solving your problem provided you already know how to compare specific cases.
{ "source": [ "https://stats.stackexchange.com/questions/9358", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/333/" ] }
9,477
My attempts: I couldn't get confidence intervals in interaction.plot() and on the other hand plotmeans() from package 'gplot' wouldn't display two graphs. Furthermore, I couldn't impose two plotmeans() graphs one on top of the other because by default the axis are different. I had some success using plotCI() from package 'gplot' and superimposing two graphs but still the match of the axis wasn't perfect. Any advice on how to make an interaction plot with confidence intervals? Either by one function, or advice on how to superimpose plotmeans() or plotCI() graphs. code sample br=structure(list(tangle = c(140L, 50L, 40L, 140L, 90L, 70L, 110L, 150L, 150L, 110L, 110L, 50L, 90L, 140L, 110L, 50L, 60L, 40L, 40L, 130L, 120L, 140L, 70L, 50L, 140L, 120L, 130L, 50L, 40L, 80L, 140L, 100L, 60L, 70L, 50L, 60L, 60L, 130L, 40L, 130L, 100L, 70L, 110L, 80L, 120L, 110L, 40L, 100L, 40L, 60L, 120L, 120L, 70L, 80L, 130L, 60L, 100L, 100L, 60L, 70L, 90L, 100L, 140L, 70L, 100L, 90L, 130L, 70L, 130L, 40L, 80L, 130L, 150L, 110L, 120L, 140L, 90L, 60L, 90L, 80L, 120L, 150L, 90L, 150L, 50L, 50L, 100L, 150L, 80L, 90L, 110L, 150L, 150L, 120L, 80L, 80L), gtangles = c(141L, 58L, 44L, 154L, 120L, 90L, 128L, 147L, 147L, 120L, 127L, 66L, 118L, 141L, 111L, 59L, 72L, 45L, 52L, 144L, 139L, 143L, 73L, 59L, 148L, 141L, 135L, 63L, 51L, 88L, 147L, 110L, 68L, 78L, 63L, 64L, 70L, 133L, 49L, 129L, 100L, 78L, 128L, 91L, 121L, 109L, 48L, 113L, 50L, 68L, 135L, 120L, 85L, 97L, 136L, 59L, 112L, 103L, 62L, 87L, 92L, 116L, 141L, 70L, 121L, 92L, 137L, 85L, 117L, 51L, 84L, 128L, 162L, 102L, 127L, 151L, 115L, 57L, 93L, 92L, 117L, 140L, 95L, 159L, 57L, 65L, 130L, 152L, 90L, 117L, 116L, 147L, 140L, 116L, 98L, 95L), up = c(-1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, -1L, -1L, 1L, 1L, 1L, 1L, -1L, -1L, -1L, -1L, 1L, 1L, -1L, -1L, 1L, 1L, -1L, 1L, 1L, -1L, 1L, 1L, 1L, 1L, 1L, -1L, -1L, 1L, 1L, 1L, 1L, -1L, -1L, 1L, 1L, -1L, -1L, -1L, -1L, -1L, -1L, -1L, 1L, -1L, -1L, -1L, -1L, -1L, 1L, -1L, 1L, 1L, -1L, -1L, -1L, -1L, 1L, -1L, 1L, -1L, -1L, -1L, 1L, -1L, 1L, -1L, 1L, 1L, 1L, -1L, -1L, -1L, -1L, -1L, -1L, 1L, -1L, 1L, 1L, -1L, -1L, 1L, 1L, 1L, -1L, 1L, 1L, 1L)), .Names = c("tangle", "gtangles", "up" ), class = "data.frame", row.names = c(NA, -96L)) plotmeans2 <- function(br, alph) { dt=br; tmp <- split(br$gtangles, br$tangle); means <- sapply(tmp, mean); stdev <- sqrt(sapply(tmp, var)); n <- sapply(tmp,length); ciw <- qt(alph, n) * stdev / sqrt(n) plotCI(x=means, uiw=ciw, col="black", barcol="blue", lwd=1,ylim=c(40,150), xlim=c(1,12)); par(new=TRUE) dt= subset(br,up==1); tmp <- split(dt$gtangles, dt$tangle); means <- sapply(tmp, mean); stdev <- sqrt(sapply(tmp, var)); n <- sapply(tmp,length); ciw <- qt(0.95, n) * stdev / sqrt(n) plotCI(x=means, uiw=ciw, type='l',col="black", barcol="red", lwd=1,ylim=c(40,150), xlim=c(1,12),pch='+'); abline(v=6);abline(h=90);abline(30,10); par(new=TRUE); dt=subset(br,up==-1); tmp <- split(dt$gtangles, dt$tangle); means <- sapply(tmp, mean); stdev <- sqrt(sapply(tmp, var)); n <- sapply(tmp,length); ciw <- qt(0.95, n) * stdev / sqrt(n) plotCI(x=means, uiw=ciw, type='l', col="black", barcol="blue", lwd=1,ylim=c(40,150), xlim=c(1,12),pch='-');abline(v=6);abline(h=90); abline(30,10); } plotmeans2(br,.95)
If you're willing to use ggplot , you can try the following code. With a continuous predictor library(ggplot2) gp <- ggplot(data=br, aes(x=tangle, y=gtangles)) gp + geom_point() + stat_smooth(method="lm", fullrange=T) + facet_grid(. ~ up) for a facetted interaction plot For a standard interaction plot (like the one produced by interaction.plot() ), you just have to remove the facetting. gp <- ggplot(data=br, aes(x=tangle, y=gtangles, colour=factor(up))) gp + geom_point() + stat_smooth(method="lm") With a discrete predictor Using the ToothGrowth dataset (see help(ToothGrowth) ), ToothGrowth$dose.cat <- factor(ToothGrowth$dose, labels=paste("d", 1:3, sep="")) df <- with(ToothGrowth , aggregate(len, list(supp=supp, dose=dose.cat), mean)) df$se <- with(ToothGrowth , aggregate(len, list(supp=supp, dose=dose.cat), function(x) sd(x)/sqrt(10)))[,3] opar <- theme_update(panel.grid.major = theme_blank(), panel.grid.minor = theme_blank(), panel.background = theme_rect(colour = "black")) gp <- ggplot(df, aes(x=dose, y=x, colour=supp, group=supp)) gp + geom_line(aes(linetype=supp), size=.6) + geom_point(aes(shape=supp), size=3) + geom_errorbar(aes(ymax=x+se, ymin=x-se), width=.1) theme_set(opar)
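Note for recent ggplot2 releases: theme_blank() and theme_rect() were replaced by element_blank() and element_rect(), so the theme_update() call would look roughly like the sketch below (not checked against any particular version):
# Rough modern-ggplot2 equivalent of the theme_update() call above.
opar <- theme_update(panel.grid.major = element_blank(),
                     panel.grid.minor = element_blank(),
                     panel.background = element_rect(colour = "black"))
# ... build the plot as above ...
theme_set(opar)   # restore the previous theme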
{ "source": [ "https://stats.stackexchange.com/questions/9477", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/1084/" ] }
9,510
If I wanted to get the probability of 9 successes in 16 trials with each trial having a probability of 0.6 I could use a binomial distribution. What could I use if each of the 16 trials has a different probability of success?
This is the sum of 16 (presumably independent) Binomial trials. The assumption of independence allows us to multiply probabilities. Whence, after two trials with probabilities $p_1$ and $p_2$ of success the chance of success on both trials is $p_1 p_2$, the chance of no successes is $(1-p_1)(1-p_2)$, and the chance of one success is $p_1(1-p_2) + (1-p_1)p_2$. That last expression owes its validity to the fact that the two ways of getting exactly one success are mutually exclusive: at most one of them can actually happen. That means their probabilities add . By means of these two rules--independent probabilities multiply and mutually exclusive ones add--you can work out the answers for, say, 16 trials with probabilities $p_1, \ldots, p_{16}$. To do so, you need to account for all the ways of obtaining each given number of successes (such as 9). There are $\binom{16}{9} = 11440$ ways to achieve 9 successes. One of them, for example, occurs when trials 1, 2, 4, 5, 6, 11, 12, 14, and 15 are successes and the others are failures. The successes had probabilities $p_1, p_2, p_4, p_5, p_6, p_{11}, p_{12}, p_{14},$ and $p_{15}$ and the failures had probabilities $1-p_3, 1-p_7, \ldots, 1-p_{13}, 1-p_{16}$. Multiplying these 16 numbers gives the chance of this particular sequence of outcomes. Summing this number along with the 11,439 remaining such numbers gives the answer. Of course you would use a computer. With many more than 16 trials, there is a need to approximate the distribution. Provided none of the probabilities $p_i$ and $1-p_i$ get too small, a Normal approximation tends to work well. With this method you note that the expectation of the sum of $n$ trials is $\mu = p_1 + p_2 + \cdots + p_n$ and (because the trials are independent) the variance is $\sigma^2 = p_1(1-p_1) + p_2(1-p_2) + \cdots + p_n(1-p_n)$. You then pretend the distribution of sums is Normal with mean $\mu$ and standard deviation $\sigma$. The answers tend to be good for computing probabilities corresponding to a proportion of successes that differs from $\mu$ by no more than a few multiples of $\sigma$. As $n$ grows large this approximation gets ever more accurate and works for even larger multiples of $\sigma$ away from $\mu$.
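A small R sketch of both approaches (the 16 probabilities below are made up purely for illustration): the exact distribution is built by convolving in one trial at a time, and the normal approximation uses the mean and variance given above.
# Exact Poisson-binomial calculation by iterated convolution, plus the
# normal approximation. The probabilities p are hypothetical placeholders.
set.seed(1)
p <- runif(16, 0.4, 0.8)                 # one success probability per trial
dist <- 1                                # start: P(0 successes) = 1 before any trial
for (pi in p) dist <- c(dist * (1 - pi), 0) + c(0, dist * pi)
dist[10]                                 # P(exactly 9 successes); dist[k + 1] = P(k)
mu <- sum(p); sigma <- sqrt(sum(p * (1 - p)))
diff(pnorm(c(8.5, 9.5), mu, sigma))      # normal approximation with continuity correction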
{ "source": [ "https://stats.stackexchange.com/questions/9510", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/4150/" ] }
9,535
I've run across a couple guides suggesting that I use R's nlm for maximum likelihood estimation. But none of them (including R's documentation ) gives much theoretical guidance for when to use or not use the function. As far as I can tell, nlm is just doing gradient descent along the lines of Newton's method. Are there principles for when it's reasonable to use this approach? What alternatives are available? Also, are there limits on the size of the arrays, etc. one can pass to nlm?
There are a number of general-purpose optimization routines in base R that I'm aware of: optim , nlminb , nlm and constrOptim (which handles linear inequality constraints, and calls optim under the hood). Here are some things that you might want to consider in choosing which one to use. optim can use a number of different algorithms including conjugate gradient, Newton, quasi-Newton, Nelder-Mead and simulated annealing. The last two don't need gradient information and so can be useful if gradients aren't available or not feasible to calculate (but are likely to be slower and require more parameter fine-tuning, respectively). It also has an option to return the computed Hessian at the solution, which you would need if you want standard errors along with the solution itself. nlminb uses a quasi-Newton algorithm that fills the same niche as the "L-BFGS-B" method in optim . In my experience it seems a bit more robust than optim in that it's more likely to return a solution in marginal cases where optim will fail to converge, although that's likely problem-dependent. It has the nice feature, if you provide an explicit gradient function, of doing a numerical check of its values at the solution. If these values don't match those obtained from numerical differencing, nlminb will give a warning; this helps to ensure you haven't made a mistake in specifying the gradient (easy to do with complicated likelihoods). nlm only uses a Newton algorithm. This can be faster than other algorithms in the sense of needing fewer iterations to reach convergence, but has its own drawbacks. It's more sensitive to the shape of the likelihood, so if it's strongly non-quadratic, it may be slower or you may get convergence to a false solution. The Newton algorithm also uses the Hessian, and computing that can be slow enough in practice that it more than cancels out any theoretical speedup.
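To make the comparison concrete, here is a minimal sketch fitting the same toy negative log-likelihood with all three routines; the data are simulated here purely for illustration.
# Toy normal model with unknown mean and log-sd, fit by optim, nlminb and nlm.
set.seed(42)
y   <- rnorm(100, mean = 5, sd = 2)
nll <- function(par) -sum(dnorm(y, mean = par[1], sd = exp(par[2]), log = TRUE))
fit1 <- optim(c(0, 0), nll, method = "BFGS", hessian = TRUE)
fit2 <- nlminb(c(0, 0), nll)
fit3 <- nlm(nll, c(0, 0), hessian = TRUE)
c(optim = fit1$par[1], nlminb = fit2$par[1], nlm = fit3$estimate[1])  # mean estimates
sqrt(diag(solve(fit1$hessian)))          # approximate standard errors from the Hessian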
{ "source": [ "https://stats.stackexchange.com/questions/9535", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/4110/" ] }
9,561
This is an elementary question, but I wasn't able to find the answer. I have two measurements: n1 events in time t1 and n2 events in time t2, both produced (say) by Poisson processes with possibly-different lambda values. This is actually from a news article, which essentially claims that since $n_1/t_1\neq n_2/t_2$ the two are different, but I'm not sure that the claim is valid. Suppose that the time periods were not chosen maliciously (to maximize the events in one or the other). Can I just do a t-test, or would that not be appropriate? The number of events is too small for me to comfortably call the distributions approximately normal.
To compare two Poisson means, the conditional method was proposed by Przyborowski and Wilenski (1940). The conditional distribution of X1 given X1+X2 follows a binomial distribution whose success probability is a function of the ratio of the two lambdas. Therefore, hypothesis testing and interval estimation procedures can be readily developed from the exact methods for making inferences about a binomial success probability. Usually two methods are considered for this purpose: the C-test and the E-test. You can find the details about these two tests in this paper: A more powerful test for comparing two Poisson means
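A quick R illustration of the conditional (C-test-style) idea: under H0 the two rates are equal, so X1 given X1 + X2 = n is Binomial(n, t1/(t1 + t2)) and an exact binomial test applies. The counts and observation times below are invented for the example.
n1 <- 17; t1 <- 10     # events and observation time for process 1 (hypothetical)
n2 <- 28; t2 <- 15     # events and observation time for process 2 (hypothetical)
binom.test(n1, n1 + n2, p = t1 / (t1 + t2))   # exact conditional test of equal rates
poisson.test(c(n1, n2), c(t1, t2))            # base R's built-in two-sample rate comparison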
{ "source": [ "https://stats.stackexchange.com/questions/9561", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/1378/" ] }
9,573
Long ago I learnt that a normal distribution was necessary to use a two-sample t-test. Today a colleague told me that she learnt that for N > 50 a normal distribution was not necessary. Is that true? If true, is that because of the central limit theorem?
Normality assumption of a t-test Consider a large population from which you could take many different samples of a particular size. (In a particular study, you generally collect just one of these samples.) The t-test assumes that the means of the different samples are normally distributed; it does not assume that the population is normally distributed. By the central limit theorem, means of samples from a population with finite variance approach a normal distribution regardless of the distribution of the population. Rules of thumb say that the sample means are basically normally distributed as long as the sample size is at least 20 or 30. For a t-test to be valid on a sample of smaller size, the population distribution would have to be approximately normal. The t-test is invalid for small samples from non-normal distributions, but it is valid for large samples from non-normal distributions. Small samples from non-normal distributions As Michael notes below, sample size needed for the distribution of means to approximate normality depends on the degree of non-normality of the population. For approximately normal distributions, you won't need as large sample as a very non-normal distribution. Here are some simulations you can run in R to get a feel for this. First, here are a couple of population distributions. curve(dnorm,xlim=c(-4,4)) #Normal curve(dchisq(x,df=1),xlim=c(0,30)) #Chi-square with 1 degree of freedom Next are some simulations of samples from the population distributions. In each of these lines, "10" is the sample size, "100" is the number of samples and the function after that specifies the population distribution. They produce histograms of the sample means. hist(colMeans(sapply(rep(10,100),rnorm)),xlab='Sample mean',main='') hist(colMeans(sapply(rep(10,100),rchisq,df=1)),xlab='Sample mean',main='') For a t-test to be valid, these histograms should be normal. require(car) qqp(colMeans(sapply(rep(10,100),rnorm)),xlab='Sample mean',main='') qqp(colMeans(sapply(rep(10,100),rchisq,df=1)),xlab='Sample mean',main='') Utility of a t-test I have to note that all of the knowledge I just imparted is somewhat obsolete; now that we have computers, we can do better than t-tests. As Frank notes, you probably want to use Wilcoxon tests anywhere you were taught to run a t-test.
{ "source": [ "https://stats.stackexchange.com/questions/9573", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/4176/" ] }
9,617
I'm hoping that someone can explain, in layman's terms, what a characteristic function is and how it is used in practice. I've read that it is the Fourier transform of the pdf, so I guess I know what it is, but I still don't understand its purpose. If someone could provide an intuitive description of its purpose and perhaps an example of how it is typically used, that would be fantastic! Just one last note: I have seen the Wikipedia page , but am apparently too dense to understand what is going on. What I'm looking for is an explanation that someone not immersed in the wonders of probability theory, say a computer scientist, could understand.
Back in the day, people used logarithm tables to multiply numbers faster. Why is this? Logarithms convert multiplication to addition, since $\log(ab) = \log(a) + \log(b)$. So in order to multiply two large numbers $a$ and $b$, you found their logarithms, added the logarithms, $z = \log(a) + \log(b)$, and then looked up $\exp(z)$ on another table. Now, characteristic functions do a similar thing for probability distributions. Suppose $X$ has a distribution $f$ and $Y$ has a distribution $g$, and $X$ and $Y$ are independent. Then the distribution of $X+Y$ is the convolution of $f$ and $g$, $f * g$. Now the characteristic function is an analogy of the "logarithm table trick" for convolution, since if $\phi_f$ is the characteristic function of $f$, then the following relation holds: $$ \phi_f \phi_g = \phi_{f * g} $$ Furthermore, also like in the case of logarithms,it is easy to find the inverse of the characteristic function: given $\phi_h$ where $h$ is an unknown density, we can obtain $h$ by the inverse Fourier transform of $\phi_h$. The characteristic function converts convolution to multiplication for density functions the same way that logarithms convert multiplication into addition for numbers. Both transformations convert a relatively complicated operation into a relatively simple one.
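A concrete worked example (a standard textbook fact, included only as illustration): for independent $X \sim N(\mu_1, \sigma_1^2)$ and $Y \sim N(\mu_2, \sigma_2^2)$, $$\phi_X(t) = e^{i\mu_1 t - \sigma_1^2 t^2/2}, \qquad \phi_Y(t) = e^{i\mu_2 t - \sigma_2^2 t^2/2},$$ so $$\phi_{X+Y}(t) = \phi_X(t)\,\phi_Y(t) = e^{i(\mu_1+\mu_2)t - (\sigma_1^2+\sigma_2^2)t^2/2},$$ which we recognize as the characteristic function of $N(\mu_1+\mu_2,\, \sigma_1^2+\sigma_2^2)$. The convolution integral for the density of $X+Y$ never has to be computed; the multiplication step does all the work, exactly as in the logarithm analogy.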
{ "source": [ "https://stats.stackexchange.com/questions/9617", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/1913/" ] }
9,653
I've learnt that small sample size may lead to insufficient power and type 2 error. However, I have the feeling that small samples just may be generally unreliable and may lead to any kind of result by chance. Is that true?
As a general principle, small sample size will not increase the Type I error rate for the simple reason that the test is arranged to control the Type I rate. (There are minor technical exceptions associated with discrete outcomes, which can cause the nominal Type I rate not to be achieved exactly especially with small sample sizes.) There is an important principle here: if your test has acceptable size (= nominal Type I rate) and acceptable power for the effect you're looking for, then even if the sample size is small it's ok. The danger is that if we otherwise know little about the situation--maybe these are all the data we have--then we might be concerned about "Type III" errors: that is, model mis-specification. They can be difficult to check with small sample sets. As a practical example of the interplay of ideas, I will share a story. Long ago I was asked to recommend a sample size to confirm an environmental cleanup. This was during the pre-cleanup phase before we had any data. My plan called for analyzing the 1000 or so samples that would be obtained during cleanup (to establish that enough soil had been removed at each location) to assess the post-cleanup mean and variance of the contaminant concentration. Then (to simplify greatly), I said we would use a textbook formula--based on specified power and test size--to determine the number of independent confirmation samples that would be used to prove the cleanup was successful. What made this memorable was that after the cleanup was done, the formula said to use only 3 samples. Suddenly my recommendation did not look very credible! The reason for needing only 3 samples is that the cleanup was aggressive and worked well. It reduced average contaminant concentrations to about 100 give or take 100 ppm, consistently below the target of 500 ppm. In the end this approach worked because we had obtained the 1000 previous samples (albeit of lower analytical quality: they had greater measurement error) to establish that the statistical assumptions being made were in fact good ones for this site. That is how the potential for Type III error was handled. One more twist for your consideration: knowing the regulatory agency would never approve using just 3 samples, I recommended obtaining 5 measurements. These were to be made of 25 random samples of the entire site, composited in groups of 5. Statistically there would be only 5 numbers in the final hypothesis test, but we achieved greater power to detect an isolated "hot spot" by taking 25 physical samples. This highlights the important relationship between how many numbers are used in the test and how they were obtained. There's more to statistical decision making than just algorithms with numbers! To my everlasting relief, the five composite values confirmed the cleanup target was met.
{ "source": [ "https://stats.stackexchange.com/questions/9653", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/4176/" ] }
9,664
Suppose I have a set of sample data from an unknown or complex distribution, and I want to perform some inference on a statistic $T$ of the data. My default inclination is to just generate a bunch of bootstrap samples with replacement, and calculate my statistic $T$ on each bootstrap sample to create an estimated distribution for $T$. What are examples where this is a bad idea? For example, one case where naively performing this bootstrap would fail is if I'm trying to use the bootstrap on time series data (say, to test whether I have significant autocorrelation). The naive bootstrap described above (generating the $i$th datapoint of the nth bootstrap sample series by sampling with replacement from my original series) would (I think) be ill-advised, since it ignores the structure in my original time series, and so we get fancier bootstrap techniques like the block bootstrap. To put it another way, what is there to the bootstrap besides "sampling with replacement"?
If the quantity of interest, usually a functional of a distribution, is reasonably smooth and your data are i.i.d., you're usually in pretty safe territory. Of course, there are other circumstances when the bootstrap will work as well. What it means for the bootstrap to "fail" Broadly speaking, the purpose of the bootstrap is to construct an approximate sampling distribution for the statistic of interest. It's not about actual estimation of the parameter. So, if the statistic of interest (under some rescaling and centering) is $\newcommand{\Xhat}{\hat{X}_n}\Xhat$ and $\Xhat \to X_\infty$ in distribution, we'd like our bootstrap distribution to converge to the distribution of $X_\infty$. If we don't have this, then we can't trust the inferences made. The canonical example of when the bootstrap can fail, even in an i.i.d. framework is when trying to approximate the sampling distribution of an extreme order statistic. Below is a brief discussion. Maximum order statistic of a random sample from a $\;\mathcal{U}[0,\theta]$ distribution Let $X_1, X_2, \ldots$ be a sequence of i.i.d. uniform random variables on $[0,\theta]$. Let $\newcommand{\Xmax}{X_{(n)}} \Xmax = \max_{1\leq k \leq n} X_k$. The distribution of $\Xmax$ is $$ \renewcommand{\Pr}{\mathbb{P}}\Pr(\Xmax \leq x) = (x/\theta)^n \>. $$ (Note that by a very simple argument, this actually also shows that $\Xmax \to \theta$ in probability, and even, almost surely , if the random variables are all defined on the same space.) An elementary calculation yields $$ \Pr( n(\theta - \Xmax) \leq x ) = 1 - \Big(1 - \frac{x}{\theta n}\Big)^n \to 1 - e^{-x/\theta} \>, $$ or, in other words, $n(\theta - \Xmax)$ converges in distribution to an exponential random variable with mean $\theta$. Now, we form a (naive) bootstrap estimate of the distribution of $n(\theta - \Xmax)$ by resampling $X_1, \ldots, X_n$ with replacement to get $X_1^\star,\ldots,X_n^\star$ and using the distribution of $n(\Xmax - \Xmax^\star)$ conditional on $X_1,\ldots,X_n$. But, observe that $\Xmax^\star = \Xmax$ with probability $1 - (1-1/n)^n \to 1 - e^{-1}$, and so the bootstrap distribution has a point mass at zero even asymptotically despite the fact that the actual limiting distribution is continuous. More explicitly, though the true limiting distribution is exponential with mean $\theta$, the limiting bootstrap distribution places a point mass at zero of size $1−e^{-1} \approx 0.632$ independent of the actual value of $\theta$ . By taking $\theta$ sufficiently large, we can make the probability of the true limiting distribution arbitrary small for any fixed interval $[0,\varepsilon)$, yet the bootstrap will ( still !) report that there is at least probability 0.632 in this interval! From this it should be clear that the bootstrap can behave arbitrarily badly in this setting. In summary, the bootstrap fails (miserably) in this case. Things tend to go wrong when dealing with parameters at the edge of the parameter space. An example from a sample of normal random variables There are other similar examples of the failure of the bootstrap in surprisingly simple circumstances. Consider a sample $X_1, X_2, \ldots$ from $\mathcal{N}(\mu,1)$ where the parameter space for $\mu$ is restricted to $[0,\infty)$. The MLE in this case is $\newcommand{\Xbar}{\bar{X}}\Xhat = \max(\bar{X},0)$. Again, we use the bootstrap estimate $\Xhat^\star = \max(\Xbar^\star, 0)$. 
Again, it can be shown that the distribution of $\sqrt{n}(\Xhat^\star - \Xhat)$ (conditional on the observed sample) does not converge to the same limiting distribution as $\sqrt{n}(\Xhat - \mu)$. Exchangeable arrays Perhaps one of the most dramatic examples is for an exchangeable array. Let $\newcommand{\bm}[1]{\mathbf{#1}}\bm{Y} = (Y_{ij})$ be an array of random variables such that, for every pair of permutation matrices $\bm{P}$ and $\bm{Q}$, the arrays $\bm{Y}$ and $\bm{P} \bm{Y} \bm{Q}$ have the same joint distribution. That is, permuting rows and columns of $\bm{Y}$ keeps the distribution invariant. (You can think of a two-way random effects model with one observation per cell as an example, though the model is much more general.) Suppose we wish to estimate a confidence interval for the mean $\mu = \mathbb{E}(Y_{ij}) = \mathbb{E}(Y_{11})$ (due to the exchangeability assumption described above the means of all the cells must be the same). McCullagh (2000) considered two different natural (i.e., naive) ways of bootstrapping such an array. Neither of them get the asymptotic variance for the sample mean correct. He also considers some examples of a one-way exchangeable array and linear regression. References Unfortunately, the subject matter is nontrivial, so none of these are particularly easy reads. P. Bickel and D. Freedman, Some asymptotic theory for the bootstrap . Ann. Stat. , vol. 9, no. 6 (1981), 1196–1217. D. W. K. Andrews, Inconsistency of the bootstrap when a parameter is on the boundary of the parameter space , Econometrica , vol. 68, no. 2 (2000), 399–405. P. McCullagh, Resampling and exchangeable arrays , Bernoulli , vol. 6, no. 2 (2000), 285–301. E. L. Lehmann and J. P. Romano, Testing Statistical Hypotheses , 3rd. ed., Springer (2005). [Chapter 15: General Large Sample Methods]
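Returning to the uniform-maximum example above, a quick numerical check in R (the sample size and $\theta$ are arbitrary choices for illustration):
# The fraction of bootstrap resamples whose maximum equals the observed sample
# maximum should be near 1 - (1 - 1/n)^n, i.e. roughly 0.632, whatever theta is.
set.seed(1)
n <- 100; theta <- 3
x <- runif(n, 0, theta)
boot.max <- replicate(10000, max(sample(x, n, replace = TRUE)))
mean(boot.max == max(x))      # the spurious point mass at zero of n*(max(x) - boot.max)
1 - (1 - 1/n)^n               # theoretical value, about 0.634 for n = 100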
{ "source": [ "https://stats.stackexchange.com/questions/9664", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/1106/" ] }
9,666
My new computer clock runs at a rate that changes in odd steps over time, even after I tuned it via the Linux adjtimex software. Here is a plot of the change in the cumulative clock drift for each of about 1700 samples taken every 10000 seconds, with a bunch of missed points and outliers when I was off the network and ntpdate wouldn't work. E.g. early on, the clock gained about 0.09 seconds every 10000 seconds (9 ppm). I'm looking for some clever library functions that can automatically identify what I'll call the various statistical "modes" of this data set - i.e. there is one mode in the middle where y is about -0.02 for a long time, then another one near 0.09 early on and one at 0.202 at the end. Most code for finding the mode of data that I've seen deals with integers and discrete data, but as you see this has pretty messy floating point values. At any rate, ideally I'd like a summary that automatically finds the modes I identified above, and also gives me a standard deviation for each. Start/stop points for each mode for extra credit. Python code preferred.
{ "source": [ "https://stats.stackexchange.com/questions/9666", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2434/" ] }
9,699
Is there a way to use R in a web interface without the need to install it? I have only one small script which I'd like to run, and I just want to give it a shot without a long installation procedure. Thank you.
Yes, there are some web interfaces to R, like this one (dead as of September 2020), the RDDR online REPL, or Repl.it. Note: installation of the R software is pretty straightforward and quick, on any platform.
{ "source": [ "https://stats.stackexchange.com/questions/9699", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/230/" ] }
9,751
I often hear that post hoc tests after an ANOVA can only be used if the ANOVA itself was significant. However, post hoc tests adjust $p$-values to keep the global type I error rate at 5%, don't they? So why do we need the global test first? If we don't need a global test is the terminology "post hoc" correct? Or are there multiple kinds of post hoc tests, some assuming a significant global test result and others without that assumption?
Since multiple comparison tests are often called 'post tests', you'd think they logically follow the one-way ANOVA. In fact, this isn't so. " An unfortunate common practice is to pursue multiple comparisons only when the null hypothesis of homogeneity is rejected. " ( Hsu, page 177 ) Will the results of post tests be valid if the overall P value for the ANOVA is greater than 0.05? Surprisingly, the answer is yes. With one exception, post tests are valid even if the overall ANOVA did not find a significant difference among means. The exception is the first multiple comparison test invented, the protected Fisher Least Significant Difference (LSD) test. The first step of the protected LSD test is to check if the overall ANOVA rejects the null hypothesis of identical means. If it doesn't, individual comparisons should not be made. But this protected LSD test is outmoded, and no longer recommended. Is it possible to get a 'significant' result from a multiple comparisons test even when the overall ANOVA was not significant? Yes it is possible. The exception is Scheffe's test. It is intertwined with the overall F test. If the overall ANOVA has a P value greater than 0.05, then Scheffe's test won't find any significant post tests. In this case, performing post tests following an overall nonsignificant ANOVA is a waste of time but won't lead to invalid conclusions. But other multiple comparison tests can find significant differences (sometimes) even when the overall ANOVA showed no significant differences among groups. How can I understand the apparent contradiction between an ANOVA saying, in effect, that all group means are identical and a post test finding differences? The overall one-way ANOVA tests the null hypothesis that all the treatment groups have identical mean values, so any difference you happened to observe is due to random sampling. Each post test tests the null hypothesis that two particular groups have identical means. The post tests are more focused, so have power to find differences between groups even when the overall ANOVA reports that the differences among the means are not statistically significant. Are the results of the overall ANOVA useful at all? ANOVA tests the overall null hypothesis that all the data come from groups that have identical means. If that is your experimental question -- does the data provide convincing evidence that the means are not all identical -- then ANOVA is exactly what you want. More often, your experimental questions are more focused and answered by multiple comparison tests (post tests). In these cases, you can safely ignore the overall ANOVA results and jump right to the post test results. Note that the multiple comparison calculations all use the mean-square result from the ANOVA table. So even if you don't care about the value of F or the P value, the post tests still require that the ANOVA table be computed.
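As a generic R sketch of that workflow (the response y, the grouping factor grp, and the data frame dat are hypothetical placeholders), the multiple-comparison step can be run whether or not the omnibus F test is significant, with the protected-LSD and Scheffe caveats noted above:
fit <- aov(y ~ grp, data = dat)    # 'y', 'grp' and 'dat' are placeholders
summary(fit)                       # overall (omnibus) F test
TukeyHSD(fit)                      # Tukey multiple comparisons; valid either way
pairwise.t.test(dat$y, dat$grp, p.adjust.method = "holm")   # an alternative adjustment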
{ "source": [ "https://stats.stackexchange.com/questions/9751", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/4176/" ] }
9,885
I have a Machine Learning course this semester and the professor asked us to find a real-world problem and solve it with one of the machine learning methods introduced in the class, such as: Decision Trees, Artificial Neural Networks, Support Vector Machines, Instance-based Learning (kNN, LWL), Bayesian Networks, Reinforcement learning. I am a fan of stackoverflow and stackexchange and know the database dumps of these websites are provided to the public because they are awesome! I hope I can find a good machine learning challenge about these databases and solve it. My idea: One idea that came to my mind is predicting tags for questions based on the words entered in the question body. I think a Bayesian network is the right tool for learning tags for a question, but this needs more research. Anyway, after the learning phase, when a user finishes entering a question some tags should be suggested to him. Please tell me: I want to ask the stats community, as people experienced with ML, two questions: Do you think tag suggestion is at least a problem which has any chance of being solved? Do you have any advice about it? I am a little worried because stackexchange does not implement such a feature yet. Do you have any other/better idea for an ML project that is based on the stackexchange database? I find it really hard to find something to learn from the stackexchange databases. Consideration about database errors: I would like to point out that although the databases are huge and have many instances, they are not perfect and are prone to error. The obvious one is the age of users, which is unreliable. Even the selected tags for a question are not 100% correct. Anyway, we should consider the percent of correctness of the data in selecting a problem. Consideration about the problem itself: My project should not be about data mining or something like that. It should just be an application of ML methods to a real-world problem.
Yes , I think tag prediction is an interesting one and one for which you have a good shot at "success". Below are some thoughts intended to potentially aid in brainstorming and further exploration of this topic. I think there are many potentially interesting directions that such a project could take. I would guess that a serious attempt at just one or two of the below would make for a more than adequate project and you're likely to come up with more interesting questions than those I've posed. I'm going to take a very wide view as to what is considered machine learning . Undoubtedly some of my suggestions would be better classified as exploratory data analysis and more traditional statistical analysis . But, perhaps, it will help in some small way as you formulate your own interesting questions. You'll note, I try to address questions that I think would be interesting in terms of enhancing the functionality of the site. Of course, there are many other interesting questions as well that may not be that related to site friendliness. Basic descriptive analysis of user behavior : I'm guessing there is a very clear cyclic weekly pattern to user participation on this site. When does the site get the most traffic? What does the graph of user participation on the site look like, say, stratified by hour over the week? You'd want to adjust for potential changes in overall popularity of the site over time. This leads to the question, how has the site's popularity changed since inception? How does the participation of a "typical" user vary with time since joining? I'm guessing it ramps up pretty quickly at the start, then plateaus, and probably heads south after a few weeks or so of joining. Optimal submission of questions and answers : Getting insight on the first question seems to naturally lead to some more interesting (in an ML sense) questions. Say I have a question I need an answer to. If I want to maximize my probability of getting a response, when should I submit it? If I am responding to a question and I want to maximize my vote count, when should I submit my answer? Maybe the answers to these two are very different. How does this vary by the topic of the question (say, e.g., as defined by the associated tags)? Biclustering of users and topics : Which users are most alike in terms of their interests, again, perhaps as measured by tags? What topics are most similar according to which users participate? Can you come up with a nice visualization of these relationships? Offshoots of this would be to try to predict which user(s) is most likely to submit an answer to a particular question. (Imagine providing such technology to SE so that users could be notified of potentially interesting questions, not simply based on tags.) Clustering of answerers by behavior : It seems that there are a few different basic behavioral patterns regarding how answerers use this site. Can you come up with features and a clustering algorithm to cluster answerers according to their behavior. Are the clusters interpretable? Suggesting new tags : Can you come up with suggestions for new tags based on inferring topics from the questions and answers currently in the database. For example, I believe the tag [mixture-model] was recently added because someone noticed we were getting a bunch of related questions. But, it seems an information-retrieval approach should be able to extract such topics directly and potentially suggest them to moderators. 
Semisupervised learning of geographic locations : ( This one may be a bit touchy from a privacy perspective. ) Some users list where they are located. Others do not. Using usage patterns and potentially vocabulary, etc, can you put a geographic confidence region on the location of each user? Intuitively, it would seem that this would be (much) more accurate in terms of longitude than latitude. Automated flagging of possible duplicates and highly related questions : The site already has a similar sort of feature with the Related bar in the right margin. Finding nearly exact duplicates and suggesting them could be useful to the moderators. Doing this across sites in the SE community would seem to be new. Churn prediction and user retention : Using features from each user's history, can you predict the next time you expect to see them? Can you predict the probability they will return to the site conditional on how long they've been absent and features of their past behavior? This could be used, e.g., to try to notice when users are at risk of "churn" and engage them (say, via email) in an effort to retain them. A typical approach would shoot out an email after some fixed period of inactivity. But, each user is very different and there is lots of information about lots of users, so a more tailored approach could be developed.
{ "source": [ "https://stats.stackexchange.com/questions/9885", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2148/" ] }
9,886
I have a set of data that looks at the number of "hits" a specific program makes over the course of time. The data goes back to September 2010, and includes data up to March 2011, so the data points are monthly. What I want to see is whether the most recent data (March 2011) shows a statistically significant decrease in the number of "hits" this program makes. I have a feeling there might not be a test that would fit this perfectly, as the data is a bit limited. I can also pull data weekly for the same time frame, which would build 31 points (at which point I would still want to look at the most recent unit for comparison). There hasn't been a population mean built for this data as of yet, as the data can only be pulled as far back as Jan 2010 (but the data from then is not reliable). For reference, just 9 weeks of data (as I pulled that first) reveals: mean = 1013.67, n = 9, st.dev = 53.57, most recent week = 991. Just eyeballing it, the drop in "hits" does not appear statistically significant; however, I'll need to perform this analysis every few weeks, and I'm wondering if there's something reliable I can use. Thanks ahead of time for the input!
{ "source": [ "https://stats.stackexchange.com/questions/9886", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/4292/" ] }
9,937
In a data frame, I would like to get the column's index by name. For example: x <- data.frame(foo=c('a','b','c'),bar=c(4,5,6),quux=c(4,5,6)) I want to know the column index for "bar". I came up with the following but it seems inelegant. Is there a more straightforward builtin that I am missing? seq(1,length(names(x)))[names(x) == "bar"] [1] 2
probably this is the simplest way: which(names(x)=="bar")
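A couple of equivalent base-R alternatives, shown here with the data frame from the question, in case one form reads better in context:
x <- data.frame(foo = c('a','b','c'), bar = c(4,5,6), quux = c(4,5,6))
which(names(x) == "bar")      # 2
match("bar", names(x))        # 2; returns NA if no such column exists
grep("^bar$", names(x))       # 2; regular-expression based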
{ "source": [ "https://stats.stackexchange.com/questions/9937", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/1138/" ] }