source_id (int64, 1 to 4.64M) | question (string, lengths 0 to 28.4k) | response (string, lengths 0 to 28.8k) | metadata (dict)
---|---|---|---|
16,198 | What do you call an average that does not include outliers? For example, if you have the set {90, 89, 92, 91, 5}, the average is 73.4, but excluding the outlier (5) we have {90, 89, 92, 91} with an average of 90.5. How do you describe this average in statistics? | It's called the trimmed mean. Basically what you do is compute the mean of the middle 80% of your data, ignoring the top and bottom 10%. Of course, these numbers can vary, but that's the general idea. | {
"source": [
"https://stats.stackexchange.com/questions/16198",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/112726/"
]
} |
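A minimal R illustration of the trimmed mean named in the entry above; the 20% trim fraction here is only an illustrative choice, not something fixed by the answer.
x <- c(90, 89, 92, 91, 5)
mean(x)              # ordinary mean: 73.4
mean(x, trim = 0.2)  # 20% trimmed mean: drops the lowest and highest value, giving 90
# Trimming is symmetric, so 92 is removed along with the outlier 5;
# simply dropping the single outlier would give mean(c(90, 89, 92, 91)) = 90.5.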
16,218 | Is there a difference between the phrases "testing of hypothesis" and "test of significance", or are they the same? After a detailed answer from @Michael Lew, I am left with one confusion: are today's tests (e.g., a t-test of a mean) examples of "significance testing" or of "hypothesis testing"?
Or are they a combination of both?
How would you differentiate them with a simple example? | Significance testing is what Fisher devised and hypothesis testing is what Neyman and Pearson devised to replace significance testing. They are not the same and are mutually incompatible to an extent that would surprise most users of null hypothesis tests. Fisher's significance tests yield a p value that represents how extreme the observations are under the null hypothesis. That p value is an index of evidence against the null hypothesis and is the level of significance. Neyman and Pearson's hypothesis tests set up both a null hypothesis and an alternative hypothesis and work as a decision rule for accepting the null hypothesis. Briefly (there is more to it than I can put here) you choose an acceptable rate of false positive inference, alpha (usually 0.05), and either accept or reject the null based on whether the p value is above or below alpha. You have to abide by the statistical test's decision if you wish to protect against false positive errors. Fisher's approach allows you to take anything you like into account in interpreting the result, for example pre-existing evidence can be informally taken into account in the interpretation and presentation of the result. In the N-P approach that can only be done in the experimental design stage, and seems to be rarely done. In my opinion the Fisherian approach is more useful in basic bioscientific work than is the N-P approach. There is a substantial literature about inconsistencies between significance testing and hypothesis testing and about the unfortunate hybridisation of the two. You could start with this paper:
Goodman, Toward evidence-based medical statistics. 1: The P value fallacy. https://pubmed.ncbi.nlm.nih.gov/10383371/ | {
"source": [
"https://stats.stackexchange.com/questions/16218",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/5864/"
]
} |
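A small R sketch, not taken from the answer above, contrasting the two uses of the same t-test output: reporting the p value as a graded index of evidence (Fisher) versus comparing it with a pre-chosen alpha and making a decision (Neyman-Pearson). The data and the alpha level are illustrative.
set.seed(42)
x <- rnorm(30, mean = 0.4, sd = 1)   # hypothetical sample
out <- t.test(x, mu = 0)

out$p.value                          # Fisher: the p value itself is the evidence reported

alpha <- 0.05                        # Neyman-Pearson: fix alpha in advance, then decide
if (out$p.value < alpha) "reject the null" else "accept (do not reject) the null"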
16,312 | I have read about controversies regarding hypothesis testing with some commentators suggesting that hypothesis testing should not be used. Some commentators suggest that confidence intervals should be used instead. What is the difference between confidence intervals and hypothesis testing? Explanation with reference and examples would be appreciated. | You can use a confidence interval (CI) for hypothesis testing. In the typical case, if the CI for an effect does not span 0 then you can reject the null hypothesis. But a CI can be used for more, whereas reporting whether it has been passed is the limit of the usefulness of a test. The reason you're recommended to use CI instead of just a t-test, for example, is because then you can do more than just test hypotheses. You can make a statement about the range of effects you believe to be likely (the ones in the CI). You can't do that with just a t-test. You can also use it to make statements about the null, which you can't do with a t-test. If the t-test doesn't reject the null then you just say that you can't reject the null, which isn't saying much. But if you have a narrow confidence interval around the null then you can suggest that the null, or a value close to it, is likely the true value and suggest the effect of the treatment, or independent variable, is too small to be meaningful (or that your experiment doesn't have enough power and precision to detect an effect important to you because the CI includes both that effect and 0). Added Later: I really should have said that, while you can use a CI like a test, it isn't one. It's an estimate of a range where you think the parameter value lies. You can make test-like inferences but you're just so much better off never talking about it that way. Which is better? A) The effect is 0.6, t (29) = 2.8, p < 0.05. This statistically significant effect is... (some discussion ensues about this statistical significance without any mention of or even strong ability to discuss the practical implication of the magnitude of the finding... under a Neyman-Pearson framework the magnitude of the t and p values is pretty much meaningless and all you can discuss is whether the effect is present or isn't found to be present. You can never really talk about there not actually being an effect based on the test.) or B) Using a 95% confidence interval I estimate the effect to be between 0.2 and 1.0. (some discussion ensues talking about the actual effect of interest, whether its plausible values are ones that have any particular meaning and any use of the word significant for exactly what it's supposed to mean. In addition, the width of the CI can go directly to a discussion of whether this is a strong finding or whether you can only reach a more tentative conclusion) If you took a basic statistics class you might initially gravitate toward A. And there may be some cases where it is a better way to report a result. But for most work B is by far and away superior. A range estimate is not a test. | {
"source": [
"https://stats.stackexchange.com/questions/16312",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/5864/"
]
} |
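A brief R sketch of the contrast drawn above, with simulated numbers chosen to roughly match the example in the answer (an effect near 0.6 with a 95% CI of about 0.2 to 1.0); none of this code comes from the answer itself.
set.seed(1)
y <- rnorm(30, mean = 0.6, sd = 1.1)   # hypothetical treatment effects
out <- t.test(y, mu = 0)

out$p.value    # supports only a "significant / not significant" style statement
out$conf.int   # a 95% CI for the effect: a range whose practical meaning you can discuss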
16,334 | I've never had a course in statistics, so I hope I'm asking in the right place here. Suppose I have only two data describing a normal distribution: the mean $\mu$ and variance $\sigma^2$. I want to use a computer to randomly sample from this distribution such that I respect these two statistics. It's pretty obvious that I can handle the mean by simply normalizing around 0: just add $\mu$ to each sample before outputting the sample. But I don't see how to programmatically generate samples to respect $\sigma^2$. My program will be in a conventional programming language; I don't have access to any statistical packages. | If you can sample from a given distribution with mean 0 and variance 1, then you can easily sample from a scale-location transformation of that distribution, which has mean $\mu$ and variance $\sigma^2$. If $x$ is a sample from a mean 0 and variance 1 distribution then
$$\sigma x + \mu$$
is a sample with mean $\mu$ and variance $\sigma^2$. So, all you have to do is to scale the variable by the standard deviation $\sigma$ (square root of the variance) before adding the mean $\mu$. How you actually get a simulation from a normal distribution with mean 0 and variance 1 is a different story. It's fun and interesting to know how to implement such things, but whether you use a statistical package or programming language or not, I will recommend that you obtain and use a suitable function or library for the random number generation. If you want advice on what library to use you might want to add specific information on which programming language(s) you are using. Edit: In the light of the comments, some other answers and the fact that Fixee accepted this answer, I will give some more details on how one can use transformations of uniform variables to produce normal variables. One method, already mentioned in a comment by VitalStatistix , is the Box-Muller method that takes two independent uniform random variables and produces two independent normal random variables. A similar method that avoids the computation of two transcendental functions sin and cos at the expense of a few more simulations was posted as an answer by francogrex . A completely general method is the transformation of a uniform random variable by the inverse distribution function. If $U$ is uniformly distributed on $[0,1]$ then
$$\Phi^{-1}(U)$$
has a standard normal distribution. Though there is no explicit analytic formula for $\Phi^{-1}$, it can be computed by accurate numerical approximations. The current implementation in R (last I checked) uses this idea. The method is conceptually very simple, but requires an accurate implementation of $\Phi^{-1}$, which is probably not as widespread as the (other) transcendental functions log , sin and cos . Several answers mention the possibility of using the central limit theorem to approximate the normal distribution as an average of uniform random variables. This is not generally recommended. Arguments presented, such as matching the mean 0 and variance 1, and considerations of support of the distribution are not convincing. In Exercise 2.3 in "Introducing Monte Carlo Methods with R" by Christian P. Robert and George Casella this generator is called antiquated and the approximation is called very poor . There is a bewildering number of other ideas. Chapter 3 and, in particular, Section 3.4, in "The Art of Computer Programming" Vol. 2 by Donald E. Knuth is a classical reference on random number generation. Brian Ripley wrote Computer Generation of Random Variables: A Tutorial , which may be useful. The book mentioned by Robert and Casella, or perhaps Chapter 2 in their other book, "Monte Carlo statistical methods", is also recommended. At the end of the day, a correctly implemented method is not better than the uniform pseudo random number generator used. Personally, I prefer to rely on special purpose libraries that I believe are trustworthy. I almost always rely on the methods implemented in R either directly in R or via the API in C/C++. Obviously, this is not a solution for everybody, but I am not familiar enough with other libraries to recommend alternatives. | {
"source": [
"https://stats.stackexchange.com/questions/16334",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/6609/"
]
} |
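A minimal R sketch of the scale-location transformation described above, plus a hand-rolled Box-Muller draw from two uniforms; the target mean and variance are arbitrary, and in practice the answer's advice to rely on a trusted library generator (e.g. rnorm) stands.
mu <- 10; sigma <- sqrt(4)                     # target mean 10 and variance 4 (illustrative)

# Scale-location transform of standard normal draws from a library generator
x <- mu + sigma * rnorm(1e5)
c(mean(x), var(x))                             # approximately 10 and 4

# Box-Muller: two independent uniforms give a standard normal draw
u1 <- runif(1e5); u2 <- runif(1e5)
z  <- sqrt(-2 * log(u1)) * cos(2 * pi * u2)
y  <- mu + sigma * z
c(mean(y), var(y))                             # again approximately 10 and 4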
16,349 | I want to derive the limits for the $100(1-\alpha)\%$ confidence interval for the ratio of two means. Suppose, $X_1 \sim N(\theta_1, \sigma^2)$ and $X_2 \sim N(\theta_2, \sigma^2)$
being independent, the mean ratio $\Gamma = \theta_1/\theta_2$. I tried to solve:
$$\Pr\left(-z(\alpha/2) \leq \frac{X_1 - \Gamma X_2}{\sigma\sqrt{1 + \Gamma^2}} \leq z(\alpha/2)\right) = 1 - \alpha$$ but that equation couldn't be solved for many cases (no roots). Am I doing something wrong? Is there a better approach? Thanks | Fieller's method does what you want -- compute a confidence interval for the quotient of two means, both assumed to be sampled from Gaussian distributions. The original citation is: Fieller EC: The biological standardization of Insulin. Suppl to J R Statist Soc 1940, 7:1-64. The Wikipedia article does a good job of summarizing. I've created an online calculator that does the computation. Here is a page summarizing the math from the first edition of my Intuitive Biostatistics | {
"source": [
"https://stats.stackexchange.com/questions/16349",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/4569/"
]
} |
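A rough R sketch of Fieller's interval for the ratio of two means from independent samples, assuming zero covariance between the numerator and denominator estimates and t-based standard errors; the function name and the simulated inputs are illustrative assumptions, not part of the answer or its calculator.
fieller_ci <- function(x1, x2, conf = 0.95) {
  m1 <- mean(x1); m2 <- mean(x2)
  v1 <- var(x1) / length(x1)            # squared standard error of m1
  v2 <- var(x2) / length(x2)            # squared standard error of m2
  tq <- qt(1 - (1 - conf) / 2, df = length(x1) + length(x2) - 2)
  a  <- m2^2 - tq^2 * v2                # if a <= 0, the denominator mean is too close to zero
  disc <- m1^2 * m2^2 - a * (m1^2 - tq^2 * v1)
  if (a <= 0 || disc < 0) return(c(NA, NA))   # the "no roots" / unbounded-interval case
  (m1 * m2 + c(-1, 1) * sqrt(disc)) / a       # lower and upper limits
}

set.seed(7)
fieller_ci(rnorm(25, 10, 2), rnorm(25, 5, 2))  # hypothetical data; true ratio is 2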
16,381 | What are the usual assumptions for linear regression? Do they include: a linear relationship between the independent and dependent variables, independent errors, normally distributed errors, and homoscedasticity? Are there any others? | The answer depends heavily on how you define complete and usual. Suppose we write the linear regression model in the following way: $
\newcommand{\x}{\mathbf{x}}
\newcommand{\bet}{\boldsymbol\beta}
\DeclareMathOperator{\E}{\mathbb{E}}
\DeclareMathOperator{\Var}{Var}
\DeclareMathOperator{\Cov}{Cov}
\DeclareMathOperator{\Tr}{Tr}
$ $$y_i = \x_i'\bet + u_i$$ where $\mathbf{x}_i$ is the vector of predictor variables, $\beta$ is the parameter of interest, $y_i$ is the response variable, and $u_i$ are the disturbance. One of the possible estimates of $\beta$ is the least squares estimate: $$
\hat\bet
= \textrm{argmin}_{\bet}\sum(y_i-\x_i\bet)^2
= \left(\sum \x_i \x_i'\right)^{-1} \sum \x_i y_i
.$$ Now practically all of the textbooks deal with the assumptions when this estimate $\hat\bet$ has desirable properties, such as unbiasedness, consistency, efficiency, some distributional properties, etc. Each of these properties requires certain assumptions, which are not the same. So the better question would be to ask which assumptions are needed for wanted properties of the LS estimate. The properties I mention above require some probability model for regression. And here we have the situation where different models are used in different applied fields. The simple case is to treat $y_i$ as an independent random variables, with $\x_i$ being non-random. I do not like the word usual, but we can say that this is the usual case in most applied fields (as far as I know). Here is the list of some of the desirable properties of statistical estimates: The estimate exists. Unbiasedness: $E\hat\bet=\bet$ . Consistency: $\hat\bet \to \bet$ as $n\to\infty$ ( $n$ here is the size of a data sample). Efficiency: $\Var(\hat\bet)$ is smaller than $\Var(\tilde\bet)$ for alternative estimates $\tilde\bet$ of $\bet$ . The ability to either approximate or calculate the distribution function of $\hat\bet$ . Existence Existence property might seem weird, but it is very important. In the definition of $\hat\beta$ we invert the matrix $\sum \x_i \x_i'.$ It is not guaranteed that the inverse of this matrix exists for all possible variants of $\x_i$ . So we immediately get our first assumption: Matrix $\sum \x_i \x_i'$ should be of full rank, i.e. invertible. Unbiasedness We have $$
\E\hat\bet
= \left(\sum \x_i \x_i' \right)^{-1}\left(\sum \x_i \E y_i \right)
= \bet,
$$ if $$\E y_i = \x_i \bet.$$ We may number it the second assumption, but we may have stated it outright, since this is one of the natural ways to define linear relationship. Note that to get unbiasedness we only require that $\E y_i = \x_i \bet$ for all $i$ , and $\x_i$ are constants. Independence property is not required. Consistency For getting the assumptions for consistency we need to state more clearly what do we mean by $\to$ . For sequences of random variables we have different modes of convergence: in probability, almost surely, in distribution and $p$ -th moment sense. Suppose we want to get the convergence in probability. We can use either law of large numbers, or directly use the multivariate Chebyshev inequality (employing the fact that $\E \hat\bet = \bet$ ): $$\Pr(\lVert \hat\bet - \bet \rVert >\varepsilon)\le \frac{\Tr(\Var(\hat\bet))}{\varepsilon^2}.$$ (This variant of the inequality comes directly from applying Markov's inequality to $\lVert \hat\bet - \bet\rVert^2$ , noting that $\E \lVert \hat\bet - \bet\rVert^2 = \Tr \Var(\hat\bet)$ .) Since convergence in probability means that the left hand term must vanish for any $\varepsilon>0$ as $n\to\infty$ , we need that $\Var(\hat\bet)\to 0$ as $n\to\infty$ . This is perfectly reasonable since with more data the precision with which we estimate $\bet$ should increase. We have that $$
\Var(\hat\bet)
=\left( \sum \x_i \x_i' \right)^{-1} \left( \sum_i \sum_j \x_i \x_j' \Cov(y_i, y_j) \right) \left(\sum \mathbf{x}_i\mathbf{x}_i'\right)^{-1}.$$ Independence ensures that $\Cov(y_i, y_j) = 0$ , hence the expression simplifies to $$
\Var(\hat\bet) =
\left( \sum \x_i \x_i' \right)^{-1}
\left( \sum_i \x_i \x_i' \Var(y_i) \right)
\left( \sum \x_i \x_i' \right)^{-1}
.$$ Now assume $\Var(y_i) = \text{const}$ , then $$
\Var(\hat\beta)
= \left(\sum \x_i \x_i' \right)^{-1} \Var(y_i)
.$$ Now if we additionally require that $\frac{1}{n} \sum \x_i \x_i'$ is bounded for each $n$ , we immediately get $$\Var(\bet) \to 0 \text{ as } n \to \infty.$$ So to get the consistency we assumed that there is no autocorrelation ( $\Cov(y_i, y_j) = 0$ ), the variance $\Var(y_i)$ is constant, and the $\x_i$ do not grow too much. The first assumption is satisfied if $y_i$ comes from independent samples. Efficiency The classic result is the Gauss-Markov theorem . The conditions for it is exactly the first two conditions for consistency and the condition for unbiasedness. Distributional properties If $y_i$ are normal we immediately get that $\hat\bet$ is normal, since it is a linear combination of normal random variables. If we assume previous assumptions of independence, uncorrelatedness and constant variance we get that $$
\hat\bet \sim \mathcal{N}\left(\bet, \sigma^2\left(\sum \x_i \x_i' \right)^{-1} \right)$$ where $\Var(y_i)=\sigma^2$ . If $y_i$ are not normal, but independent, we can get approximate distribution of $\hat\bet$ thanks to the central limit theorem. For this we need to assume that $$\lim_{n \to \infty} \frac{1}{n} \sum \x_i \x_i' \to A$$ for some matrix $A$ . The constant variance for asymptotic normality is not required if we assume that $$\lim_{n \to \infty} \frac{1}{n} \sum \x_i \x_i' \Var(y_i) \to B.$$ Note that with constant variance of $y$ , we have that $B = \sigma^2 A$ . The central limit theorem then gives us the following result: $$\sqrt{n}(\hat\bet - \bet) \to \mathcal{N}\left(0, A^{-1} B A^{-1} \right).$$ So from this we see that independence and constant variance for $y_i$ and certain assumptions for $\mathbf{x}_i$ gives us a lot of useful properties for LS estimate $\hat\bet$ . The thing is that these assumptions can be relaxed. For example we required that $\x_i$ are not random variables. This assumption is not feasible in econometric applications. If we let $\x_i$ be random, we can get similar results if use conditional expectations and take into account the randomness of $\x_i$ . The independence assumption also can be relaxed. We already demonstrated that sometimes only uncorrelatedness is needed. Even this can be further relaxed and it is still possible to show that the LS estimate will be consistent and asymptoticaly normal. See for example White's book for more details. | {
"source": [
"https://stats.stackexchange.com/questions/16381",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/6620/"
]
} |
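A brief R check of the least squares formula used throughout the answer above, computing the closed-form estimate (the inverted sum of x_i x_i' times the sum of x_i y_i) directly and comparing it with lm(); the simulated design and coefficients are only for illustration.
set.seed(3)
n <- 200
X <- cbind(1, runif(n), rnorm(n))           # design matrix with an intercept column
beta <- c(2, -1, 0.5)
y <- as.vector(X %*% beta + rnorm(n))

beta_hat <- solve(t(X) %*% X, t(X) %*% y)   # least squares via the normal equations
cbind(direct = as.vector(beta_hat), lm = coef(lm(y ~ X[, 2] + X[, 3])))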
16,390 | I have been quite happily using mixed effects models for a while now with longitudinal data. I wish I could fit AR relationships in lmer (I think I'm right that I can't do this?) but I don't think it's desperately important so I don't worry too much. I've just come across generalized estimating equations (GEE), and they seem to offer a lot more flexibility than ME models. At the risk of asking an over-general question, is there any advice as to which is better for different tasks? I've seen some papers comparing them, and they tend to be of the form: "In this highly specialised area, don't use GEEs for X, don't use ME models for Y". I haven't found any more general advice. Can anyone enlighten me? Thank you! | Use GEE when you're interested in uncovering the population average effect of a covariate vs. the individual specific effect. These two things are only equivalent in linear models, but not in non-linear (e.g. logistic). To see this, take, for example the random effects logistic model of the $j$'th observation of the $i$'th subject, $Y_{ij}$; $$ \log \left( \frac{p_{ij}}{1-p_{ij}} \right)
= \mu + \eta_{i} $$ where $\eta_{i} \sim N(0,\sigma^{2})$ is a random effect for subject $i$ and $p_{ij} = P(Y_{ij} = 1|\eta_{i})$. If you used a random effects model on these data, then you would get an estimate of $\mu$ that accounts for the fact that a mean zero normally distributed perturbation was applied to each individual, making it individual specific. If you used GEE on these data, you would estimate the population average log odds. In this case that would be $$ \nu = \log \left( \frac{ E_{\eta} \left( \frac{1}{1 + e^{-\mu-\eta_{i}}} \right)}{
1-E_{\eta} \left( \frac{1}{1 + e^{-\mu-\eta_{i}}} \right)} \right) $$ $\nu \neq \mu$, in general. For example, if $\mu = 1$ and $\sigma^{2} = 1$, then $\nu \approx .83$. Although the random effects have mean zero on the transformed (or linked ) scale, their effect is not mean zero on the original scale of the data. Try simulating some
data from a mixed effects logistic regression model and comparing the population level average with the inverse-logit of the intercept and you will see that they are not equal, as in this example. This difference in the interpretation of the coefficients is the fundamental difference between GEE and random effects models . Edit: In general, a mixed effects model with no predictors can be written as $$ \psi \big( E(Y_{ij}|\eta_{i}) \big) = \mu + \eta_{i} $$ where $\psi$ is a link function. Whenever $$ \psi \Big( E_{\eta} \Big( \psi^{-1} \big( E(Y_{ij}|\eta_{i}) \big) \Big) \Big) \neq E_{\eta} \big( E(Y_{ij}|\eta_{i}) \big) $$ there will be a difference between the population average coefficients (GEE) and the individual specific coefficients (random effects models). That is, the averages change by transforming the data, integrating out the random effects on the transformed scale, and then transformating back. Note that in the linear model, (that is, $\psi(x) = x$), the equality does hold, so they are equivalent. Edit 2: It is also worth noting that the "robust" sandwich-type standard errors produced by a GEE model provide valid asymptotic confidence intervals (e.g. they actually cover 95% of the time) even if the correlation structure specified in the model is not correct. Edit 3: If your interest is in understanding the association structure in the data, the GEE estimates of associations are notoriously inefficient (and sometimes inconsistent). I've seen a reference for this but can't place it right now. | {
"source": [
"https://stats.stackexchange.com/questions/16390",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/199/"
]
} |
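A small R simulation of the numeric claim above (mu = 1 and sigma^2 = 1 giving a population-average log odds of roughly 0.83), done just as the answer suggests: average the inverse-logit over the random-effect distribution and transform back.
set.seed(123)
mu  <- 1
eta <- rnorm(1e6, mean = 0, sd = 1)    # random intercepts with sigma^2 = 1
p_marginal <- mean(plogis(mu + eta))   # population-average success probability
qlogis(p_marginal)                     # population-average log odds: about 0.83, not 1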
16,402 | Is anyone aware of a statistics resource (preferably 20 to 40 pages maximum) that reviews basic stats for people who took statistics classes already? This resource could be handed out as a refresher to those who need it. The reason why I'm not looking for a book is that I find that people will more likely read a 20/40 pager than a 500 page book that goes into too much detail for the intended scope. The ideal resource will explain the statistical methods, hypotheses, various methods, etc. It has to cover methods like Chi-squared. It has to be written in an easy to read and digest manner. | | {
"source": [
"https://stats.stackexchange.com/questions/16402",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/59/"
]
} |
16,458 | I am computing a very simple Kalman filter (random walk + noise model). I find that the output of the filter is very similar to a moving average. Is there an equivalence between the two? If not, what is the difference? | A random walk + noise model can be shown to be equivalent to an EWMA (exponentially weighted moving average). The Kalman gain ends up being the same as the EWMA weighting. This is shown in some detail in Time Series Analysis by State Space Methods; if you Google Kalman Filter and EWMA you will find a number of resources that discuss the equivalence. In fact you can use the state space equivalence to build confidence intervals for EWMA estimates, etc. | {
"source": [
"https://stats.stackexchange.com/questions/16458",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1709/"
]
} |
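A small R sketch of the equivalence claimed above: for the local-level (random walk + noise) model the Kalman gain settles to a constant, and the steady-state update is exactly an EWMA with that constant as its weight. The noise variances and starting values here are arbitrary choices for illustration.
set.seed(9)
n <- 200; q <- 0.1; r <- 1                   # state noise (q) and observation noise (r) variances
x <- cumsum(rnorm(n, sd = sqrt(q)))          # random walk state
y <- x + rnorm(n, sd = sqrt(r))              # noisy observations

# Kalman filter for the local level model
a <- 0; P <- 10
kal <- gain <- numeric(n)
for (t in 1:n) {
  P <- P + q                                 # predict
  K <- P / (P + r)                           # Kalman gain
  a <- a + K * (y[t] - a)                    # update
  P <- (1 - K) * P
  kal[t] <- a; gain[t] <- K
}

# EWMA using the steady-state gain as the smoothing weight
lambda <- gain[n]
ewma <- numeric(n); ewma[1] <- y[1]
for (t in 2:n) ewma[t] <- ewma[t - 1] + lambda * (y[t] - ewma[t - 1])

tail(cbind(gain, kal, ewma))                 # the gain is constant; kal and ewma nearly coincide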
16,493 | For a prediction interval in linear regression you still use $\hat{E}[Y|x] = \hat{\beta_0}+\hat{\beta}_{1}x$ to generate the interval. You also use this to generate a confidence interval of $E[Y|x_0]$. What's the difference between the two? | Your question isn't quite correct. A confidence interval gives a range for $\text{E}[y \mid x]$, as you say. A prediction interval gives a range for $y$ itself. Naturally, our best guess for $y$ is $\text{E}[y \mid x]$, so the intervals will both be centered around the same value, $x\hat{\beta}$. As @Greg says, the standard errors are going to be different---we guess the expected value of $\text{E}[y \mid x]$ more precisely than we estimate $y$ itself. Estimating $y$ requires including the variance that comes from the true error term. To illustrate the difference, imagine that we could get perfect estimates of our $\beta$ coefficients. Then, our estimate of $\text{E}[y \mid x]$ would be perfect. But we still wouldn't be sure what $y$ itself was because there is a true error term that we need to consider. Our confidence "interval" would just be a point because we estimate $\text{E}[y \mid x]$ exactly right, but our prediction interval would be wider because we take the true error term into account. Hence, a prediction interval will be wider than a confidence interval. | {
"source": [
"https://stats.stackexchange.com/questions/16493",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/6630/"
]
} |
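A brief R illustration of the distinction drawn above, using simulated data; predict() returns both kinds of interval, and the prediction interval is visibly wider because it also carries the error variance.
set.seed(2)
x <- runif(50, 0, 10)
y <- 3 + 2 * x + rnorm(50, sd = 2)
fit <- lm(y ~ x)
new <- data.frame(x = 5)

predict(fit, new, interval = "confidence")   # range for E[y | x = 5]
predict(fit, new, interval = "prediction")   # wider range for a new observation y at x = 5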
16,516 | How can I calculate the confidence interval of a mean in a non-normally distributed sample? I understand bootstrap methods are commonly used here, but I am open to other options. While I am looking for a non-parametric option, if someone can convince me that a parametric solution is valid that would be fine. The sample size is > 400. If anyone could give a sample in R it would be much appreciated. | First of all, I would check whether the mean is an appropriate index for the task at hand. If you are looking for "a typical/ or central value" of a skewed distribution, the mean might point you to a rather non-representative value. Consider the log-normal distribution: x <- rlnorm(1000)
plot(density(x), xlim=c(0, 10))
abline(v=mean(x), col="red")
abline(v=mean(x, tr=.20), col="darkgreen")
abline(v=median(x), col="blue") The mean (red line) is rather far away from the bulk of the data. 20% trimmed mean (green) and median (blue) are closer to the "typical" value. The results depend on the type of your "non-normal" distribution (a histogram of your actual data would be helpful). If it is not skewed, but has heavy tails, your CIs will be very wide. In any case, I think that bootstrapping indeed is a good approach, as it also can give you asymmetrical CIs. The R package simpleboot is a good start: library(simpleboot)
# 20% trimmed mean bootstrap
b1 <- one.boot(x, mean, R=2000, tr=.2)
boot.ci(b1, type=c("perc", "bca")) ... gives you following result: # The bootstrap trimmed mean:
> b1$t0
[1] 1.144648
BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
Based on 2000 bootstrap replicates
Intervals :
Level Percentile BCa
95% ( 1.062, 1.228 ) ( 1.065, 1.229 )
Calculations and Intervals on Original Scale | {
"source": [
"https://stats.stackexchange.com/questions/16516",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/179/"
]
} |
16,608 | Say I have two normal distributions A and B with means $\mu_A$ and $\mu_B$ and variances $\sigma_A$ and $\sigma_B$. I want to take a weighted mixture of these two distributions using weights $p$ and $q$ where $0\le p \le 1$ and $q = 1-p$. I know that the mean of this mixture would be $\mu_{AB} = (p\times\mu_A) + (q\times\mu_B)$. What would the variance be? A concrete example would be if I knew the parameters for the distribution of male and female height. If I had a room of people that was 60% male, I could produce the expected mean height for the whole room, but what about the variance? | The variance is the second moment minus the square of the first moment, so it suffices to compute moments of mixtures. In general, given distributions with PDFs $f_i$ and constant (non-random) weights $p_i$, the PDF of the mixture is $$f(x) = \sum_i{p_i f_i(x)},$$ from which it follows immediately for any moment $k$ that $$\mu^{(k)} = \mathbb{E}_{f}[x^k] = \sum_i{p_i \mathbb{E}_{f_i}[x^k]} = \sum_i{p_i \mu_i^{(k)}}.$$ I have written $\mu^{(k)}$ for the $k^{th}$ moment of $f$ and $\mu_i^{(k)}$ for the $k^{th}$ moment of $f_i$. Using these formulae, the variance can be written $$\text{Var}(f) = \mu^{(2)} - \left(\mu^{(1)}\right)^2 = \sum_i{p_i \mu_i^{(2)}} - \left(\sum_i{p_i \mu_i^{(1)}}\right)^2.$$ Equivalently, if the variances of the $f_i$ are given as $\sigma^2_i$, then $\mu^{(2)}_i = \sigma^2_i + \left(\mu^{(1)}_i\right)^2$, enabling the variance of the mixture $f$ to be written in terms of the variances and means of its components as $$\eqalign{
\text{Var}(f) &= \sum_i{p_i \left(\sigma^2_i + \left(\mu^{(1)}_i\right)^2\right)} - \left(\sum_i{p_i \mu_i^{(1)}}\right)^2 \\
&= \sum_i{p_i \sigma^2_i} + \sum_i{p_i\left(\mu_i^{(1)}\right)^2} - \left(\sum_{i}{p_i \mu_i^{(1)}}\right)^2.
}$$ In words, this is the (weighted) average variance plus the average squared mean minus the square of the average mean. Because squaring is a convex function, Jensen's Inequality asserts that the average squared mean can be no less than the square of the average mean. This allows us to understand the formula as stating the variance of the mixture is the mixture of the variances plus a non-negative term accounting for the (weighted) dispersion of the means. In your case the variance is $$p_A \sigma_A^2 + p_B \sigma_B^2 + \left[p_A\mu_A^2 + p_B\mu_B^2 - (p_A \mu_A + p_B \mu_B)^2\right].$$ We can interpret this is a weighted mixture of the two variances, $p_A\sigma_A^2 + p_B\sigma_B^2$, plus a (necessarily positive) correction term to account for the shifts from the individual means relative to the overall mixture mean. The utility of this variance in interpreting data, such as given in the question, is doubtful, because the mixture distribution will not be Normal (and may depart substantially from it, to the extent of exhibiting bimodality). | {
"source": [
"https://stats.stackexchange.com/questions/16608",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/287/"
]
} |
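A quick R check of the two-component formula derived above, with illustrative height-like numbers for a 60% male room; the simulated variance matches the analytic expression rather than the naive weighted average of the two variances.
p <- 0.6; q <- 1 - p
muA <- 178; muB <- 165                # illustrative means (cm)
sdA <- 7;   sdB <- 6                  # illustrative standard deviations

# Analytic mixture variance: weighted variances plus the dispersion of the means
v_mix <- p * sdA^2 + q * sdB^2 + (p * muA^2 + q * muB^2 - (p * muA + q * muB)^2)

# Simulation check
set.seed(4)
n <- 1e6
male <- rbinom(n, 1, p)
h <- ifelse(male == 1, rnorm(n, muA, sdA), rnorm(n, muB, sdB))
c(analytic = v_mix, simulated = var(h))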
16,689 | I understand the logic of coding for data analysis. My question below is on the use of a specific code. Is there a reason why gender is often coded as 0 for female and 1 for male? Why is this coding considered 'standard'? Compare this with Female = 1 and Male = 2. Is there a problem with this coding? | Reasons to prefer zero-one coding of binary variables: The mean of a zero-one variable represents the proportion in the category represented by the value one (e.g., the percentage of males). In a simple regression $y = a + bx$ where $x$ is the zero-one variable, the constant has a straightforward interpretation (e.g., $a$ is the mean of $y$ for females). Any coding of a binary variable where the difference between the two values is one (i.e., zero-one, but also one-two) gives a straightforward interpretation to the regression coefficient (e.g., $b$ is the effect of going from female to male on y). Assorted points about coding binary variables: Any coding of a binary variable that preserves the order of the categories (e.g., female = 0, male = 1; female = 1, male = 2; female = 1007, male =2000; etc.) will not affect the correlation of the binary variable with other variables. Any tables that report a binary variable in this way should make it clear how the variable was coded. It can also be useful to label the variable by the category that represent the value of one: e.g., y = a + b * Male rather than y = a + b * Gender . For some binary variables, one category more naturally should be coded as one. For example, when looking at the difference between treatment and control, control should be zero, and treatment should be one, because the regression coefficient is best thought of as the effect of the treatment. Flipping the categories (e.g., making female = 1 and male = 0, rather than female = 0 and male = 1) will flip the sign of correlations and regression coefficients. In the case of gender, there is typically no natural reason to code the variable female = 0, male = 1, versus male = 0, female = 1. However, convention may suggest one coding is more familiar to a reader; or choosing a coding that makes the regression coefficient positive may ease interpretation. Also, in some contexts, one gender may be thought of as the reference category; for example, if you were studying the effect of being female in a male dominated profession on income, it might make sense to code male = 0, and female = 1, in order to speak of the effect of being female. Scaling regression coefficients in thoughtful ways can have a powerful effect on the interpretability of regression coefficients. Andrew Gelman discusses this quite a bit; see for example his 2008 paper Scaling regression inputs by dividing by two standard deviations (PDF) in Statistics in Medicine , 27, 2865-2873. Coding male and female as -1 and +1 is another option that can provide meaningful coefficients (see "what is effect coding" ). | {
"source": [
"https://stats.stackexchange.com/questions/16689",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/6096/"
]
} |
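A tiny R example of the first two points in the list above: with zero-one coding the mean of the dummy variable is the proportion of males, and the regression intercept is the mean for the zero (female) category. The numbers are made up.
set.seed(5)
male <- rbinom(100, 1, 0.5)                    # 0 = female, 1 = male
y    <- 165 + 13 * male + rnorm(100, sd = 7)   # hypothetical outcome (say, height in cm)

mean(male)            # proportion coded 1 (male)
coef(lm(y ~ male))    # intercept ~ female mean; slope ~ male minus female difference
mean(y[male == 0])    # compare with the intercept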
16,710 | I've come across a very good text on Bayes/MCMC. It suggests that standardisation of your independent variables will make an MCMC (Metropolis) algorithm more efficient, but also that it may reduce (multi)collinearity. Can that be true? Is this something I should be doing as standard? (Sorry.) Kruschke 2011, Doing Bayesian Data Analysis. (AP) edit: for example > data(longley)
> cor.test(longley$Unemployed, longley$Armed.Forces)
Pearson's product-moment correlation
data: longley$Unemployed and longley$Armed.Forces
t = -0.6745, df = 14, p-value = 0.5109
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
-0.6187113 0.3489766
sample estimates:
cor
-0.1774206
> standardise <- function(x) {(x-mean(x))/sd(x)}
> cor.test(standardise(longley$Unemployed), standardise(longley$Armed.Forces))
Pearson's product-moment correlation
data: standardise(longley$Unemployed) and standardise(longley$Armed.Forces)
t = -0.6745, df = 14, p-value = 0.5109
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
-0.6187113 0.3489766
sample estimates:
cor
-0.1774206 This hasn't reduced the correlation, nor therefore the (albeit limited) linear dependence of the vectors. What's going on? | It doesn't change the collinearity between the main effects at all. Scaling doesn't either. Any linear transform won't do that. What it changes is the correlation between main effects and their interactions. Even if A and B are independent with a correlation of 0, the correlation between A and A:B will be dependent upon scale factors. Try the following in an R console. Note that rnorm just generates random samples from a normal distribution with population values you set, in this case 50 samples. The scale function standardizes the sample to a mean of 0 and SD of 1. set.seed(1) # the samples will be controlled by setting the seed - you can try others
a <- rnorm(50, mean = 0, sd = 1)
b <- rnorm(50, mean = 0, sd = 1)
mean(a); mean(b)
# [1] 0.1004483 # not the population mean, just a sample
# [1] 0.1173265
cor(a ,b)
# [1] -0.03908718 The incidental correlation is near 0 for these independent samples. Now normalize to mean of 0 and SD of 1. a <- scale( a )
b <- scale( b )
cor(a, b)
# [1,] -0.03908718 Again, this is the exact same value even though the mean is 0 and SD = 1 for both a and b . cor(a, a*b)
# [1,] -0.01038144 This is also very near 0. (a*b can be considered the interaction term) However, usually the SD and mean of predictors differ quite a bit so let's change b . Instead of taking a new sample I'll rescale the original b to have a mean of 5 and SD of 2. b <- b * 2 + 5
cor(a, b)
# [1] -0.03908718 Again, that familiar correlation we've seen all along. The scaling is having no impact on the correlation between a and b . But!! cor(a, a*b)
# [1,] 0.9290406 Now that will have a substantial correlation which you can make go away by centring and/or standardizing. I generally go with just the centring. EDIT: @Tim has an answer here that's a bit more directly on topic. I didn't have Kruschke at the time. The correlation between intercept and slope is similar to the issue of correlation with interactions though. They're both about conditional relationships. The intercept is conditional on the slope; but unlike an interaction it's one way because the slope is not conditional on the intercept. Regardless, if the slope varies so will the intercept unless the mean of the predictor is 0. Standardizing or centring the predictor variables will minimize the effect of the intercept changing with the slope because the mean will be at 0 and therefore the regression line will pivot at the y-axis and its slope will have no effect on the intercept. | {
"source": [
"https://stats.stackexchange.com/questions/16710",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/-1/"
]
} |
16,750 | I am searching for [free] software that can produce nice looking graphical models, e.g. Any suggestions would be appreciated. | I currently have a similar problem (drawing multiple path diagrams for my dissertation), and so I was examining many of the options listed here already to draw similar diagrams. Many of the listed resources for drawing such vector graphics (such as in microsoft office or google drawings) can produce really nice path diagrams, with fairly minimal effort. But, part of the reason I was unsatisfied with such programs is that I needed to produce many diagrams, with only fairly minor changes between each diagram (e.g. add another node, change a label). The point and click vector graphics tools aren't well suited for this, and take more effort than need be to make such minor changes. Also it becomes difficult to maintain a template between many drawings. So, I decided to examine options to produce such graphics programattically. Graphviz, as was already mentioned by thias, came really close to having all the bells and whistles I wanted for my graphics (as well as quite simple code to produce them), but it fell short for my needs in two ways; 1) mathematical fonts are lacking (e.g. I'm not sure if you can label a node with the $\beta$ symbol in Graphviz, 2) curved lines are hard to draw (see this post on drawing path diagrams using Graphviz on @Stask's website). Because of these limitations I have currently settled (very happily) on using the Tikz/pgf drawing library in Latex. An example is below of my attempt at reproducing your graphic (the biggest pain was the labels in the lower right corners of the boxes!); \documentclass[11pt]{report}
\usepackage{tikz}
\usetikzlibrary{fit,positioning}
\begin{document}
\begin{figure}
\centering
\begin{tikzpicture}
\tikzstyle{main}=[circle, minimum size = 10mm, thick, draw =black!80, node distance = 16mm]
\tikzstyle{connect}=[-latex, thick]
\tikzstyle{box}=[rectangle, draw=black!100]
\node[main, fill = white!100] (alpha) [label=below:$\alpha$] { };
\node[main] (theta) [right=of alpha,label=below:$\theta$] { };
\node[main] (z) [right=of theta,label=below:z] {};
\node[main] (beta) [above=of z,label=below:$\beta$] { };
\node[main, fill = black!10] (w) [right=of z,label=below:w] { };
\path (alpha) edge [connect] (theta)
(theta) edge [connect] (z)
(z) edge [connect] (w)
(beta) edge [connect] (w);
\node[rectangle, inner sep=0mm, fit= (z) (w),label=below right:N, xshift=13mm] {};
\node[rectangle, inner sep=4.4mm,draw=black!100, fit= (z) (w)] {};
\node[rectangle, inner sep=4.6mm, fit= (z) (w),label=below right:M, xshift=12.5mm] {};
\node[rectangle, inner sep=9mm, draw=black!100, fit = (theta) (z) (w)] {};
\end{tikzpicture}
\end{figure}
\end{document}
%note - compiled with pdflatex Now, I am already writing up my dissertation in Latex, so if you just want the image without having to compile a whole Latex document it is slightly inconvenient, but there are some fairly minor workarounds to produce an image more directly (see this question over on stackoverflow). There are a host of other benefits to using Tikz for such a project though: Extensive documentation. The pgf manual holds your hand through making some similar diagrams. And once you get your feet wet... A huge library of examples is there to demonstrate how to produce a huge variety of graphics. And finally, the Tex stack exchange site is a good place to ask any questions about Tikz. They have some whizzes over there making some pretty fancy graphics (check out their blog for some examples). At this time I have not considered some of the libraries for drawing the diagrams in the statistical package R directly from the specified models, but in the future I may consider them to a greater extent. There are some nice examples from the qgraph library for a proof of concept of what can be accomplished in R. | {
"source": [
"https://stats.stackexchange.com/questions/16750",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/2113/"
]
} |
16,753 | I am trying to implement a Watson Nadaraya classifier . There is one thing I didn't understand from the equation: $${F}(x)=\frac{\sum_{i=1}^n K_h(x-X_i) Y_i}{\sum_{i=1}^nK_h(x-X_i)}$$ What should I use for the kernel K? I have a 2-dimensional dataset which has 1000 samples (each sample is like this: [-0.10984628, 5.53485135] ). What confuses me is, based on my data, the input of the kernel function will be something like this: K([-0.62978309, 0.10464536]) And what I understand, it'll produce some number instead of an array, therefore I can go ahead and calculate F(x) which will also be a number. Then I'll check whether it is > or <= than zero. But I couldn't find any kernel that produces a number. So confused. Edit: I tried to implement my classifier based on the comments, but I got a very low accuracy. I appreciate if someone notices what's wrong with it. def gauss(x):
return (1.0 / np.sqrt(2 * np.pi)) * np.exp(- 0.5 * x**2)
def transform(X, h):
A = []
for i in X:
A.append(stats.norm.pdf(i[0],0,h)*stats.norm.pdf(i[1],0,h))
return A
N = 100
# pre-assign some mean and variance
mean1 = (0,9)
mean2 = (0,5)
cov = [[0.3,0.7],[0.7,0.3]]
# generate a dataset
dataset1 = np.random.multivariate_normal(mean1,cov,N)
dataset2 = np.random.multivariate_normal(mean2,cov,N)
X = np.vstack((dataset1, dataset2))
# pre-assign labels
Y1 = [1]*N
Y2 = [-1]*N
Y = Y1 + Y2
# assing a width
h = 0.5
#now, transform the data
X2 = transform(X, h)
j = 0
predicted = []
for i in X2:
# apply the equation
fx = sum((gauss(i-X2))*Y)/float(np.sum(gauss(i-X2)))
# if fx>0, it belongs to class 1
if fx >0:
predicted.append(1)
else:
predicted.append(-1)
j = j+1 | | {
"source": [
"https://stats.stackexchange.com/questions/16753",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/6727/"
]
} |
16,758 | I wonder if there is always a maximizer for any maximum (log-)likelihood estimation problem? In other words, is there some distribution and some of its parameters, for which the MLE problem does not have a maximizer? My question comes from a claim of an engineer that the cost function (likelihood or log-likelihood, I am not sure which was intended) in MLE is always concave and therefore it always has a maximizer. Thanks and regards! | Perhaps the engineer had in mind canonical exponential families: in their natural parametrization, the parameter space is convex and the log-likelihood is concave (see Thm 1.6.3 in Bickel & Doksum's Mathematical Statistics, Volume 1). Also, under some mild technical conditions (basically that the model be "full rank", or equivalently, that the natural parameter be identifiable), the log-likelihood function is strictly concave, which implies there exists a unique maximizer. (Corollary 1.6.2 in the same reference.) [Also, the lecture notes cited by @biostat make the same point.] Note that the natural parametrization of a canonical exponential family is usually different from the standard parametrization. So, while @cardinal points out that the log-likelihood for the family $\mathcal{N}(\mu,\sigma^2)$ is not concave in $\sigma^2$, it will be concave in the natural parameters, which are $\eta_1 = \mu / \sigma^2$ and $\eta_2 = -1/\sigma^2$. | {
"source": [
"https://stats.stackexchange.com/questions/16758",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1005/"
]
} |
16,921 | From Wikipedia , there are three interpretations of the degrees of freedom of a statistic: In statistics, the number of degrees of freedom is the number of
values in the final calculation of a statistic that are free to vary . Estimates of statistical parameters can be based upon different
amounts of information or data. The number of independent pieces of
information that go into the estimate of a parameter is called the
degrees of freedom (df). In general, the degrees of freedom of an
estimate of a parameter is equal to the number of independent scores
that go into the estimate minus the number of parameters used as
intermediate steps in the estimation of the parameter itself (which,
in sample variance, is one, since the sample mean is the only
intermediate step). Mathematically, degrees of freedom is the dimension of the domain of a
random vector , or essentially the number of 'free' components: how
many components need to be known before the vector is fully
determined . The bold words are what I don't quite understand. If possible, some mathematical formulations will help clarify the concept. Also do the three interpretations agree with each other? | This is a subtle question. It takes a thoughtful person not to understand those quotations! Although they are suggestive, it turns out that none of them is exactly or generally correct. I haven't the time (and there isn't the space here) to give a full exposition, but I would like to share one approach and an insight that it suggests. Where does the concept of degrees of freedom (DF) arise? The contexts in which it's found in elementary treatments are: The Student t-test and its variants such as the Welch or Satterthwaite solutions to the Behrens-Fisher problem (where two populations have different variances). The Chi-squared distribution (defined as a sum of squares of independent standard Normals), which is implicated in the sampling distribution of the variance. The F-test (of ratios of estimated variances). The Chi-squared test , comprising its uses in (a) testing for independence in contingency tables and (b) testing for goodness of fit of distributional estimates. In spirit, these tests run a gamut from being exact (the Student t-test and F-test for Normal variates) to being good approximations (the Student t-test and the Welch/Satterthwaite tests for not-too-badly-skewed data) to being based on asymptotic approximations (the Chi-squared test). An interesting aspect of some of these is the appearance of non-integral "degrees of freedom" (the Welch/Satterthwaite tests and, as we will see, the Chi-squared test). This is of especial interest because it is the first hint that DF is not any of the things claimed of it. We can dispose right away of some of the claims in the question. Because "final calculation of a statistic" is not well-defined (it apparently depends on what algorithm one uses for the calculation), it can be no more than a vague suggestion and is worth no further criticism. Similarly, neither "number of independent scores that go into the estimate" nor "the number of parameters used as intermediate steps" are well-defined. "Independent pieces of information that go into [an] estimate" is difficult to deal with, because there are two different but intimately related senses of "independent" that can be relevant here. One is independence of random variables; the other is functional independence. As an example of the latter, suppose we collect morphometric measurements of subjects--say, for simplicity, the three side lengths $X$, $Y$, $Z$, surface areas $S=2(XY+YZ+ZX)$, and volumes $V=XYZ$ of a set of wooden blocks. The three side lengths can be considered independent random variables, but all five variables are dependent RVs. The five are also functionally dependent because the codomain ( not the "domain"!) of the vector-valued random variable $(X,Y,Z,S,V)$ traces out a three-dimensional manifold in $\mathbb{R}^5$. (Thus, locally at any point $\omega\in\mathbb{R}^5$, there are two functions $f_\omega$ and $g_\omega$ for which $f_\omega(X(\psi),\ldots,V(\psi))=0$ and $g_\omega(X(\psi),\ldots,V(\psi))=0$ for points $\psi$ "near" $\omega$ and the derivatives of $f$ and $g$ evaluated at $\omega$ are linearly independent.) However--here's the kicker--for many probability measures on the blocks, subsets of the variables such as $(X,S,V)$ are dependent as random variables but functionally independent. 
Having been alerted by these potential ambiguities, let's hold up the Chi-squared goodness of fit test for examination , because (a) it's simple, (b) it's one of the common situations where people really do need to know about DF to get the p-value right and (c) it's often used incorrectly. Here's a brief synopsis of the least controversial application of this test: You have a collection of data values $(x_1, \ldots, x_n)$, considered as a sample of a population. You have estimated some parameters $\theta_1, \ldots, \theta_p$ of a distribution. For example, you estimated the mean $\theta_1$ and standard deviation $\theta_2 = \theta_p$ of a Normal distribution, hypothesizing that the population is normally distributed but not knowing (in advance of obtaining the data) what $\theta_1$ or $\theta_2$ might be. In advance, you created a set of $k$ "bins" for the data. (It may be problematic when the bins are determined by the data, even though this is often done.) Using these bins, the data are reduced to the set of counts within each bin. Anticipating what the true values of $(\theta)$ might be, you have arranged it so (hopefully) each bin will receive approximately the same count. (Equal-probability binning assures the chi-squared distribution really is a good approximation to the true distribution of the chi-squared statistic about to be described.) You have a lot of data--enough to assure that almost all bins ought to have counts of 5 or greater. (This, we hope, will enable the sampling distribution of the $\chi^2$ statistic to be approximated adequately by some $\chi^2$ distribution.) Using the parameter estimates, you can compute the expected count in each bin. The Chi-squared statistic is the sum of the ratios $$\frac{(\text{observed}-\text{expected})^2}{\text{expected}}.$$ This, many authorities tell us, should have (to a very close approximation) a Chi-squared distribution. But there's a whole family of such distributions. They are differentiated by a parameter $\nu$ often referred to as the "degrees of freedom." The standard reasoning about how to determine $\nu$ goes like this I have $k$ counts. That's $k$ pieces of data. But there are ( functional ) relationships among them. To start with, I know in advance that the sum of the counts must equal $n$. That's one relationship. I estimated two (or $p$, generally) parameters from the data. That's two (or $p$) additional relationships, giving $p+1$ total relationships. Presuming they (the parameters) are all ( functionally ) independent, that leaves only $k-p-1$ ( functionally ) independent "degrees of freedom": that's the value to use for $\nu$. The problem with this reasoning (which is the sort of calculation the quotations in the question are hinting at) is that it's wrong except when some special additional conditions hold. Moreover, those conditions have nothing to do with independence (functional or statistical), with numbers of "components" of the data, with the numbers of parameters, nor with anything else referred to in the original question. Let me show you with an example. (To make it as clear as possible, I'm using a small number of bins, but that's not essential.) Let's generate 20 independent and identically distributed (iid) standard Normal variates and estimate their mean and standard deviation with the usual formulas (mean = sum/count, etc .). To test goodness of fit, create four bins with cutpoints at the quartiles of a standard normal: -0.675, 0, +0.657, and use the bin counts to generate a Chi-squared statistic. 
Repeat as patience allows; I had time to do 10,000 repetitions. The standard wisdom about DF says we have 4 bins and 1+2 = 3 constraints, implying the distribution of these 10,000 Chi-squared statistics should follow a Chi-squared distribution with 1 DF. Here's the histogram: The dark blue line graphs the PDF of a $\chi^2(1)$ distribution--the one we thought would work--while the dark red line graphs that of a $\chi^2(2)$ distribution (which would be a good guess if someone were to tell you that $\nu=1$ is incorrect). Neither fits the data. You might expect the problem to be due to the small size of the data sets ($n$=20) or perhaps the small size of the number of bins. However, the problem persists even with very large datasets and larger numbers of bins: it is not merely a failure to reach an asymptotic approximation. Things went wrong because I violated two requirements of the Chi-squared test: You must use the Maximum Likelihood estimate of the parameters. (This requirement can, in practice, be slightly violated.) You must base that estimate on the counts, not on the actual data! (This is crucial .) The red histogram depicts the chi-squared statistics for 10,000 separate iterations, following these requirements. Sure enough, it visibly follows the $\chi^2(1)$ curve (with an acceptable amount of sampling error), as we had originally hoped. The point of this comparison--which I hope you have seen coming--is that the correct DF to use for computing the p-values depends on many things other than dimensions of manifolds, counts of functional relationships, or the geometry of Normal variates. There is a subtle, delicate interaction between certain functional dependencies, as found in mathematical relationships among quantities, and distributions of the data, their statistics, and the estimators formed from them. Accordingly, it cannot be the case that DF is adequately explainable in terms of the geometry of multivariate normal distributions, or in terms of functional independence, or as counts of parameters, or anything else of this nature. We are led to see, then, that "degrees of freedom" is merely a heuristic that suggests what the sampling distribution of a (t, Chi-squared, or F) statistic ought to be, but it is not dispositive. Belief that it is dispositive leads to egregious errors. (For instance, the top hit on Google when searching "chi squared goodness of fit" is a Web page from an Ivy League university that gets most of this completely wrong! In particular, a simulation based on its instructions shows that the chi-squared value it recommends as having 7 DF actually has 9 DF.) With this more nuanced understanding, it's worthwhile to re-read the Wikipedia article in question: in its details it gets things right, pointing out where the DF heuristic tends to work and where it is either an approximation or does not apply at all. A good account of the phenomenon illustrated here (unexpectedly high DF in Chi-squared GOF tests) appears in Volume II of Kendall & Stuart, 5th edition . I am grateful for the opportunity afforded by this question to lead me back to this wonderful text, which is full of such useful analyses. Edit (Jan 2017) Here is R code to produce the figure following "The standard wisdom about DF..." #
# Simulate data, one iteration per column of `x`.
#
n <- 20
n.sim <- 1e4
bins <- qnorm(seq(0, 1, 1/4))
x <- matrix(rnorm(n*n.sim), nrow=n)
#
# Compute statistics.
#
m <- colMeans(x)
s <- apply(sweep(x, 2, m), 2, sd)
counts <- apply(matrix(as.numeric(cut(x, bins)), nrow=n), 2, tabulate, nbins=4)
expectations <- mapply(function(m,s) n*diff(pnorm(bins, m, s)), m, s)
chisquared <- colSums((counts - expectations)^2 / expectations)
#
# Plot histograms of means, variances, and chi-squared stats. The first
# two confirm all is working as expected.
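# The third histogram shows the chi-squared statistics themselves; as discussed
# in the text, they track neither the chi-squared(1) nor the chi-squared(2) curve.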
#
mfrow <- par("mfrow")
par(mfrow=c(1,3))
red <- "#a04040" # Intended to show correct distributions
blue <- "#404090" # To show the putative chi-squared distribution
hist(m, freq=FALSE)
curve(dnorm(x, sd=1/sqrt(n)), add=TRUE, col=red, lwd=2)
hist(s^2, freq=FALSE)
curve(dchisq(x*(n-1), df=n-1)*(n-1), add=TRUE, col=red, lwd=2)
hist(chisquared, freq=FALSE, breaks=seq(0, ceiling(max(chisquared)), 1/4),
xlim=c(0, 13), ylim=c(0, 0.55),
col="#c0c0ff", border="#404040")
curve(ifelse(x <= 0, Inf, dchisq(x, df=2)), add=TRUE, col=red, lwd=2)
curve(ifelse(x <= 0, Inf, dchisq(x, df=1)), add=TRUE, col=blue, lwd=2)
par(mfrow=mfrow) | {
"source": [
"https://stats.stackexchange.com/questions/16921",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1005/"
]
} |
16,939 | Can someone please offer a nice succinct explanation why it is not a good idea to teach students that a p-value is the prob(their findings are due to [random] chance). My understanding is that a p-value is the prob(getting more extreme data | null hypothesis is true). My real interest is what is the harm of telling them it is the former (aside from the fact it is simply not so). | I have a different interpretation of the meaning of the wrong statement than @Karl does. I think that it is a statement about the data, rather than about the null. I understand it as asking for the probability of getting your estimate due to chance. I don't know what that means---it's not a well-specified claim. But I do understand what is likely meant by the probability of getting my estimate by chance given that the true estimate is equal to a particular value. For example, I can understand what it means to get a very large difference in average heights between men and women given that their average heights are actually the same. That's well specified. And that is what the p-value gives. What's missing in the wrong statement is the condition that the null is true. Now, we might object that this isn't statement perfect (the chance of getting an exact value for an estimator is 0, for example). But it's far better than the way that most would interpret a p-value. The key point that I say over and over again when I teach hypothesis testing is "Step one is to assume that the null hypothesis is true. Everything is calculated given this assumption." If people remember that, that's pretty good. | {
"source": [
"https://stats.stackexchange.com/questions/16939",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/6072/"
]
} |
17,109 | I want to measure the entropy/ information density/ pattern-likeness of a two-dimensional binary matrix. Let me show some pictures for clarification: This display should have a rather high entropy: A) This should have medium entropy: B) These pictures, finally, should all have near-zero-entropy: C) D) E) Is there some index that captures the entropy, resp. the "pattern-likeness" of these displays? Of course, each algorithm (e.g., compression algorithms; or the rotation algorithm proposed by ttnphns ) is sensitive to other features of the display. I am looking for an algorithm that tries to capture following properties: Rotational and axial symmetry The amount of clustering Repetitions Maybe more complicated, the algorith could be sensitive to properties of the psychological " Gestalt principle ", in particular: The law of proximity: The law of symmetry: Symmetrical images are perceived collectively, even in spite of distance: Displays with these properties should get assigned a "low entropy value"; displays with rather random / unstructured points should get assigned a "high entropy value". I am aware that most probably no single algorithm will capture all of these features; therefore suggestions for algorithms which address only some or even only a single feature are highly welcome as well. In particular, I am looking for concrete, existing algorithms or for specific, implementable ideas (and I will award the bounty according to these criteria). | There is a simple procedure that captures all the intuition, including the psychological and geometrical elements. It relies on using spatial proximity , which is the basis of our perception and provides an intrinsic way to capture what is only imperfectly measured by symmetries. To do this, we need to measure the "complexity" of these arrays at varying local scales. Although we have much flexibility to choose those scales and choose the sense in which we measure "proximity," it is simple enough and effective enough to use small square neighborhoods and to look at averages (or, equivalently, sums) within them. To this end, a sequence of arrays can be derived from any $m$ by $n$ array by forming moving neighborhood sums using $k=2$ by $2$ neighborhoods, then $3$ by $3$ , etc, up to $\min(n,m)$ by $\min(n,m)$ (although by then there are usually too few values to provide anything reliable). To see how this works, let's do the calculations for the arrays in the question, which I will call $a_1$ through $a_5$ , from top to bottom. Here are plots of the moving sums for $k=1,2,3,4$ ( $k=1$ is the original array, of course) applied to $a_1$ . Clockwise from the upper left, $k$ equals $1$ , $2$ , $4$ , and $3$ . The arrays are $5$ by $5$ , then $4$ by $4$ , $2$ by $2$ , and $3$ by $3$ , respectively. They all look sort of "random." Let's measure this randomness with their base-2 entropy. When an array $a$ contains various distinct values with proportions $p_1,$ $p_2,$ etc. , its entropy (by definition) is $$H(a) = -p_1\log_2(p_1) - p_2\log_2(p_2) - \cdots$$ For instance, array $a_1$ has ten black cells and 15 white cells, whence they are in proportions of $10/25$ and $15/25,$ respectively. Its entropy therefore is $$H(a_1) = -(10/25)\log_2(10/25) - (15/25)\log_2(15/25) \approx 0.970951.$$ For $a_1$ , the sequence of these entropies for $k=1,2,3,4$ is $(0.97, 0.99, 0.92, 1.5)$ . Let's call this the "profile" of $a_1$ . Here, in contrast, are the moving sums of $a_4$ : For $k=2, 3, 4$ there is little variation, whence low entropy. 
The profile is $(1.00, 0, 0.99, 0)$ . Its values are consistently close to or lower than the values for $a_1$ , confirming the intuitive sense that there is a strong "pattern" present in $a_4$ . We need a frame of reference for interpreting these profiles. A perfectly random array of binary values will have just about half its values equal to $0$ and the other half equal to $1$ , for an entropy close to $1$ . The moving sums within $k$ by $k$ neighborhoods will tend to have binomial distributions, giving them predictable entropies (at least for large arrays) that can be approximated by $1 + \log_2(k)$ : These results are borne out by simulation with arrays up to $m=n=100$ . However, they break down for small arrays (such as the $5$ by $5$ arrays here) due to correlation among neighboring windows (once the window size is about half the dimensions of the array) and due to the small amount of data. Here is a reference profile of random $5$ by $5$ arrays generated by simulation along with plots of some actual profiles: In this plot the reference profile is solid blue. The array profiles correspond to $a_1$ : red, $a_2$ : gold, $a_3$ : green, $a_4$ : light blue. (Including $a_5$ would obscure the picture because it is close to the profile of $a_4$ .) Overall the profiles correspond to the ordering in the question: they get lower at most values of $k$ as the apparent ordering increases. The exception is $a_1$ : until the end, for $k=4$ , its moving sums tend to have among the lowest entropies. This reveals a surprising regularity: every $2$ by $2$ neighborhood in $a_1$ has exactly $1$ or $2$ black squares, never any more or less. It's much less "random" than one might think. (This is partly due to the loss of information that accompanies summing the values in each neighborhood, a procedure that condenses $2^{k^2}$ possible neighborhood configurations into just $k^2+1$ different possible sums. If we wanted to account specifically for the clustering and orientation within each neighborhood, then instead of using moving sums we would use moving concatenations. That is, each $k$ by $k$ neighborhood has $2^{k^2}$ possible different configurations; by distinguishing them all, we can obtain a finer measure of entropy. I suspect that such a measure would elevate the profile of $a_1$ compared to the other images.) This technique of creating a profile of entropies over a controlled range of scales, by summing (or concatenating or otherwise combining) values within moving neighborhoods, has been used in analysis of images. It is a two-dimensional generalization of the well-known idea of analyzing text first as a series of letters, then as a series of digraphs (two-letter sequences), then as trigraphs, etc. It also has some evident relations to fractal analysis (which explores properties of the image at finer and finer scales). If we take some care to use a block moving sum or block concatenation (so there are no overlaps between windows), one can derive simple mathematical relationships among the successive entropies; however, I suspect that using the moving window approach may be more powerful and is a little less arbitrary (because it does not depend on precisely how the image is divided into blocks). Various extensions are possible. For instance, for a rotationally invariant profile, use circular neighborhoods rather than square ones. Everything generalizes beyond binary arrays, of course. With sufficiently large arrays one can even compute locally varying entropy profiles to detect non-stationarity. 
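For readers who want to experiment with their own arrays, here is a minimal R sketch of the profile computation; the function and argument names are mine, not from any package, and no attempt is made at efficiency. entropy <- function(v) {            # base-2 entropy of the values in v
  p <- table(v) / length(v)
  -sum(p * log2(p))
}
moving_sums <- function(a, k) {     # all k-by-k moving-window sums of matrix a
  m <- nrow(a); n <- ncol(a)
  sapply(1:(n - k + 1), function(j)
    sapply(1:(m - k + 1), function(i) sum(a[i:(i + k - 1), j:(j + k - 1)])))
}
entropy_profile <- function(a, kmax = min(dim(a))) {
  sapply(1:kmax, function(k) entropy(as.vector(moving_sums(a, k))))
}
# Example: a random 5-by-5 binary array
set.seed(1)
a <- matrix(rbinom(25, 1, 0.4), 5, 5)
round(entropy_profile(a), 2)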
If a single number is desired, instead of an entire profile, choose the scale at which the spatial randomness (or lack thereof) is of interest. In these examples, that scale would correspond best to a $3$ by $3$ or $4$ by $4$ moving neighborhood, because for their patterning they all rely on groupings that span three to five cells (and a $5$ by $5$ neighborhood just averages away all variation in the array and so is useless). At the latter scale, the entropies for $a_1$ through $a_5$ are $1.50$ , $0.81$ , $0$ , $0$ , and $0$ ; the expected entropy at this scale (for a uniformly random array) is $1.34$ . This justifies the sense that $a_1$ "should have rather high entropy." To distinguish $a_3$ , $a_4$ , and $a_5$ , which are tied with $0$ entropy at this scale, look at the next finer resolution ( $3$ by $3$ neighborhoods): their entropies are $1.39$ , $0.99$ , $0.92$ , respectively (whereas a random grid is expected to have a value of $1.77$ ). By these measures, the original question puts the arrays in exactly the right order. | {
"source": [
"https://stats.stackexchange.com/questions/17109",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/6082/"
]
} |
17,119 | I searched many websites to know what exactly lift will do? The results that I found all were about using it in applications not itself. I know about the support and confidence function. From Wikipedia, in data mining, lift is a measure of the performance of a model at predicting or classifying cases, measuring against a random choice model. But how? Confidence*support is the value of lift I searched another formulas too but I can't understand why the lift charts are important in accuracy of predicted values I mean I want to know what policy and reason is behind lift? | I'll give an example of how "lift" is useful... Imagine you are running a direct mail campaign where you mail customers an offer in the hopes they respond. Historical data shows that when you mail your customer base completely at random about 8% of them respond to the mailing (i.e. they come in and shop with the offer). So, if you mail 1,000 customers you can expect 80 responders. Now, you decide to fit a logistic regression model to your historical data to find patterns that are predictive of whether a customer is likely to respond to a mailing. Using the logistic regression model each customer is assigned a probability of responding and you can assess the accuracy because you know whether they actually responded. Once each customer is assigned their probability, you rank them from highest to lowest scoring customer. Then you could generate some "lift" graphics like these: Ignore the top chart for now. The bottom chart is saying that after we sort the customers based on their probability of responding (high to low), and then break them up into ten equal bins, the response rate in bin #1 (the top 10% of customers) is 29% vs 8% of random customers, for a lift of 29/8 = 3.63. By the time we get to scored customers in the 4th bin, we have captured so many the previous three that the response rate is lower than what we would expect mailing people at random. Looking at the top chart now, what this says is that if we use the probability scores on customers we can get 60% of the total responders we'd get mailing randomly by only mailing the top 30% of scored customers. That is, using the model we can get 60% of the expected profit for 30% of the mail cost by only mailing the top 30% of scored customers, and this is what lift really refers to. | {
"source": [
"https://stats.stackexchange.com/questions/17119",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/6885/"
]
} |
17,149 | What are some practical objections to the use of Bayesian statistical methods in any context? No, I don't mean the usual carping about choice of prior. I'll be delighted if this gets no answers. | I'm going to give you an answer. Four drawbacks actually. Note that none of these are actually objections that should drive one all the way to frequentist analysis, but there are cons to going with a Bayesian framework: Choice of prior. This is the usual carping for a reason, though in my case it's not the usual "priors are subjective!" but that coming up with a prior that's well reasoned and actually represents your best attempt at summarizing a prior is a great deal of work in many cases. An entire aim of my dissertation, for example, can be summed up as "estimate priors". It's computationally intensive. Especially for models involving many variables. For a large dataset with many variables being estimated, it may very well be prohibitively computationally intensive, especially in certain circumstances where the data cannot readily be thrown onto a cluster or the like. Some of the ways to resolve this, like augmented data rather than MCMC, are somewhat theoretically challenging, at least to me. Posterior distributions are somewhat more difficult to incorporate into a meta-analysis, unless a frequentist, parametric description of the distribution has been provided. Depending on what journal the analysis is intended for, either the use of Bayes generally, or your choice of priors, gives your paper slightly more points where a reviewer can dig into it. Some of these are reasonable reviewer objections, but some just stem from the nature of Bayes and how familiar people in some fields are with it. None of these things should stop you. Indeed, none of these things have stopped me, and hopefully doing Bayesian analysis will help address at least number 4. | {
"source": [
"https://stats.stackexchange.com/questions/17149",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/-1/"
]
} |
17,251 | I'm looking for a non-technical definition of the lasso and what it is used for. | The LASSO (Least Absolute Shrinkage and Selection Operator) is a regression method that involves penalizing the absolute size of the regression coefficients. By penalizing (or equivalently constraining the sum of the absolute values of the estimates) you end up in a situation where some of the parameter estimates may be exactly zero. The larger the penalty applied, the further estimates are shrunk towards zero. This is convenient when we want some automatic feature/variable selection, or when dealing with highly correlated predictors, where standard regression will usually have regression coefficients that are 'too large'. https://web.stanford.edu/~hastie/ElemStatLearn/ (Free download) has a good description of the LASSO and related methods. | {
"source": [
"https://stats.stackexchange.com/questions/17251",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/6927/"
]
} |
17,336 | Here is the article that motivated this question: Does impatience make us fat? I liked this article, and it nicely demonstrates the concept of “controlling for other variables” (IQ, career, income, age, etc) in order to best isolate the true relationship between just the 2 variables in question. Can you explain to me how you actually control for variables on a typical data set? E.g., if you have 2 people with the same impatience level and BMI, but different incomes, how do you treat these data? Do you categorize them into different subgroups that do have similar income, patience, and BMI? But, eventually there are dozens of variables to control for (IQ, career, income, age, etc) How do you then aggregate these (potentially) 100’s of subgroups? In fact, I have a feeling this approach is barking up the wrong tree, now that I’ve verbalized it. Thanks for shedding any light on something I've meant to get to the bottom of for a few years now...! | There are many ways to control for variables. The easiest, and one you came up with, is to stratify your data so you have sub-groups with similar characteristics - there are then methods to pool those results together to get a single "answer". This works if you have a very small number of variables you want to control for, but as you've rightly discovered, this rapidly falls apart as you split your data into smaller and smaller chunks. A more common approach is to include the variables you want to control for in a regression model. For example, if you have a regression model that can be conceptually described as: BMI = Impatience + Race + Gender + Socioeconomic Status + IQ The estimate you will get for Impatience will be the effect of Impatience within levels of the other covariates - regression allows you to essentially smooth over places where you don't have much data (the problem with the stratification approach), though this should be done with caution. There are yet more sophisticated ways of controlling for other variables, but odds are when someone says "controlled for other variables", they mean they were included in a regression model. Alright, you've asked for an example you can work on, to see how this goes. I'll walk you through it step by step. All you need is a copy of R installed. First, we need some data. Cut and paste the following chunks of code into R. Keep in mind this is a contrived example I made up on the spot, but it shows the process. covariate <- sample(0:1, 100, replace=TRUE)
exposure <- runif(100,0,1)+(0.3*covariate)
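# Note: `covariate` feeds into both `exposure` (the 0.3 term above) and
# `outcome` (the 0.25 term below); that is what makes it a confounder here.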
outcome <- 2.0+(0.5*exposure)+(0.25*covariate) That's your data. Note that we already know the relationship between the outcome, the exposure, and the covariate - that's the point of many simulation studies (of which this is an extremely basic example. You start with a structure you know, and you make sure your method can get you the right answer. Now then, onto the regression model. Type the following: lm(outcome~exposure) Did you get an Intercept = 2.0 and an exposure = 0.6766? Or something close to it, given there will be some random variation in the data? Good - this answer is wrong. We know it's wrong. Why is it wrong? We have failed to control for a variable that effects the outcome and the exposure. It's a binary variable, make it anything you please - gender, smoker/non-smoker, etc. Now run this model: lm(outcome~exposure+covariate) This time you should get coefficients of Intercept = 2.00, exposure = 0.50 and a covariate of 0.25. This, as we know, is the right answer. You've controlled for other variables. Now, what happens when we don't know if we've taken care of all of the variables that we need to (we never really do)? This is called residual confounding , and its a concern in most observational studies - that we have controlled imperfectly, and our answer, while close to right, isn't exact. Does that help more? | {
"source": [
"https://stats.stackexchange.com/questions/17336",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/6967/"
]
} |
17,345 | I don't really understand the difference between exponential and geometric distribution. | Did you try looking at Wikipedia ? The exponential distribution may be viewed as a continuous counterpart of the geometric distribution, which describes the number of Bernoulli trials necessary for a discrete process to change state. In contrast, the exponential distribution describes the time for a continuous process to change state. | {
"source": [
"https://stats.stackexchange.com/questions/17345",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/6296/"
]
} |
17,368 | You probably know the trick in the movie The Prestige : [MOVIE SPOILER] A magician has found an impressive magic trick: he goes into a machine, close the door, and then disappears and reappears in the other side of the room. But the machine is not perfect : instead of just teleporting him, it duplicates him. The magician stays where he is, and a copy is created at the other side of the room. Then, the magician in the machine falls discreetly in a water tank under the floor and is drowned. Edit: The probability of the new copy of the magician being drowned is 1/2 (in other words, the new copy has 1/2 chances of being drowned, and 1/2 chances of popping into the room). Also, the water tank never fails and the chances are 1 that the magician dropping in the tank dies. So the magician doesn't really like doing this trick, because "you never know where you are going to be, on the other side of the room or drowned". Now, the paradox is the following : Imagine the magician does the trick 100 times. What are his chances of surviving ? Edit, additional question: What are the chances of the magician of keeping his physical brain and not having a new one ? Quick analysis: One one hand, there is one magician alive, and 100 drowned magicians, so his chances are 1 out of 100. On the other hand, each time he does the trick, he has 1/2 chances of staying alive, so his chances are $(1/2)^{100}=1/(2^{100})$ of staying alive. What is the right response and why ? | This mistake was put in evidence in written conversations among Fermat, Pascal, and eminent French mathematicians in 1654 when the former two were considering the "problem of points." A simple example is this: Two people gamble on the outcome of two flips of a fair coin. Player A wins if either flip is heads; otherwise, Player B wins. What are player B's chances of winning? The false argument begins by examining the set of possible outcomes, which we can enumerate: H : The first flip is heads. Player A wins. TH : Only the second flip is heads. Player A wins. TT : No flip is heads. Player B wins. Because Player A has two chances of winning and B has only one chance, the odds in favor of B are (according to this argument) 1:2; that is, B's chances are 1/3. Among those defending this argument were Gilles Personne de Roberval , a founding member of the French Academy of Sciences. The mistake is plain to us today, because we have been educated by people who learned from this discussion. Fermat argued (correctly, but not very convincingly) that case (1) really has to be considered two cases, as if the game had been played out through both flips no matter what. Invoking a hypothetical sequence of flips that wasn't actually played out makes many people uneasy. Nowadays we might find it more convincing just to work out the probabilities of the individual cases: the chance of (1) is 1/2 and the chances of (2) and (3) are each 1/4, whence the chance that A wins equals 1/2 + 1/4 = 3/4 and the chance that B wins is 1/4. These calculations rely on axioms of probability, which were finally settled early in the 20th century, but were essentially established by the fall of 1654 by Pascal and Fermat and popularized throughout Europe three years later by Christian Huyghens in his brief treatise on probability (the first ever published), De ratiociniis in ludo aleae (calculating in games of chance). The present question can be modeled as 100 coin flips, with heads representing death and tails representing survival. 
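A quick sanity check of both calculations in R (purely illustrative): set.seed(1)
flips <- matrix(sample(c("H", "T"), 2e5, replace = TRUE), ncol = 2)
mean(flips[, 1] == "T" & flips[, 2] == "T")  # proportion of games B wins: about 1/4
0.5^100                                      # chance of surviving 100 trips: about 7.9e-31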
The argument for "1 in 100" (which really should be 1/101, by the way) has exactly the same flaw. | {
"source": [
"https://stats.stackexchange.com/questions/17368",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/6965/"
]
} |
17,371 | I was wondering, is it possible to have a very strong correlation coefficient (say .9 or higher), with a high p value (say .25 or higher)? Here's an example of a low correlation coefficient, with a high p value: set.seed(10)
y <- rnorm(100)
x <- rnorm(100)+.1*y
cor.test(x,y) cor = 0.03908927, p=0.6994 High correlation coefficient, low p value: y <- rnorm(100)
x <- rnorm(100)+2*y
cor.test(x,y) cor = 0.8807809, p=2.2e-16 Low correlation coefficient, low p value: y <- rnorm(100000)
x <- rnorm(100000)+.1*y
cor.test(x,y) cor = 0.1035018, p=2.2e-16 High correlation coefficient, high p value:
??? | The Bottom Line The sample correlation coefficient needed to reject the hypothesis that the true (Pearson) correlation coefficient is zero becomes small quite fast as the sample size increases. So, in general, no, you cannot simultaneously have a large (in magnitude) correlation coefficient and a simultaneously large $p$-value . The Top Line (Details) The test used for the Pearson correlation coefficient in the $R$ function cor.test is a very slightly modified version of the method I discuss below. Suppose $(X_1,Y_1), (X_2,Y_2),\ldots,(X_n,Y_n)$ are iid bivariate normal random vectors with correlation $\rho$. We want to test the null hypothesis that $\rho = 0$ versus $\rho \neq 0$. Let $r$ be the sample correlation coefficient. Using standard linear-regression theory, it is not hard to show that the test statistic,
$$
T = \frac{r \sqrt{n-2}}{\sqrt{(1-r^2)}}
$$
has a $t_{n-2}$ distribution under the null hypothesis. For large $n$, the $t_{n-2}$ distribution approaches the standard normal. Hence $T^2$ is approximately chi-squared distributed with one degree of freedom. (Under the assumptions we've made, $T^2 \sim F_{1,n-2}$ in actuality, but the $\chi^2_1$ approximation makes clearer what is going on, I think.) So,
$$
\mathbb P\left(\frac{r^2}{1-r^2} (n-2) \geq q_{1-\alpha} \right) \approx \alpha \>,
$$
where $q_{1-\alpha}$ is the $(1-\alpha)$ quantile of a chi-squared distribution with one degree of freedom. Now, note that $r^2/(1-r^2)$ is increasing as $r^2$ increases. Rearranging the quantity in the probability statement, we have that for all
$$
|r| \geq \frac{1}{\sqrt{1+(n-2)/q_{1-\alpha}}}
$$
we'll get a rejection of the null hypothesis at level $\alpha$. Clearly the right-hand side decreases with $n$. A plot Here is a plot of the rejection region of $|r|$ as a function of the sample size. So, for example, when the sample size exceeds 100, the (absolute) correlation need only be about 0.2 to reject the null at the $\alpha = 0.05$ level. A simulation We can do a simple simulation to generate a pair of zero-mean vectors with an exact correlation coefficient. Below is the code. From this we can look at the output of cor.test . k <- 100
n <- 4*k
# Correlation that gives an approximate p-value of 0.05
# Change 0.05 to some other desired p-value to get a different curve
pval <- 0.05
qval <- qchisq(pval,1,lower.tail=F)
rho <- 1/sqrt(1+(n-2)/qval)
# Zero-mean orthogonal basis vectors
b1 <- rep(c(1,-1),n/2)
b2 <- rep(c(1,1,-1,-1),n/4)
# Construct x and y vectors with mean zero and an empirical
# correlation of *exactly* rho
x <- b1
y <- rho * b1 + sqrt(1-rho^2) * b2
# Do test
ctst <- cor.test(x,y) As requested in the comments, here is the code to reproduce the plot, which can be run immediately following the code above (and uses some of the variables defined there). png("cortest.png", height=600, width=600)
m <- 3:1000
yy <- 1/sqrt(1+(m-2)/qval)
plot(m, yy, type="l", lwd=3, ylim=c(0,1),
xlab="sample size", ylab="correlation")
polygon( c(m[1],m,rev(m)[1]), c(1,yy,1), col="lightblue2", border=NA)
lines(m,yy,lwd=2)
text(500, 0.5, "p < 0.05", cex=1.5 )
dev.off() | {
"source": [
"https://stats.stackexchange.com/questions/17371",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/2817/"
]
} |
17,391 | How would you go about explaining i.i.d (independent and identically distributed) to non-technical people? | It means "Independent and identically distributed". A good example is a succession of throws of a fair coin: The coin has no memory, so all the throws are "independent". And every throw is 50:50 (heads:tails), so the coin is and stays fair - the distribution from which every throw is drawn, so to speak, is and stays the same: "identically distributed". A good starting point would be the Wikipedia page . ::EDIT:: Follow this link to further explore the concept. | {
"source": [
"https://stats.stackexchange.com/questions/17391",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/333/"
]
} |
17,537 | What is the cleanest, easiest way to explain someone the concept of variance? What does it intuitively mean? If one is to explain this to their child how would one go about it? It's a concept that I have difficulty in articulating - especially when relating variance to risk. I understand it mathematically and can explain it that way too. But when explaining real world phenomena how do you make one understand variance and it's applicability in the 'real world', so to speak. Let's say we are simulating an investment in a stock using random numbers (rolling a die or using an excel sheet, doesn't matter). We get some 'return on investment' by associating each instance of the random variable to 'some change' in the return. Eg.: Rolling a 1 implies a change of 0.8 per \$1 in investment, a 5 a change of 1.1 per \$1 and so on. Now if this simulation is run for about 50 times (or 20 or 100) we will get some values and the final value of the investment. So what does 'variance' actually tell us if we were to calculate it from the above data set? What does one "see" - If the variance turns out to be 1.7654 or 0.88765 or 5.2342 what does this even mean? What did/can I observe about this investment?? What conclusions can I draw - in lay man terms. Please feel free to augment the question with that for standard deviation too! Although I feel it's 'easier' to understand, but something that would contribute to making it also 'intuitively' clear would be greatly appreciated! | I would probably use a similar analogy to the one I've learned to give 'laypeople' when introducing the concept of bias and variance: the dartboard analogy. See below: The particular image above is from Encyclopedia of Machine Learning , and the reference within the image is Moore and McCabe's "Introduction to the Practice of Statistics" . EDIT: Here's an exercise that I believe is pretty intuitive: Take a deck of cards (out of the box), and drop the deck from a height of about 1 foot. Ask your child to pick up the cards and return them to you. Then, instead of dropping the deck, toss it as high as you can and let the cards fall to the ground. Ask your child to pick up the cards and return them to you. The relative fun they have during the two trials should give them an intuitive feel for variance :) | {
"source": [
"https://stats.stackexchange.com/questions/17537",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/4426/"
]
} |
17,565 | How do you choose a model from among different models chosen by different methods (e.g. backwards or forwards selection)? Also what is a parsimonious model? | A parsimonious model is a model that accomplishes a desired level of explanation or prediction with as few predictor variables as possible. For model evaluation there are different methods depending on what you want to know. There are generally two ways of evaluating a model: Based on predictions and based on goodness of fit on the current data. In the first case you want to know if your model adequately predicts new data, in the second you want to know whether your model adequatelly describes the relations in your current data. Those are two different things. Evaluating based on predictions The best way to evaluate models used for prediction, is crossvalidation. Very briefly, you cut your dataset in eg. 10 different pieces, use 9 of them to build the model and predict the outcomes for the tenth dataset. A simple mean squared difference between the observed and predicted values give you a measure for the prediction accuracy. As you repeat this ten times, you calculate the mean squared difference over all ten iterations to come to a general value with a standard deviation. This allows you again to compare two models on their prediction accuracy using standard statistical techniques (t-test or ANOVA). A variant on the theme is the PRESS criterion (Prediction Sum of Squares), defined as $\displaystyle\sum^{n}_{i=1} \left(Y_i - \hat{Y}_{i(-i)}\right)^2$ Where $\hat{Y}_{i(-i)}$ is the predicted value for the ith observation using a model based on all observations minus the ith value. This criterion is especially useful if you don't have much data. In that case, splitting your data like in the crossvalidation approach might result in subsets of data that are too small for a stable fitting. Evaluating based on goodness of fit Let me first state that this really differs depending on the model framework you use. For example, a likelihood-ratio test can work for Generalized Additive Mixed Models when using the classic gaussian for the errors, but is meaningless in the case of the binomial variant. First you have the more intuitive methods of comparing models. You can use the Aikake Information Criterion (AIC) or the Bayesian Information Criterion (BIC) to compare the goodness of fit for two models. But nothing tells you that both models really differ. Another one is the Mallow's Cp criterion. This essentially checks for possible bias in your model, by comparing the model with all possible submodels (or a careful selection of them). See also http://www.public.iastate.edu/~mervyn/stat401/Other/mallows.pdf If the models you want to compare are nested models (i.e. all predictors and interactions of the more parsimonious model occur also in the more complete model), you can use a formal comparison in the form of a likelihood ratio test (or a Chi-squared or an F test in the appropriate cases, eg when comparing simple linear models fitted using least squares). This test essentially controls whether the extra predictors or interactions really improve the model. This criterion is often used in forward or backward stepwise methods. About automatic model selection You have advocates and you have enemies of this method. I personally am not in favor of automatic model selection, especially not when it's about describing models, and this for a number of reasons: In every model you should have checked that you deal adequately with confounding. 
In fact, many datasets have variables that should never be put in a model at the same time. Often people forget to control for that. Automatic model selection is a method to create hypotheses, not to test them. All inference based on models originating from Automatic model selection is invalid. No way to change that. I've seen many cases where starting at a different starting point, a stepwise selection returned a completely different model. These methods are far from stable. It's also difficult to incorporate a decent rule, as the statistical tests to compare two models require the models to be nested. If you use eg AIC, BIC or PRESS, the cutoff for when a difference is really important is arbitrary chosen. So basically, I see more in comparing a select set of models chosen beforehand. If you don't care about statistical evaluation of the model and hypothesis testing, you can use crossvalidation to compare the predictive accuracy of your models. But if you're really after variable selection for predictive purposes, you might want to take a look to other methods for variable selection, like Support Vector Machines, Neural Networks, Random Forests and the likes. These are far more often used in eg medicine to find out which of the thousand measured proteins can adequately predict whether you have cancer or not. Just to give a (famous) example : https://www.nature.com/articles/nm0601_673 https://doi.org/10.1023/A:1012487302797 All these methods have regression variants for continuous data as well. | {
"source": [
"https://stats.stackexchange.com/questions/17565",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/7066/"
]
} |
17,581 | Two common approaches for selecting correlated variables are significance tests and cross validation. What problem does each try to solve and when would I prefer one over the other? | First, lets be explicit and put the question into the context of multiple linear regression where we regress a response variable, $y$, on several different variables $x_1, \ldots, x_p$ (correlated or not), with parameter vector $\beta = (\beta_0, \beta_1, \ldots, \beta_p)$ and regression function $$f(x_1, \ldots, x_p) = \beta_0 + \beta_1 x_1 + \ldots + \beta_p x_p,$$
which could be a model of the mean of the response variable for a given observation of $x_1, \ldots, x_p$. The question is how to select a subset of the $\beta_i$'s to be non-zero, and, in particular, a comparison of significance testing versus cross validation . To be crystal clear about the terminology, significance testing is a general concept, which is carried out differently in different contexts. It depends, for instance, on the choice of a test statistic. Cross validation is really an algorithm for estimation of the expected generalization error , which is the important general concept, and which depends on the choice of a loss function. The expected generalization error is a little technical to define formally, but in words it is the expected loss of a fitted model when used for prediction on an independent data set , where expectation is over the data used for the estimation as well as the independent data set used for prediction. To make a reasonable comparison lets focus on whether $\beta_1$ could be taken equal to 0 or not. For significance testing of the null hypothesis that $\beta_1 = 0$ the main procedure is to compute a $p$-value, which is the probability that the chosen test-statistic is larger than observed for our data set under the null hypothesis , that is, when assuming that $\beta_1 = 0$. The interpretation is that a small $p$-value is evidence against the null hypothesis. There are commonly used rules for what "small" means in an absolute sense such as the famous 0.05 or 0.01 significance levels. For the expected generalization error we compute, perhaps using cross-validation, an estimate of the expected generalization error under the assumption that $\beta_1 = 0$. This quantity tells us how well models fitted by the method we use, and with $\beta_1 = 0$, will perform on average when used for prediction on independent data. A large expected generalization error is bad, but there are no rules in terms of its absolute value on how large it needs to be to be bad. We will have to estimate the expected generalization error for the model where $\beta_1$ is allowed to be different from 0 as well, and then we can compare the two estimated errors. Whichever is the smallest corresponds to the model we choose. Using significance testing we are not directly concerned with the "performance" of the model under the null hypothesis versus other models, but we are concerned with documenting that the null is wrong. This makes most sense (to me) in a confirmatory setup where the main objective is to confirm and document an a priory well specified scientific hypothesis, which can be formulated as $\beta_1 \neq 0$. The expected generalization error is, on the other hand, only concerned with average "performance" in terms of expected prediction loss, and concluding that it is best to allow $\beta_1$ to be different from 0 in terms of prediction is not an attempt to document that $\beta_1$ is "really" different from 0 $-$ whatever that means. I have personally never worked on a problem where I formally needed significance testing, yet $p$-values find their way into my work and do provide sensible guides and first impressions for variable selection. I am, however, mostly using penalization methods like lasso in combination with the generalization error for any formal model selection, and I am slowly trying to suppress my inclination to even compute $p$-values. 
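To make the contrast concrete, here is a minimal R sketch on simulated data; `cv_mse` is a hand-rolled helper rather than a standard function, and the particular numbers are purely illustrative. set.seed(42)
n <- 200
x1 <- rnorm(n); x2 <- rnorm(n)
y  <- 1 + 0.2 * x1 + 0.5 * x2 + rnorm(n)
d  <- data.frame(y, x1, x2)
# Route 1: significance testing, i.e. the p-value for H0: beta_1 = 0
summary(lm(y ~ x1 + x2, data = d))$coefficients["x1", ]
# Route 2: estimated generalization error, with and without x1 (10-fold CV)
cv_mse <- function(formula, data, K = 10) {
  folds <- sample(rep(1:K, length.out = nrow(data)))
  mean(sapply(1:K, function(k) {
    fit <- lm(formula, data = data[folds != k, ])
    mean((data$y[folds == k] - predict(fit, data[folds == k, ]))^2)
  }))
}
c(with_x1 = cv_mse(y ~ x1 + x2, d), without_x1 = cv_mse(y ~ x2, d))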
For exploratory analysis I see no argument in favor of significance testing and $p$-values, and I will definitely recommend focusing on a concept like expected generalization error for variable selection. In other contexts where one might consider using a $p$-value for documenting that $\beta_1$ is not 0, I would say that it is almost always a better idea to report an estimate of $\beta_1$ and a confidence interval instead. | {
"source": [
"https://stats.stackexchange.com/questions/17581",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/6961/"
]
} |
17,595 | I KNOW what moments are and how to calculate them and how to use the moment generating function for getting higher order moments. Yes, I know the math. Now that I need to get my statistics knowledge lubricated for work, I thought I might as well ask this question – it's been nagging me for about a few years and back in college no professor knew the answer or would just dismiss the question (honestly). So what does the word "moment" mean in this case? Why this choice of word? It doesn't sound intuitive to me (or I never heard it that way back in college :) Come to think of it I am equally curious with its usage in "moment of inertia" ;) but let's not focus on that for now. So what does a "moment" of a distribution mean and what does it seek to do and why THAT word! :) Why does any one care about moments? At this moment I am feeling otherwise about that moment ;) PS: Yes, I've probably asked a similar question on variance but I do value intuitive understanding over 'look in the book to find out' :) | According to the paper "First (?) Occurrence of Common Terms in Mathematical Statistics" by H.A. David, the first use of the word 'moment' in this situation was in a 1893 letter to Nature by Karl Pearson entitled "Asymmetrical Frequency Curves" . Neyman's 1938 Biometrika paper "A Historical Note on Karl Pearson's Deduction of the Moments of the Binomial" gives a good synopsis of the letter and Pearson's subsequent work on moments of the binomial distribution and the method of moments. It's a really good read. Hopefully you have access JSTOR for I don't have the time now to give a good summary of the paper (though I will this weekend). Though I will mention one piece that may give insight as to why the term 'moment' was used. From Neyman's paper: It [Pearson's memoir] deals primarily with methods of approximating
continuous frequency curves by means of some processes involving the
calculation of easy formulae. One of these formulae considered was the
"point-binomial" or the "binomial with loaded ordinates". The formula differs from what to-day we call a binomial, viz. (4), only by a factor
$\alpha$, representing the area under the continuous curve which it is desired
to fit. This is what eventually led to the 'method of moments.' Neyman goes over Pearson's derivation of the binomial moments in the above paper. And from Pearson's letter: We shall now proceed to find the first four moments of the system of
rectangles round GN. If the inertia of each rectangle might be considered
as concentrated along its mid vertical, we should have for the $s^{\text{th}}$ moment
round NG, writing $d = c(1 + nq)$. This hints at the fact that Pearson used the term 'moment' as an allusion to 'moment of inertia,' a term common in physics. Here's a scan of most of Pearson's Nature letter: You can view the entire article on page 615 here . | {
"source": [
"https://stats.stackexchange.com/questions/17595",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/4426/"
]
} |
17,773 | For example, I have historical loss data and I am calculating extreme quantiles (Value-at-Risk or Probable Maximum Loss). The results obtained is for estimating the loss or predicting them? Where can one draw the line? I am confused. | "Prediction" and "estimation" indeed are sometimes used interchangeably in non-technical writing and they seem to function similarly, but there is a sharp distinction between them in the standard model of a statistical problem. An estimator uses data to guess at a parameter while a predictor uses the data to guess at some random value that is not part of the dataset. For those who are unfamiliar with what "parameter" and "random value" mean in statistics, the following provides a detailed explanation. In this standard model, data are assumed to constitute a (possibly multivariate) observation $\mathbf{x}$ of a random variable $X$ whose distribution is known only to lie within a definite set of possible distributions, the "states of nature". An estimator $t$ is a mathematical procedure that assigns to each possible value of $\mathbf{x}$ some property $t(\mathbf{x})$ of a state of nature $\theta$, such as its mean $\mu(\theta)$. Thus an estimate is a guess about the true state of nature. We can tell how good an estimate is by comparing $t(\mathbf{x})$ to $\mu(\theta)$. A predictor $p(\mathbf{x})$ concerns the independent observation of another random variable $Z$ whose distribution is related to the true state of nature. A prediction is a guess about another random value. We can tell how good a particular prediction is only by comparing $p(\mathbf{x})$ to the value realized by $Z$. We hope that on average the agreement will be good (in the sense of averaging over all possible outcomes $\mathbf{x}$ and simultaneously over all possible values of $Z$). Ordinary least squares affords the standard example. The data consist of pairs $(x_i,y_i)$ associating values $y_i$ of the dependent variable to values $x_i$ of the independent variable. The state of nature is specified by three parameters $\alpha$, $\beta$, and $\sigma$: it says that each $y_i$ is like an independent draw from a normal distribution with mean $\alpha + \beta x_i$ and standard deviation $\sigma$. $\alpha$, $\beta$, and $\sigma$ are parameters (numbers) believed to be fixed and unvarying. Interest focuses on $\alpha$ (the intercept) and $\beta$ (the slope). The OLS estimate, written $(\hat{\alpha}, \hat{\beta})$, is good in the sense that $\hat{\alpha}$ tends to be close to $\alpha$ and $\hat{\beta}$ tends to be close to $\beta$, no matter what the true (but unknown) values of $\alpha$ and $\beta$ might be . OLS prediction consists of observing a new value $Z = Y(x)$ of the dependent variable associated with some value $x$ of the independent variable. $x$ might or might not be among the $x_i$ in the dataset; that is immaterial. One intuitively good prediction is that this new value is likely to be close to $\hat{\alpha} + \hat{\beta}x$. Better predictions say just how close the new value might be (they are called prediction intervals ). They account for the fact that $\hat{\alpha}$ and $\hat{\beta}$ are uncertain (because they depend mathematically on the random values $(y_i)$), that $\sigma$ is not known for certain (and therefore has to be estimated), as well as the assumption that $Y(x)$ has a normal distribution with standard deviation $\sigma$ and mean $\alpha + \beta x$ (note the absence of any hats!). 
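In R, for instance, the distinction shows up directly in the two kinds of intervals that predict.lm returns; the toy data below are just for illustration. set.seed(1)
x <- runif(50, 0, 10)
y <- 2 + 0.5 * x + rnorm(50)
fit <- lm(y ~ x)
new <- data.frame(x = 5)
predict(fit, new, interval = "confidence")  # for the mean alpha + beta*x: shrinks with more data
predict(fit, new, interval = "prediction")  # for a new Y(x): wider, never shrinks to zero width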
Note especially that this prediction has two separate sources of uncertainty: uncertainty in the data $(x_i,y_i)$ leads to uncertainty in the estimated slope, intercept, and residual standard deviation ($\sigma$); in addition, there is uncertainty in just what value of $Y(x)$ will occur. This additional uncertainty--because $Y(x)$ is random--characterizes predictions. A prediction may look like an estimate (after all, $\hat{\alpha} + \hat{\beta}x$ estimates $\alpha+\beta x$ :-) and may even have the very same mathematical formula ($p(\mathbf{x})$ can sometimes be the same as $t(\mathbf{x})$), but it will come with a greater amount of uncertainty than the estimate. Here, then, in the example of OLS, we see the distinction clearly: an estimate guesses at the parameters (which are fixed but unknown numbers), while a prediction guesses at the value of a random quantity. The source of potential confusion is that the prediction usually builds on the estimated parameters and might even have the same formula as an estimator. In practice, you can distinguish estimators from predictors in two ways: purpose : an estimator seeks to know a property of the true state of nature, while a prediction seeks to guess the outcome of a random variable; and uncertainty : a predictor usually has larger uncertainty than a related estimator, due to the added uncertainty in the outcome of that random variable. Well-documented and -described predictors therefore usually come with uncertainty bands--prediction intervals--that are wider than the uncertainty bands of estimators, known as confidence intervals. A characteristic feature of prediction intervals is that they can (hypothetically) shrink as the dataset grows, but they will not shrink to zero width--the uncertainty in the random outcome is "irreducible"--whereas the widths of confidence intervals will tend to shrink to zero, corresponding to our intuition that the precision of an estimate can become arbitrarily good with sufficient amounts of data. In applying this to assessing potential investment loss, first consider the purpose: do you want to know how much you might actually lose on this investment (or this particular basket of investments) during a given period, or are you really just guessing what is the expected loss (over a large universe of investments, perhaps)? The former is a prediction, the latter an estimate. Then consider the uncertainty. How would your answer change if you had nearly infinite resources to gather data and perform analyses? If it would become very precise, you are probably estimating the expected return on the investment, whereas if you remain highly uncertain about the answer, you are making a prediction. Thus, if you're still not sure which animal you're dealing with, ask this of your estimator/predictor: how wrong is it likely to be and why? By means of both criteria (1) and (2) you will know what you have. | {
"source": [
"https://stats.stackexchange.com/questions/17773",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/7140/"
]
} |
17,781 | For the lasso problem
$\min_\beta (Y-X\beta)^T(Y-X\beta)$ such that $\|\beta\|_1 \leq t$. I often see the soft-thresholding result
$$ \beta_j^{\text{lasso}}= \mathrm{sgn}(\beta^{\text{LS}}_j)(|\beta_j^{\text{LS}}|-\gamma)^+ $$
for the orthonormal $X$ case. It is claimed that the solution can be "easily shown" to be such, but I've never seen a worked solution. Has anyone seen one or perhaps has done the derivation? | This can be attacked in a number of ways, including fairly economical approaches via the Karush–Kuhn–Tucker conditions . Below is a quite elementary alternative argument. The least squares solution for an orthonormal design Suppose $X$ is composed of orthonormal columns, so that $X^T X = I$. Then, the least-squares solution is
$$
\newcommand{\bls}{\hat{\beta}^{{\small \text{LS}}}}\newcommand{\blasso}{\hat{\beta}^{{\text{lasso}}}} \bls = (X^T X)^{-1} X^T y = X^T y \>.
$$ Some equivalent problems Via the Lagrangian form, it is straightforward to see that an equivalent problem to that considered in the question is
$$
\min_\beta \frac{1}{2} \|y - X \beta\|_2^2 + \gamma \|\beta\|_1 \>.
$$ Expanding out the first term we get $\frac{1}{2} y^T y - y^T X \beta + \frac{1}{2}\beta^T X^T X \beta = \frac{1}{2} y^T y - y^T X \beta + \frac{1}{2}\beta^T \beta$ (using $X^T X = I$), and since $y^T y$ does not contain any of the variables of interest, we can discard it and consider yet another equivalent problem,
$$
\min_\beta (- y^T X \beta + \frac{1}{2} \|\beta\|^2) + \gamma \|\beta\|_1 \>.
$$ Noting that $\bls = X^T y$, the previous problem can be rewritten as
$$
\min_\beta \sum_{i=1}^p - \bls_i \beta_i + \frac{1}{2} \beta_i^2 + \gamma |\beta_i| \> .
$$ Our objective function is now a sum of objectives, each corresponding to a separate variable $\beta_i$, so they may each be solved individually. The whole is equal to the sum of its parts Fix a certain $i$. Then, we want to minimize
$$
\mathcal L_i = -\bls_i \beta_i + \frac{1}{2}\beta_i^2 + \gamma |\beta_i| \> .
$$ If $\bls_i > 0$, then we must have $\beta_i \geq 0$ since otherwise we could flip its sign and get a lower value for the objective function. Likewise if $\bls_i < 0$, then we must choose $\beta_i \leq 0$. Case 1 : $\bls_i > 0$. Since $\beta_i \geq 0$,
$$
\mathcal L_i = -\bls_i \beta_i + \frac{1}{2}\beta_i^2 + \gamma \beta_i \> ,
$$
and differentiating this with respect to $\beta_i$ and setting equal to zero, we get $\beta_i = \bls_i - \gamma$ and this is only feasible if the right-hand side is nonnegative, so in this case the actual solution is
$$
\blasso_i = (\bls_i - \gamma)^+ = \mathrm{sgn}(\bls_i)(|\bls_i| - \gamma)^+ \>.
$$ Case 2 : $\bls_i \leq 0$. This implies we must have $\beta_i \leq 0$ and so
$$
\mathcal L_i = -\bls_i \beta_i + \frac{1}{2}\beta_i^2 - \gamma \beta_i \> .
$$
Differentiating with respect to $\beta_i$ and setting equal to zero, we get $\beta_i = \bls_i + \gamma = \mathrm{sgn}(\bls_i)(|\bls_i| - \gamma)$. But, again, to ensure this is feasible, we need $\beta_i \leq 0$, which is achieved by taking
$$
\blasso_i = \mathrm{sgn}(\bls_i)(|\bls_i| - \gamma)^+ \>.
$$ In both cases, we get the desired form, and so we are done. Final remarks Note that as $\gamma$ increases, then each of the $|\blasso_i|$ necessarily decreases, hence so does $\|\blasso\|_1$. When $\gamma = 0$, we recover the OLS solutions, and, for $\gamma > \max_i |\bls_i|$, we obtain $\blasso_i = 0$ for all $i$. | {
"source": [
"https://stats.stackexchange.com/questions/17781",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/3786/"
]
} |
18,030 | When using SVM, we need to select a kernel. I wonder how to select a kernel. Any criteria on kernel selection? | The kernel is effectively a similarity measure, so choosing a kernel according to prior knowledge of invariances as suggested by Robin (+1) is a good idea. In the absence of expert knowledge, the Radial Basis Function kernel makes a good default kernel (once you have established it is a problem requiring a non-linear model). The choice of the kernel and kernel/regularisation parameters can be automated by optimising a cross-validation based model selection (or use the radius-margin or span bounds). The simplest thing to do is to minimise a continuous model selection criterion using the Nelder-Mead simplex method, which doesn't require gradient calculation and works well for sensible numbers of hyper-parameters. If you have more than a few hyper-parameters to tune, automated model selection is likely to result in severe over-fitting, due to the variance of the model selection criterion. (It is possible to use gradient based optimization, but the performance gain is not usually worth the effort of coding it up). Automated choice of kernels and kernel/regularization parameters is a tricky issue, as it is very easy to overfit the model selection criterion (typically cross-validation based), and you can end up with a worse model than you started with. Automated model selection also can bias performance evaluation, so make sure your performance evaluation evaluates the whole process of fitting the model (training and model selection); for details, see G. C. Cawley and N. L. C. Talbot, Preventing over-fitting in model selection via Bayesian regularisation of the hyper-parameters, Journal of Machine Learning Research, volume 8, pages 841-861, April 2007. (pdf) and G. C. Cawley and N. L. C. Talbot, Over-fitting in model selection and subsequent selection bias in performance evaluation, Journal of Machine Learning Research, vol. 11, pp. 2079-2107, July 2010. (pdf) | {
"source": [
"https://stats.stackexchange.com/questions/18030",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/7259/"
]
} |
18,058 | ...assuming that I'm able to augment their knowledge about variance in an intuitive fashion ( Understanding "variance" intuitively ) or by saying: It's the average distance of the data values from the 'mean' - and since variance is in square units, we take the square root to keep the units same and that is called standard deviation. Let's assume this much is articulated and (hopefully) understood by the 'receiver'. Now what is covariance and how would one explain it in simple English without the use of any mathematical terms/formulae? (I.e., intuitive explanation. ;) Please note: I do know the formulae and the math behind the concept. I want to be able to 'explain' the same in an easy to understand fashion, without including the math; i.e., what does 'covariance' even mean? | Sometimes we can "augment knowledge" with an unusual or different approach. I would like this reply to be accessible to kindergartners and also have some fun, so everybody get out your crayons! Given paired $(x,y)$ data, draw their scatterplot. (The younger students may need a teacher to produce this for them. :-) Each pair of points $(x_i,y_i)$ , $(x_j,y_j)$ in that plot determines a rectangle: it's the smallest rectangle, whose sides are parallel to the axes, containing those points. Thus the points are either at the upper right and lower left corners (a "positive" relationship) or they are at the upper left and lower right corners (a "negative" relationship). Draw all possible such rectangles. Color them transparently, making the positive rectangles red (say) and the negative rectangles "anti-red" (blue). In this fashion, wherever rectangles overlap, their colors are either enhanced when they are the same (blue and blue or red and red) or cancel out when they are different. ( In this illustration of a positive (red) and negative (blue) rectangle, the overlap ought to be white; unfortunately, this software does not have a true "anti-red" color. The overlap is gray, so it will darken the plot, but on the whole the net amount of red is correct. ) Now we're ready for the explanation of covariance. The covariance is the net amount of red in the plot (treating blue as negative values). Here are some examples with 32 binormal points drawn from distributions with the given covariances, ordered from most negative (bluest) to most positive (reddest). They are drawn on common axes to make them comparable. The rectangles are lightly outlined to help you see them. This is an updated (2019) version of the original: it uses software that properly cancels the red and cyan colors in overlapping rectangles. Let's deduce some properties of covariance. Understanding of these properties will be accessible to anyone who has actually drawn a few of the rectangles. :-) Bilinearity. Because the amount of red depends on the size of the plot, covariance is directly proportional to the scale on the x-axis and to the scale on the y-axis. Correlation. Covariance increases as the points approximate an upward sloping line and decreases as the points approximate a downward sloping line. This is because in the former case most of the rectangles are positive and in the latter case, most are negative. Relationship to linear associations. Because non-linear associations can create mixtures of positive and negative rectangles, they lead to unpredictable (and not very useful) covariances. Linear associations can be fully interpreted by means of the preceding two characterizations. Sensitivity to outliers. 
A geometric outlier (one point standing away from the mass) will create many large rectangles in association with all the other points. It alone can create a net positive or negative amount of red in the overall picture. Incidentally, this definition of covariance differs from the usual one only by a universal constant of proportionality (independent of the data set size). The mathematically inclined will have no trouble performing the algebraic demonstration that the formula given here is always twice the usual covariance. | {
"source": [
"https://stats.stackexchange.com/questions/18058",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/4426/"
]
} |
18,082 | Following up on this question, How would you explain covariance to someone who understands only the mean? , which addresses the issue of explaining covariance to a lay person, brought up a similar question in my mind. How would one explain to a statistics neophyte the difference between covariance and correlation ? It seems that both refer to the change in one variable linked back to another variable. Similar to the referred-to question, a lack of formulae would be preferable. | The problem with covariances is that they are hard to compare: when you calculate the covariance of a set of heights and weights, as expressed in (respectively) meters and kilograms, you will get a different covariance from when you do it in other units (which already gives a problem for people doing the same thing with or without the metric system!), but also, it will be hard to tell if (e.g.) height and weight 'covary more' than, say the length of your toes and fingers, simply because the 'scale' the covariance is calculated on is different. The solution to this is to 'normalize' the covariance: you divide the covariance by something that represents the diversity and scale in both the covariates, and end up with a value that is assured to be between -1 and 1: the correlation. Whatever unit your original variables were in, you will always get the same result, and this will also ensure that you can, to a certain degree, compare whether two variables 'correlate' more than two others, simply by comparing their correlation. Note: the above assumes that the reader already understands the concept of covariance. | {
"source": [
"https://stats.stackexchange.com/questions/18082",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/561/"
]
} |
18,112 | In my work, we are comparing predicted rankings versus true rankings for some sets of data. Up until recently, we've been using Kendall-Tau alone. A group working on a similar project suggested we try to use the Goodman-Kruskal Gamma instead, and that they preferred it. I was wondering what the differences between the different rank correlation algorithms were. The best I've found was this answer , which claims Spearman is used in place of usual linear correlations, and that Kendall-Tau is less direct and more closely resembles Goodman-Kruskal Gamma. The data I'm working with doesn't seem to have any obvious linear correlations, and the data is heavily skewed and non-normal. Also, Spearman generally reports higher correlation than Kendall-Tau for our data, and I was wondering what that says about the data specifically. I'm not a statistician, so some of the papers I'm reading on these things just seem like jargon to me, sorry. | Spearman rho vs Kendall tau . These two are so much computationally different that you cannot directly compare their magnitudes. Spearman is usually higher by 1/4 to 1/3 and this makes one incorrectly conclude that Spearman is "better" for a particular dataset. The difference between rho and tau is in their ideology, proportion-of-variance for rho and probability for tau. Rho is a usual Pearson r applied for ranked data, and like r, is more sensitive to points with large moments (that is, deviations from cloud centre) than to points with small moments. Therefore rho is quite sensitive to the shape of the cloud after the ranking done: the coefficient for an oblong rhombic cloud will be higher than the coefficient for an oblong dumbbelled cloud (because sharp edges of the first are large moments). Tau is an extension of Gamma and is equally sensitive to all the data points , so it is less sensitive to peculiarities in shape of the ranked cloud. Tau is more "general" than rho, for rho is warranted only when you believe the underlying (model, or functional in population) relationship between the variables is strictly monotonic. While Tau allows for nonmonotonic underlying curve and measures which monotonic "trend", positive or negative, prevails there overall. Rho is comparable with r in magnitude; tau is not. Kendall tau as Gamma . Tau is just a standardized form of Gamma. Several related measures all have numerator $P-Q$ but differ in normalizing denominator : Gamma: $P+Q$ Somers' D("x dependent"): $P+Q+T_x$ Somers' D("y dependent"): $P+Q+T_y$ Somers' D("symmetric"): arithmetic mean of the above two Kendall's Tau-b corr. (most suitable for square tables): geometric mean of those two Kendall's Tau-c corr $^1$ . (most suitable for rectangular tables): $N^2(k-1)/(2k)$ Kendall's Tau-a corr $^2$ . (makes nо adjustment for ties): $N(N-1)/2 = P+Q+T_x+T_y+T_{xy}$ where $P$ - number of pairs of observations with "concordance", $Q$ - with "inversion"; $T_x$ - number of ties by variable X, $T_y$ - by variable Y, $T_{xy}$ – by both variables; $N$ - number of observations, $k$ - number of distinct values in that variable where this number is less. Thus, tau is directly comparable in theory and magnitude with Gamma. Rho is directly comparable in theory and magnitude with Pearson $r$ . Nick Stauner's nice answer here tells how it is possible to compare rho and tau indirectly. See also about tau and rho. $^1$ Tau-c of a variable with itself can be below $1$ : specifically, when the distribution of $k$ distinct values is unbalanced. 
$^2$ Tau-a of a variable with itself can be below $1$ : specifically, when there are ties. | {
"source": [
"https://stats.stackexchange.com/questions/18112",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/7287/"
]
} |
18,215 | It is well known that confidence intervals and testing statistical hypothesis are strongly related. My questions is focused on comparison of means for two groups based on a numerical variable. Let's assume that such hypothesis is tested using t-test. On the other side, one can compute confidence intervals for means of both groups. Is there any relation between overlapping of confidence intervals and the rejection of the null hypothesis that means are equal (in favor of the alternative that means differ - two-sided test)? For example, a test could reject the null hypothesis iff the confidence intervals do not overlap. | Yes, there are some simple relationships between confidence interval comparisons and hypothesis tests in a wide range of practical settings. However, in addition to verifying the CI procedures and t-test are appropriate for our data, we must check that the sample sizes are not too different and that the two sets have similar standard deviations. We also should not attempt to derive highly precise p-values from comparing two confidence intervals, but should be glad to develop effective approximations. In trying to reconcile the two replies already given (by @John and @Brett), it helps to be mathematically explicit. A formula for a symmetric two-sided confidence interval appropriate for the setting of this question is $$\text{CI} = m \pm \frac{t_\alpha(n) s}{\sqrt{n}}$$ where $m$ is the sample mean of $n$ independent observations, $s$ is the sample standard deviation, $2\alpha$ is the desired test size (maximum false positive rate), and $t_\alpha(n)$ is the upper $1-\alpha$ percentile of the Student t distribution with $n-1$ degrees of freedom. (This slight deviation from conventional notation simplifies the exposition by obviating any need to fuss over the $n$ vs $n-1$ distinction, which will be inconsequential anyway.) Using subscripts $1$ and $2$ to distinguish two independent sets of data for comparison, with $1$ corresponding to the larger of the two means, a non -overlap of confidence intervals is expressed by the inequality (lower confidence limit 1) $\gt$ (upper confidence limit 2); viz. , $$m_1 - \frac{t_\alpha(n_1) s_1}{\sqrt{n_1}} \gt m_2 + \frac{t_\alpha(n_2) s_2}{\sqrt{n_2}}.$$ This can be made to look like the t-statistic of the corresponding hypothesis test (to compare the two means) with simple algebraic manipulations, yielding $$\frac{m_1-m_2}{\sqrt{s_1^2/n_1 + s_2^2/n_2}} \gt \frac{s_1\sqrt{n_2}t_\alpha(n_1) + s_2\sqrt{n_1}t_\alpha(n_2)}{\sqrt{n_1 s_2^2 + n_2 s_1^2}}.$$ The left hand side is the statistic used in the hypothesis test; it is usually compared to a percentile of a Student t distribution with $n_1+n_2$ degrees of freedom: that is, to $t_\alpha(n_1+n_2)$ . The right hand side is a biased weighted average of the original t distribution percentiles. The analysis so far justifies the reply by @Brett: there appears to be no simple relationship available. However, let's probe further. I am inspired to do so because, intuitively, a non-overlap of confidence intervals ought to say something! First, notice that this form of the hypothesis test is valid only when we expect $s_1$ and $s_2$ to be at least approximately equal. (Otherwise we face the notorious Behrens-Fisher problem and its complexities.) 
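As a quick aside before continuing, the conservatism of the "intervals do not overlap" rule can be confirmed directly by simulation (a sketch, assuming equal means, equal standard deviations, and n1 = n2 = 10):
# Simulation sketch: empirical size of the non-overlap rule for two 95% CIs
set.seed(123)
n <- 10
no.overlap <- replicate(20000, {
  x <- rnorm(n); y <- rnorm(n)
  hx <- qt(0.975, n - 1) * sd(x) / sqrt(n)
  hy <- qt(0.975, n - 1) * sd(y) / sqrt(n)
  (mean(x) - hx > mean(y) + hy) || (mean(y) - hy > mean(x) + hx)
})
mean(no.overlap) # close to 0.005, far smaller than the nominal 0.05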
Upon checking the approximate equality of the $s_i$ , we could then create an approximate simplification in the form $$\frac{m_1-m_2}{s\sqrt{1/n_1 + 1/n_2}} \gt \frac{\sqrt{n_2}t_\alpha(n_1) + \sqrt{n_1}t_\alpha(n_2)}{\sqrt{n_1 + n_2}}.$$ Here, $s \approx s_1 \approx s_2$ . Realistically, we should not expect this informal comparison of confidence limits to have the same size as $\alpha$ . Our question then is whether there exists an $\alpha'$ such that the right hand side is (at least approximately) equal to the correct t statistic. Namely, for what $\alpha'$ is it the case that $$t_{\alpha'}(n_1+n_2) = \frac{\sqrt{n_2}t_\alpha(n_1) + \sqrt{n_1}t_\alpha(n_2)}{\sqrt{n_1 + n_2}}\text{?}$$ It turns out that for equal sample sizes, $\alpha$ and $\alpha'$ are connected (to pretty high accuracy) by a power law. For instance, here is a log-log plot of the two for the cases $n_1=n_2=2$ (lowest blue line), $n_1=n_2=5$ (middle red line), $n_1=n_2=\infty$ (highest gold line). The middle green dashed line is an approximation described below. The straightness of these curves belies a power law. It varies with $n=n_1=n_2$ , but not much. The answer does depend on the set $\{n_1, n_2\}$ , but it is natural to wonder how much it really varies with changes in the sample sizes. In particular, we could hope that for moderate to large sample sizes (maybe $n_1 \ge 10, n_2 \ge 10$ or thereabouts) the sample size makes little difference. In this case, we could develop a quantitative way to relate $\alpha'$ to $\alpha$ . This approach turns out to work provided the sample sizes are not too different from each other. In the spirit of simplicity, I will report an omnibus formula for computing the test size $\alpha'$ corresponding to the confidence interval size $\alpha$ . It is $$\alpha' \approx e \alpha^{1.91};$$ that is, $$\alpha' \approx \exp(1 + 1.91\log(\alpha)).$$ This formula works reasonably well in these common situations: Both sample sizes are close to each other, $n_1 \approx n_2$ , and $\alpha$ is not too extreme ( $\alpha \gt .001$ or so). One sample size is within about three times the other and the smallest isn't too small (roughly, greater than $10$ ) and again $\alpha$ is not too extreme. One sample size is within three times the other and $\alpha \gt .02$ or so. The relative error (correct value divided by the approximation) in the first situation is plotted here, with the lower (blue) line showing the case $n_1=n_2=2$ , the middle (red) line the case $n_1=n_2=5$ , and the upper (gold) line the case $n_1=n_2=\infty$ . Interpolating between the latter two, we see that the approximation is excellent for a wide range of practical values of $\alpha$ when sample sizes are moderate (around 5-50) and otherwise is reasonably good. This is more than good enough for eyeballing a bunch of confidence intervals. To summarize, the failure of two $2\alpha$ -size confidence intervals of means to overlap is significant evidence of a difference in means at a level equal to $2e \alpha^{1.91}$ , provided the two samples have approximately equal standard deviations and are approximately the same size. I'll end with a tabulation of the approximation for common values of $2\alpha$ . In the left hand column is the nominal size $2\alpha$ of the original confidence interval; in the right hand column is the actual size $2\alpha^\prime$ of the comparison of two such intervals: $$\begin{array}{ll}
2\alpha & 2\alpha^\prime \\ \hline
0.1 &0.02\\
0.05 &0.005\\
0.01 &0.0002\\
0.005 &0.00006\\
\end{array}$$ For example, when a pair of two-sided 95% CIs ( $2\alpha=.05$ ) for samples of approximately equal sizes do not overlap, we should take the means to be significantly different, $p \lt .005$ . The correct p-value (for equal sample sizes $n$ ) actually lies between $.0037$ ( $n=2$ ) and $.0056$ ( $n=\infty$ ). This result justifies (and I hope improves upon) the reply by @John. Thus, although the previous replies appear to be in conflict, both are (in their own ways) correct. | {
"source": [
"https://stats.stackexchange.com/questions/18215",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1215/"
]
} |
18,348 | I would like your thoughts about the differences between cross validation and bootstrapping to estimate the prediction error. Does one work better for small dataset sizes or large datasets? | It comes down to variance and bias (as usual). CV tends to be less biased but K-fold CV has fairly large variance. On the other hand, bootstrapping tends to drastically reduce the variance but gives more biased results (they tend to be pessimistic). Other bootstrapping methods have been adapted to deal with the bootstrap bias (such as the 632 and 632+ rules). Two other approaches would be "Monte Carlo CV" aka "leave-group-out CV" which does many random splits of the data (sort of like mini-training and test splits). Variance is very low for this method and the bias isn't too bad if the percentage of data in the hold-out is low. Also, repeated CV does K-fold several times and averages the results similar to regular K-fold. I'm most partial to this since it keeps the low bias and reduces the variance. Edit For large sample sizes, the variance issues become less important and the computational part is more of an issues. I still would stick by repeated CV for small and large sample sizes. Some relevant research is below (esp Kim and Molinaro). References Bengio, Y., & Grandvalet, Y. (2005). Bias in estimating the variance of k-fold cross-validation. Statistical modeling and analysis for complex data problems, 75–95. Braga-Neto, U. M. (2004). Is cross-validation valid for small-sample microarray classification Bioinformatics, 20(3), 374–380. doi:10.1093/bioinformatics/btg419 Efron, B. (1983). Estimating the error rate of a prediction rule: improvement on cross-validation. Journal of the American Statistical Association, 316–331. Efron, B., & Tibshirani, R. (1997). Improvements on cross-validation: The. 632+ bootstrap method. Journal of the American Statistical Association, 548–560. Furlanello, C., Merler, S., Chemini, C., & Rizzoli, A. (1997). An application of the bootstrap 632+ rule to ecological data. WIRN 97. Jiang, W., & Simon, R. (2007). A comparison of bootstrap methods and an adjusted bootstrap approach for estimating the prediction error in microarray classification. Statistics in
Medicine, 26(29), 5320–5334. Jonathan, P., Krzanowski, W., & McCarthy, W. (2000). On the use of cross-validation to assess performance in multivariate prediction. Statistics and Computing, 10(3), 209–229. Kim, J.-H. (2009). Estimating classification error rate: Repeated cross-validation, repeated hold-out and bootstrap. Computational Statistics and Data Analysis, 53(11), 3735–3745. doi:10.1016/j.csda.2009.04.009 Kohavi, R. (1995). A study of cross-validation and bootstrap for accuracy estimation and model selection. International Joint Conference on Artificial Intelligence, 14, 1137–1145. Martin, J., & Hirschberg, D. (1996). Small sample statistics for classification error rates I: Error rate measurements. Molinaro, A. M. (2005). Prediction error estimation: a comparison of resampling methods. Bioinformatics, 21(15), 3301–3307. doi:10.1093/bioinformatics/bti499 Sauerbrei, W., & Schumacher1, M. (2000). Bootstrap and Cross-Validation to Assess Complexity of Data-Driven Regression Models. Medical Data Analysis, 26–28. Tibshirani, RJ, & Tibshirani, R. (2009). A bias correction for the minimum error rate in cross-validation. Arxiv preprint arXiv:0908.2904. | {
"source": [
"https://stats.stackexchange.com/questions/18348",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/9973/"
]
} |
18,375 | I have four different time series of hourly measurements: The heat consumption inside a house The temperature outside the house The solar radiation The wind speed I want to be able to predict the heat consumption inside the house. There is a clear seasonal trend, both on a yearly basis, and on a daily basis. Since there is a clear correlation between the different series, I want to fit them using an ARIMAX-model. This can be done in R, using the function arimax from the package TSA. I tried to read the documentation on this function, and to read up on transfer functions, but so far, my code: regParams = ts.union(ts(dayy))
transferParams = ts.union(ts(temp))
model10 = arimax(heat,order=c(2,1,1),seasonal=list(order=c(0,1,1),period=24),xreg=regParams,xtransf=transferParams,transfer=list(c(1,1)))
pred10 = predict(model10, newxreg=regParams) gives me: where the black line is the actual measured data, and the green line is my fitted model in comparison. Not only is it not a good model, but clearly something is wrong. I will admit that my knowledge of ARIMAX-models and transfer functions is limited. In the function arimax(), (as far as I have understood), xtransf is the exogenous time series which I want to use (using transfer functions) to predict my main time series. But what is the difference between xreg and xtransf really? More generally, what have I done wrong? I would like to be able to get a better fit than the one achieved from lm(heat ~ temp radi wind*time). Edits: Based on some of the comments, I removed transfer, and added xreg instead: regParams = ts.union(ts(dayy), ts(temp), ts(time))
model10 = arimax(heat,order=c(2,1,1),seasonal=list(order=c(0,1,1),period=24),xreg=regParams) where dayy is the "number day of the year", and time is the hour of the day. Temp is again the temperature outside. This gives me the following result: which is better, but not nearly what I expected to see. | You're going to have a little bit of trouble modeling a series with 2 levels of seasonality using an ARIMA model. Getting this right is going to be highly dependent on setting things up correctly. Have you considered a simple linear model yet? They're a lot faster and easier to fit than ARIMA models, and if you use dummy variables for your different seasonality levels they are often quite accurate.
1. I'm assuming you have hourly data, so make sure your TS object is set up with a frequency of 24.
2. You can model other levels of seasonality using dummy variables. For example, you might want a set of 0/1 dummies representing the month of the year.
3. Include the dummy variables in the xreg argument, along with any covariates (like temperature).
4. Fit the model with the arima function in base R. This function can handle ARMAX models through the use of the xreg argument.
5. Try the Arima and auto.arima functions in the forecast package. auto.arima is nice because it will automatically find good parameters for your arima model. However, it will take FOREVER to fit on your dataset.
6. Try the tslm function in the forecast package, using dummy variables for each level of seasonality. This will fit a lot faster than the Arima model, and may even work better in your situation.
7. If 4/5/6 don't work, THEN start worrying about transfer functions. You have to crawl before you can walk.
8. If you are planning to forecast into the future, you will first need to forecast your xreg variables. This is easy for seasonal dummies, but you'll have to think about how to make good weather forecasts. Maybe use the median of historical data?
Here is an example of how I would approach this:
#Setup a fake time series
set.seed(1)
library(lubridate)
index <- ISOdatetime(2010,1,1,0,0,0)+1:8759*60*60
month <- month(index)
hour <- hour(index)
usage <- 1000+10*rnorm(length(index))-25*(month-6)^2-(hour-12)^2
usage <- ts(usage,frequency=24)
#Create monthly dummies. Add other xvars to this matrix
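# (drop the first column so that January is the baseline and the dummies are not collinear with the model's mean term)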
xreg <- model.matrix(~as.factor(month))[,2:12]
colnames(xreg) <- c('Feb','Mar','Apr','May','Jun','Jul','Aug','Sep','Oct','Nov','Dec')
#Fit a model
library(forecast)
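# Seasonal AR(1) at period 24 picks up the daily cycle; the monthly dummies enter through xreg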
model <- Arima(usage, order=c(0,0,0), seasonal=list(order=c(1,0,0), period=24), xreg=xreg)
plot(usage)
lines(fitted(model),col=2)
#Benchmark against other models
model2 <- tslm(usage~as.factor(month)+as.factor(hour))
model3 <- tslm(usage~as.factor(month))
model4 <- rep(mean(usage),length(usage))
#Compare the 4 models
library(plyr) #for rbind.fill
ACC <- rbind.fill( data.frame(t(accuracy(model))),
data.frame(t(accuracy(model2))),
data.frame(t(accuracy(model3))),
data.frame(t(accuracy(model4,usage)))
)
ACC <- round(ACC,2)
ACC <- cbind(Type=c('Arima','LM1','Monthly Mean','Mean'),ACC)
ACC[order(ACC$MAE),] | {
"source": [
"https://stats.stackexchange.com/questions/18375",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/7339/"
]
} |
18,391 | Is there a method to understand if two lines are (more or less) parallel? I have two lines generated from linear regressions and I would like to understand if they are parallel. In other words, I would like to get the different of the slopes of those two lines. Is there an R function to calculate this? EDIT: ... and how can I get the slope (in degrees) of a linear regression line? | I wonder if I am missing something obvious, but couldn't you do this statistically using ANCOVA? An important issue is that the slopes in the two regressions are estimated with error. They are estimates of the slopes in the populations at large. If the concern is whether the two regression lines are parallel or not in the population then it doesn't make sense to compare $a_1$ with $a_2$ directly for exact equivalence; they are both subject to error/uncertainty that needs to be taken into account. If we think about this from a statistical point of view, and we can combine the data on $x$ and $y$ for both data sets in some meaningful way (i.e. $x$ and $y$ in both sets are drawn from the two populations with similar ranges for the two variables it is just the relationship between them that are different in the two populations), then we can fit the following two models: $$\hat{y} = b_0 + b_1x + b_2g$$ and $$\hat{y} = b_0 + b_1x + b_2g + b_3xg$$ Where $b_i$ are the model coefficients, and $g$ is a grouping variable/factor, indicating which data set each observation belongs to. We can use an ANOVA table or F-ratio to test if the second, more complex model fits the data better than the simpler model. The simpler model states that the slopes of the two lines are the same ($b_1$) but the lines are offset from one another by an amount $b_2$. The more complex model includes an interaction between the slope of the line and the grouping variable. If the coefficient for this interaction term is significantly different from zero or the ANOVA/F-ratio indicates the more complex model fits the data better then we must reject the Null hypothesis that that two lines are parallel. Here is an example in R using dummy data. First, data with equal slopes: set.seed(2)
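# Simulated data: 100 points with random group labels A/B, intercepts 2 and 5, and a common slope of 0.5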
samp <- factor(sample(rep(c("A","B"), each = 50)))
d1 <- data.frame(y = c(2,5)[as.numeric(samp)] + (0.5 * (1:100)) + rnorm(100),
x = 1:100,
g = samp)
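# Full model with group-specific slopes (the x:g interaction) versus the null model with a common slope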
m1 <- lm(y ~ x * g, data = d1)
m1.null <- lm(y ~ x + g, data = d1)
anova(m1.null, m1) Which gives > anova(m1.null, m1)
Analysis of Variance Table
Model 1: y ~ x + g
Model 2: y ~ x * g
Res.Df RSS Df Sum of Sq F Pr(>F)
1 97 122.29
2 96 122.13 1 0.15918 0.1251 0.7243 Indicating that we fail to reject the null hypothesis of equal slopes in this sample of data. Of course, we'd want to assure ourselves that we had sufficient power to detect a difference if there really was one so that we were not lead to erroneously fail to reject the null because our sample size was too small for the expected effect. Now with different slopes. set.seed(42)
x <- seq(1, 100, by = 2)
d2 <- data.frame(y = c(2 + (0.5 * x) + rnorm(50),
5 + (1.5 * x) + rnorm(50)),
x = x,
g = rep(c("A","B"), each = 50))
m2 <- lm(y ~ x * g, data = d2)
m2.null <- lm(y ~ x + g, data = d2)
anova(m2.null, m2) Which gives: > anova(m2.null, m2)
Analysis of Variance Table
Model 1: y ~ x + g
Model 2: y ~ x * g
Res.Df RSS Df Sum of Sq F Pr(>F)
1 97 21132.0
2 96 103.8 1 21028 19439 < 2.2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Here we have substantial evidence against the null hypothesis and thus we can reject it in favour of the alternative (in other words, we reject the hypothesis that the slopes of the two lines are equal). The interaction terms in the two models I fitted ($b_3xg$) give the estimated difference in slopes for the two groups. For the first model, the estimate of the difference in slopes is small (~0.003) > coef(m1)
(Intercept) x gB x:gB
2.100068977 0.500596394 2.659509181 0.002846393 and a $t$-test on this would fail to reject the null hypothesis that this difference in slopes is 0: > summary(m1)
Call:
lm(formula = y ~ x * g, data = d1)
Residuals:
Min 1Q Median 3Q Max
-2.32886 -0.81224 -0.01569 0.93010 2.29984
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 2.100069 0.334669 6.275 1.01e-08 ***
x 0.500596 0.005256 95.249 < 2e-16 ***
gB 2.659509 0.461191 5.767 9.82e-08 ***
x:gB 0.002846 0.008047 0.354 0.724
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1.128 on 96 degrees of freedom
Multiple R-squared: 0.9941, Adjusted R-squared: 0.9939
F-statistic: 5347 on 3 and 96 DF, p-value: < 2.2e-16 If we turn to the model fitted to the second data set, where we made the slopes for the two groups differ, we see that the estimated difference in slopes of the two lines is ~1 unit. > coef(m2)
(Intercept) x gB x:gB
2.3627432 0.4920317 2.8931074 1.0048653 The slope for group "A" is ~0.49 ( x in the above output), whilst to get the slope for group "B" we need to add the difference slopes (give by the interaction term remember) to the slope of group "A"; ~0.49 + ~1 = ~1.49. This is pretty close to the stated slope for group "B" of 1.5. A $t$-test on this difference of slopes also indicates that the estimate for the difference is bounded away from 0: > summary(m2)
Call:
lm(formula = y ~ x * g, data = d2)
Residuals:
Min 1Q Median 3Q Max
-3.1962 -0.5389 0.0373 0.6952 2.1072
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 2.362743 0.294220 8.031 2.45e-12 ***
x 0.492032 0.005096 96.547 < 2e-16 ***
gB 2.893107 0.416090 6.953 4.33e-10 ***
x:gB 1.004865 0.007207 139.424 < 2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1.04 on 96 degrees of freedom
Multiple R-squared: 0.9994, Adjusted R-squared: 0.9994
F-statistic: 5.362e+04 on 3 and 96 DF, p-value: < 2.2e-16 | {
"source": [
"https://stats.stackexchange.com/questions/18391",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/5405/"
]
} |
18,431 | Q: Does there exist experimental evidence supporting Tufte-style, minimalist, data-speak visualizations over the chart-junked visualizations of, say, Nigel Holmes ? I asked how to add chart-junk to R plots here and responders threw a hefty amount of snark back at me. So, surely, there must be some experimental evidence, to which I'm not privy, supporting their anti-chart junk position---more evidence than just "Tufte said so." Right? If such evidence exists it would contradict a lot of psychological research we have regarding humans, their memory recall, and pattern identification. So I'd certainly be excited to read about it. A little anecdote: at a conference I asked Edward Tufte how he regards experimental evidence finding that junk animations and videos improve humans' understanding and memory recall [see research cited in Brain Rules] . His response: "Don't believe them." So much for the scientific method! P.S. Of course, I'm needling people a little here. I own all of Tufte's books and think his work is incredible. I just think that his supporters have oversold some of his arguments. NOTE: This is a re-post of a question I asked on StackOverflow . Moderators closed it because it wasn't programming-specific. CrossValidated might be a better home. UPDATE: There are some useful links in the comments section of my original question post---namely, to the work of Chambers, Cleveland, and the datavis group at Stanford. UPDATE: This question deals with similar subject matter. | The literature is vast. Experimental evidence is abundant but incomplete. For an introduction that focuses on the psychological and semiotic investigations, see Alan M. MacEachren, How Maps Work (1995; 2004 in paperback). Jump directly to chapter 9 (near the end) and then work backwards through any preliminary material that interests you. The bibliography is extensive (over 400 documents) but is getting a little long in the tooth. Although the title suggests a focus on cartography, most of the book is relevant to how humans create meaning out of and interpret graphical information. Don't expect to get a definitive answer out of any amount of such research. Remember that Tufte, Cleveland, and others were primarily focused on creating graphics that enable (above all) accurate, insightful communication of and interpretation of data. Other graphics artists and researchers have other aims, such as influencing people, creating effective propaganda, simplifying complex datasets, and expressing their artistic sensibilities within a graphical medium. These are almost diametrically opposed to the first set of objectives, whence the hugely differing approaches and recommendations you will find. Given this, I think a review of Cleveland's research should be sufficiently convincing that many of Tufte's design recommendations have decent experimental justification. These include his use of the Lie Factor, the Data-Ink Ratio, small multiples, and chartjunk for critically evaluating and designing statistical graphics. | {
"source": [
"https://stats.stackexchange.com/questions/18431",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/3577/"
]
} |
18,433 | Given the random variable $$Y = \max(X_1, X_2, \ldots, X_n)$$ where $X_i$ are IID uniform variables, how do I calculate the PDF of $Y$? | It is possible that this question is homework but I felt this classical elementary probability question was still lacking a complete answer after several months, so I'll give one here. From the problem statement, we want the distribution of $$Y = \max \{ X_1, ..., X_n \}$$ where $X_1, ..., X_n$ are iid ${\rm Uniform}(a,b)$ . We know that $Y < x$ if and only if every element of the sample is less than $x$ . Then this, as indicated in @varty's hint, combined with the fact that the $X_i$ 's are independent, allows us to deduce $$ P(Y \leq x) = P(X_1 \leq x, ..., X_n \leq x) = \prod_{i=1}^{n} P(X_i \leq x) = F_{X}(x)^n$$ where $F_{X}(x)$ is the CDF of the uniform distribution that is $\frac{y-a}{b-a}$ . Therefore the CDF of $Y$ is $$F_{Y}(y) = P(Y \leq y) = \begin{cases}
0 & y \leq a \\
\phantom{} \left( \frac{y-a}{b-a} \right)^n & y\in(a,b) \\
1 & y \geq b \\
\end{cases}$$ Since $Y$ has an absolutely continuous distribution we can derive its density by differentiating the CDF . Therefore the density of $Y$ is $$ p_{Y}(y) = \frac{n(y-a)^{n-1}}{(b-a)^{n}}$$ In the special case where $a=0,b=1$ , we have that $p_{Y}(y)=ny^{n-1}$ , which is the density of a Beta distribution with $\alpha=n$ and $\beta=1$ , since ${\rm Beta}(n,1) = \frac{\Gamma(n+1)}{\Gamma(n)\Gamma(1)}=\frac{n!}{(n-1)!} = n$ . As a note, the sequence you get if you were to sort your sample in increasing order - $X_{(1)}, ..., X_{(n)}$ - are called the order statistics . A generalization of this answer is that all order statistics of a ${\rm Uniform}(0,1)$ distributed sample have a Beta distribution , as noted in @bnaul's answer. | {
"source": [
"https://stats.stackexchange.com/questions/18433",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/7395/"
]
} |
18,438 | I just noticed that integrating a univariate random variable's quantile function (inverse cdf) from p=0 to p=1 produces the variable's mean. I haven't heard of this relationship before now, so I'm wondering: Is this always the case? If so, is this relationship widely known? Here is an example in python: from math import sqrt, pi, exp
from scipy.integrate import quad
from scipy.special import erfinv
def normalPdf(x, mu, sigma):
return 1.0 / sqrt(2.0 * pi * sigma**2.0) * exp(-(x - mu)**2.0 / (2.0 * sigma**2.0))
def normalQf(p, mu, sigma):
return mu + sigma * sqrt(2.0) * erfinv(2.0 * p - 1.0)
mu = 2.5
sigma = 1.3
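# integrate the quantile function over (0, 1); the result should equal mu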
quantileIntegral = quad(lambda p: normalQf(p,mu,sigma), 0.0, 1.0)[0]
print quantileIntegral # Prints 2.5. | Let $F$ be the CDF of the random variable $X$ , so the inverse CDF can be written $F^{-1}$ . In your integral make the substitution $p = F(x)$ , $dp = F'(x)dx = f(x)dx$ to obtain $$\int_0^1F^{-1}(p)dp = \int_{-\infty}^{\infty}x f(x) dx = \mathbb{E}_F[X].$$ This is valid for continuous distributions. Care must be taken for other distributions because an inverse CDF hasn't a unique definition. Edit When the variable is not continuous, it does not have a distribution that is absolutely continuous with respect to Lebesgue measure, requiring care in the definition of the inverse CDF and care in computing integrals. Consider, for instance, the case of a discrete distribution. By definition, this is one whose CDF $F$ is a step function with steps of size $\Pr_F(x)$ at each possible value $x$ . This figure shows the CDF of a Bernoulli $(2/3)$ distribution scaled by $2$ . That is, the random variable has a probability $1/3$ of equalling $0$ and a probability of $2/3$ of equalling $2$ . The heights of the jumps at $0$ and $2$ give their probabilities. The expectation of this variable evidently equals $0\times(1/3)+2\times(2/3)=4/3$ . We could define an "inverse CDF" $F^{-1}$ by requiring $$F^{-1}(p) = x \text{ if } F(x) \ge p \text{ and } F(x^{-}) \lt p.$$ This means that $F^{-1}$ is also a step function. For any possible value $x$ of the random variable, $F^{-1}$ will attain the value $x$ over an interval of length $\Pr_F(x)$ . Therefore its integral is obtained by summing the values $x\Pr_F(x)$ , which is just the expectation. This is the graph of the inverse CDF of the preceding example. The jumps of $1/3$ and $2/3$ in the CDF become horizontal lines of these lengths at heights equal to $0$ and $2$ , the values to whose probabilities they correspond. (The Inverse CDF is not defined beyond the interval $[0,1]$ .) Its integral is the sum of two rectangles, one of height $0$ and base $1/3$ , the other of height $2$ and base $2/3$ , totaling $4/3$ , as before. In general, for a mixture of a continuous and a discrete distribution, we need to define the inverse CDF to parallel this construction: at each discrete jump of height $p$ we must form a horizontal line of length $p$ as given by the preceding formula. | {
"source": [
"https://stats.stackexchange.com/questions/18438",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/7025/"
]
} |
18,480 | I'm wondering if it makes a difference in interpretation whether only the dependent, both the dependent and independent, or only the independent variables are log transformed. Consider the case of log(DV) = Intercept + B1*IV + Error I can interpret the IV as the percent increase but how does this change when I have log(DV) = Intercept + B1*log(IV) + Error or when I have DV = Intercept + B1*log(IV) + Error ? | Charlie provides a nice, correct explanation. The Statistical Computing site at UCLA has some further examples: https://stats.oarc.ucla.edu/sas/faq/how-can-i-interpret-log-transformed-variables-in-terms-of-percent-change-in-linear-regression , and https://stats.oarc.ucla.edu/other/mult-pkg/faq/general/faqhow-do-i-interpret-a-regression-model-when-some-variables-are-log-transformed Just to complement Charlie's answer, below are specific interpretations of your examples. As always, coefficient interpretations assume that you can defend your model, that the regression diagnostics are satisfactory, and that the data are from a valid study. Example A : No transformations DV = Intercept + B1 * IV + Error "One unit increase in IV is associated with a ( B1 ) unit increase in DV." Example B : Outcome transformed log(DV) = Intercept + B1 * IV + Error "One unit increase in IV is associated with a ( B1 * 100 ) percent increase in DV." Example C : Exposure transformed DV = Intercept + B1 * log(IV) + Error "One percent increase in IV is associated with a ( B1 / 100 ) unit increase in DV." Example D : Outcome transformed and exposure transformed log(DV) = Intercept + B1 * log(IV) + Error "One percent increase in IV is associated with a ( B1 ) percent increase in DV." | {
"source": [
"https://stats.stackexchange.com/questions/18480",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/5837/"
]
} |
18,595 | Brief Summary Why is it more common for logistic regression (with odds ratios) to be used in cohort studies with binary outcomes, as opposed to Poisson regression (with relative risks)? Background Undergraduate and graduate statistics and epidemiology courses, in my experience, generally teach that logistic regression should be used for modelling data with binary outcomes, with risk estimates reported as odds ratios. However, Poisson regression (and related: quasi-Poisson, negative binomial, etc.) can also be used to model data with binary outcomes and, with appropriate methods (e.g. robust sandwich variance estimator), it provides valid risk estimates and confidence levels. E.g., Greenland S., Model-based estimation of relative risks and other epidemiologic measures in studies of common outcomes and in case-control studies , Am J Epidemiol. 2004 Aug 15;160(4):301-5. Zou G., A modified Poisson regression approach to prospective studies with binary data , Am J Epidemiol. 2004 Apr 1;159(7):702-6. Zou G.Y. and Donner A., Extension of the modified Poisson regression model to prospective studies with correlated binary data , Stat Methods Med Res. 2011 Nov 8. From Poisson regression, relative risks can be reported, which some have argued are easier to interpret compared with odds ratios, especially for frequent outcomes, and especially by individuals without a strong background in statistics. See Zhang J. and Yu K.F., What's the relative risk? A method of correcting the odds ratio in cohort studies of common outcomes , JAMA. 1998 Nov 18;280(19):1690-1. From reading the medical literature, among cohort studies with binary outcomes it seems that it is still far more common to report odds ratios from logistic regressions rather than relative risks from Poisson regressions. Questions For cohort studies with binary outcomes: Is there good reason to report odds ratios from logistic regressions rather than relative risks from Poisson regressions? If not, can the infrequency of Poisson regressions with relative risks in the medical literature be attributed mostly to a lag between methodological theory and practice among scientists, clinicians, statisticians, and epidemiologists? Should intermediate statistics and epidemiology courses include more discussion of Poisson regression for binary outcomes? Should I be encouraging students and colleagues to consider Poisson regression over logistic regression when appropriate? | An answer to all four of your questions, preceeded by a note: It's not actually all that common for modern epidemiology studies to report an odds ratio from a logistic regression for a cohort study. It remains the regression technique of choice for case-control studies, but more sophisticated techniques are now the de facto standard for analysis in major epidemiology journals like Epidemiology , AJE or IJE . There will be a greater tendency for them to show up in clinical journals reporting the results of observational studies. There's also going to be some problems because Poisson regression can be used in two contexts: What you're referring to, wherein it's a substitute for a binomial regression model, and in a time-to-event context, which is extremely common for cohort studies. More details in the particular question answers: For a cohort study, not really no. There are some extremely specific cases where say, a piecewise logistic model may have been used, but these are outliers. 
The whole point of a cohort study is that you can directly measure the relative risk, or many related measures, and don't have to rely on an odds ratio. I will however make two notes: A Poisson regression is estimating often a rate , not a risk, and thus the effect estimate from it will often be noted as a rate ratio (mainly, in my mind, so you can still abbreviate it RR) or an incidence density ratio (IRR or IDR). So make sure in your search you're actually looking for the right terms: there are many cohort studies using survival analysis methods. For these studies, Poisson regression makes some assumptions that are problematic, notably that the hazard is constant. As such it is much more common to analyze a cohort study using Cox proportional hazards models, rather than Poisson models, and report the ensuing hazard ratio (HR). If pressed to name a "default" method with which to analyze a cohort, I'd say epidemiology is actually dominated by the Cox model. This has its own problems, and some very good epidemiologists would like to change it, but there it is. There are two things I might attribute the infrequency to - an infrequency I don't necessarily think exists to the extent you suggest. One is that yes - "epidemiology" as a field isn't exactly closed, and you get huge numbers of papers from clinicians, social scientists, etc. as well as epidemiologists of varying statistical backgrounds. The logistic model is commonly taught, and in my experience many researchers will turn to the familiar tool over the better tool. The second is actually a question of what you mean by "cohort" study. Something like the Cox model, or a Poisson model, needs an actual estimate of person-time. It's possible to get a cohort study that follows a somewhat closed population for a particular period - especially in early "Intro to Epi" examples, where survival methods like Poisson or Cox models aren't so useful. The logistic model can be used to estimate an odds ratio that, with sufficiently low disease prevalence, approximates a relative risk. Other regression techniques that directly estimate it, like binomial regression, have convergence issues that can easily derail a new student. Keep in mind the Zou papers you cite are both using a Poisson regression technique to get around the convergence issues of binomial regression. But binomial-appropriate cohort studies are actually a small slice of the "cohort study pie". Yes. Frankly, survival analysis methods should come up earlier than they often do. My pet theory is that the reason this isn't so is that methods like logistic regression are easier to code . Techniques that are easier to code, but come with much larger caveats about the validity of their effect estimates, are taught as the "basic" standard, which is a problem. You should be encouraging students and colleagues to use the appropriate tool. Generally for the field, I think you'd probably be better off suggesting a consideration of the Cox model over a Poisson regression, as most reviewers would (and should) swiftly bring up concerns about the assumption of a constant hazard. But yes, the sooner you can get them away from "How do I shoehorn my question into a logistic regression model?" the better off we'll all be. But yes, if you're looking at a study without time, students should be introduced to both binomial regression, and alternative approaches, like Poisson regression, which can be used in case of convergence problems. | {
"source": [
"https://stats.stackexchange.com/questions/18595",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/2981/"
]
} |
18,844 | Say I have some historical data e.g., past stock prices, airline ticket price fluctuations, past financial data of the company... Now someone (or some formula) comes along and says "let's take/use the log of the distribution" and here's where I go WHY ? Questions: WHY should one take the log of the distribution in the first place? WHAT does the log of the distribution 'give/simplify' that the original distribution couldn't/didn't? Is the log transformation 'lossless'? I.e., when transforming to log-space and analyzing the data, do the same conclusions hold for the original distribution? How come? And lastly WHEN to take the log of the distribution? Under what conditions does one decide to do this? I've really wanted to understand log-based distributions (for example lognormal) but I never understood the when/why aspects - i.e., the log of the distribution is a normal distribution, so what? What does that even tell and me and why bother? Hence the question! UPDATE : As per @whuber's comment I looked at the posts and for some reason I do understand the use of log transforms and their application in linear regression, since you can draw a relation between the independent variable and the log of the dependent variable. However, my question is generic in the sense of analyzing the distribution itself - there is no relation per se that I can conclude to help understand the reason of taking logs to analyze a distribution. I hope I'm making sense :-/ In regression analysis you do have constraints on the type/fit/distribution of the data and you can transform it and define a relation between the independent and (not transformed) dependent variable. But when/why would one do that for a distribution in isolation where constraints of type/fit/distribution are not necessarily applicable in a framework (like regression). I hope the clarification makes things more clear than confusing :) This question deserves a clear answer as to "WHY and WHEN" | If you assume a model form that is non-linear but can be transformed to a linear model such as $\log Y = \beta_0 + \beta_1t$ then one would be justified in taking logarithms of $Y$ to meet the specified model form. In general whether or not you have causal series , the only time you would be justified or correct in taking the Log of $Y$ is when it can be proven that the Variance of $Y$ is proportional to the Expected Value of $Y^2$ . I don't remember the original source for the following but it nicely summarizes the role of power transformations. It is important to note that the distributional assumptions are always about the error process not the observed Y, thus it is a definite "no-no" to analyze the original series for an appropriate transformation unless the series is defined by a simple constant. Unwarranted or incorrect transformations including differences should be studiously avoided as they are often an ill-fashioned /ill-conceived attempt to deal with unidentified anomalies/level shifts/time trends or changes in parameters or changes in error variance. A classic example of this is discussed starting at slide 60 here http://www.autobox.com/cms/index.php/afs-university/intro-to-forecasting/doc_download/53-capabilities-presentation where three pulse anomalies (untreated) led to an unwarranted log transformation by early researchers. Unfortunately some of our current researchers are still making the same mistake. 
Several commonly used variance-stabilizing transformations:

| Relationship of $\sigma^2$ to $E(y)$ | Transformation |
| --- | --- |
| $\sigma^2 \propto$ constant | $y'=y$ (no transformation) |
| $\sigma^2 \propto E(y)$ | $y' = \sqrt y$ (square root: Poisson data) |
| $\sigma^2 \propto E(y)(1-E(y))$ | $y' = \sin^{-1}(\sqrt y)$ (arcsine; binomial proportions $0\le y_i \le 1$) |
| $\sigma^2 \propto (E(y))^2$ | $y'=\log(y)$ |
| $\sigma^2 \propto (E(y))^3$ | $y' = y^{-1/2}$ (reciprocal square root) |
| $\sigma^2 \propto (E(y))^4$ | $y' = y^{-1}$ (reciprocal) |

The optimal power transformation is found via the Box-Cox test, where -1.0 is a reciprocal, -0.5 is a reciprocal square root, 0.0 is a log transformation, 0.5 is a square root transform, and 1.0 is no transform. Note that when you have no predictor/causal/supporting input series, the model is $Y_t=u +a_t$ and that there are no requirements made about the distribution of $Y$ BUT there are about $a_t$ , the error process. In this case the distributional requirements about $a_t$ pass directly on to $Y_t$ . When you have supporting series, such as in a regression or in an Autoregressive–moving-average model with exogenous inputs ( ARMAX model ), the distributional assumptions are all about $a_t$ and have nothing whatsoever to do with the distribution of $Y_t$ . Thus in the case of an ARIMA model or an ARMAX model one would never assume any transformation on $Y$ before finding the optimal Box-Cox transformation, which would then suggest the remedy (transformation) for $Y$ . In earlier times some analysts would transform both $Y$ and $X$ in a presumptive way just to be able to reflect upon the percent change in $Y$ as a result of the percent change in $X$ by examining the regression coefficient between $\log Y$ and $\log X$ . In summary, transformations are like drugs: some are good and some are bad for you! They should only be used when necessary and then with caution. | {
"source": [
"https://stats.stackexchange.com/questions/18844",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/4426/"
]
} |
18,848 | I have come across these two terms which are used interchangeably in many contexts. Basically, a moderator (M) is a factor that impacts on the relationship between X and Y. Moderation analysis is usually done using a regression model. For example, gender (M) can affect the relationship between "product research" (X) and "product purchase" (Y). In interaction, X1 and X2 interact to influence Y. The same example here is that "product research" (X1) is affected by "gender" (X2) and together they affect "product purchase" (Y). I can see that in moderation, M affects the X-Y relationship but in interaction, M (which is gender in this case) affects the other IV. Question : If the aim of my project is to see how gender affects the relationship between X and Y, should I use moderation or interaction? Note: My project is about the correlation between X and Y, not causal relationship between X and Y. | You should consider the two terms to be synonymous. Although they are used in slightly different ways, and come from different traditions within statistics ('interaction' is associated more with ANOVA, and 'moderator variable' is more associated with regression), there is no real difference in the underlying meaning. In fact, statistics is littered with synonymous terms that come from different traditions that mean the same thing. Should we call our X variables 'predictor variables', 'explanatory variables', 'factors', 'covariates', etc.? Does it matter? (No, not really.) The way to think about what an interaction is, is that if you were to explain your findings to someone you would use the word 'depends'. I will make up a story using your variables (I have no way of knowing if this is accurate or even plausible): Lets say someone asks you, "if people research a product, do they purchase it?" You might respond, "Well, it depends. For men, if they research a product, they typically end up buying one, but women enjoy looking at and thinking about products for its own sake; often, a woman will research a product, but have no intention of buying it. So, the relationship between researching a product and buying that product depends on sex." In this story, there is an interaction between product research and sex, or sex moderates the relationship between research and purchasing. (Again, I don't know if this story is remotely correct, and I hope no one is offended by it. I only use men and women because that's in the question. I don't mean to push any stereotypes.) | {
"source": [
"https://stats.stackexchange.com/questions/18848",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/6096/"
]
} |
18,887 | Wikipedia has a Fisher transform of the Spearman rank correlation to an approximate z-score. Perhaps that z-score is the difference from null hypothesis (rank correlation 0)? This page has the following example: 4, 10, 3, 1, 9, 2, 6, 7, 8, 5
5, 8, 6, 2, 10, 3, 9, 4, 7, 1
rank correlation 0.684848
"95% CI for rho (Fisher's z transformed)= 0.097085 to 0.918443" How do they use the Fisher transform to get the 95% confidence interval? | In a nutshell, a 95% confidence interval is given by $$\tanh(\operatorname{arctanh}r\pm1.96/\sqrt{n-3}),$$
where $r$ is the estimate of the correlation and $n$ is the sample size. Explanation: The Fisher transformation is arctanh. On the transformed scale, the sampling distribution of the estimate is approximately normal, so a 95% CI is found by taking the transformed estimate and adding and subtracting 1.96 times its standard error. The standard error is (approximately) $1/\sqrt{n-3}$. EDIT : The example above in Python:
import math
r = 0.684848
num = 10
stderr = 1.0 / math.sqrt(num - 3)
delta = 1.96 * stderr
lower = math.tanh(math.atanh(r) - delta)
upper = math.tanh(math.atanh(r) + delta)
print("lower %.6f upper %.6f" % (lower, upper))
gives lower 0.097071 upper 0.918445 which agrees with your example to 4 decimal places. | {
"source": [
"https://stats.stackexchange.com/questions/18887",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/2849/"
]
} |
18,891 | What's the similarities and differences between these 3 methods: Bagging, Boosting, Stacking? Which is the best one? And why? Can you give me an example for each? | All three are so-called "meta-algorithms": approaches to combine several machine learning techniques into one predictive model in order to decrease the variance ( bagging ), bias ( boosting ) or improving the predictive force ( stacking alias ensemble ). Every algorithm consists of two steps: Producing a distribution of simple ML models on subsets of the original data. Combining the distribution into one "aggregated" model. Here is a short description of all three methods: Bagging (stands for B ootstrap Agg regat ing ) is a way to decrease the variance of your prediction by generating additional data for training from your original dataset using combinations with repetitions to produce multisets of the same cardinality/size as your original data. By increasing the size of your training set you can't improve the model predictive force, but just decrease the variance, narrowly tuning the prediction to expected outcome. Boosting is a two-step approach, where one first uses subsets of the original data to produce a series of averagely performing models and then "boosts" their performance by combining them together using a particular cost function (=majority vote). Unlike bagging, in the classical boosting the subset creation is not random and depends upon the performance of the previous models: every new subsets contains the elements that were (likely to be) misclassified by previous models. Stacking is a similar to boosting: you also apply several models to your original data. The difference here is, however, that you don't have just an empirical formula for your weight function, rather you introduce a meta-level and use another model/approach to estimate the input together with outputs of every model to estimate the weights or, in other words, to determine what models perform well and what badly given these input data. Here is a comparison table: As you see, these all are different approaches to combine several models into a better one, and there is no single winner here: everything depends upon your domain and what you're going to do. You can still treat stacking as a sort of more advances boosting , however, the difficulty of finding a good approach for your meta-level makes it difficult to apply this approach in practice. Short examples of each: Bagging : Ozone data . Boosting : is used to improve optical character recognition (OCR) accuracy. Stacking : is used in classification of cancer microarrays in medicine. | {
"source": [
"https://stats.stackexchange.com/questions/18891",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/7588/"
]
} |
19,048 | I found this confusing when I use the neural network toolbox in Matlab. It divided the raw data set into three parts: training set validation set test set I notice in many training or learning algorithm, the data is often divided into 2 parts, the training set and the test set. My questions are: what is the difference between validation set and test set? Is the validation set really specific to neural network? Or it is optional. To go further, is there a difference between validation and testing in context of machine learning? | Typically to perform supervised learning, you need two types of data sets: In one dataset (your "gold standard"), you have the input data together with correct/expected output; This dataset is usually duly prepared either by humans or by collecting some data in a semi-automated way. But you must have the expected output for every data row here because you need this for supervised learning. The data you are going to apply your model to. In many cases, this is the data in which you are interested in the output of your model, and thus you don't have any "expected" output here yet. While performing machine learning, you do the following: Training phase: you present your data from your "gold standard" and train your model, by pairing the input with the expected output. Validation/Test phase: in order to estimate how well your model has been trained (that is dependent upon the size of your data, the value you would like to predict, input, etc) and to estimate model properties (mean error for numeric predictors, classification errors for classifiers, recall and precision for IR-models etc.) Application phase: now, you apply your freshly-developed model to the real-world data and get the results. Since you usually don't have any reference value in this type of data (otherwise, why would you need your model?), you can only speculate about the quality of your model output using the results of your validation phase. The validation phase is often split into two parts : In the first part, you just look at your models and select the best performing approach using the validation data (=validation) Then you estimate the accuracy of the selected approach (=test). Hence the separation to 50/25/25. In case if you don't need to choose an appropriate model from several rivaling approaches, you can just re-partition your set that you basically have only training set and test set, without performing the validation of your trained model. I personally partition them 70/30 then. See also this question . | {
"source": [
"https://stats.stackexchange.com/questions/19048",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/7259/"
]
} |
19,103 | I have two time series, shown in the plot below: The plot is showing the full detail of both time series, but I can easily reduce it to just the coincident observations if needed. My question is: What statistical methods can I use to assess the differences between the time series? I know this is a fairly broad and vague question, but I can't seem to find much introductory material on this anywhere. As I can see it, there are two distinct things to assess: 1. Are the values the same? 2. Are the trends the same? What sort of statistical tests would you suggest looking at to assess these questions? For question 1 I can obviously assess the means of the different datasets and look for significant differences in distributions, but is there a way of doing this that takes into account the time-series nature of the data? For question 2 - is there something like the Mann-Kendall tests that looks for the similarity between two trends? I could do the Mann-Kendall test for both datasets and compare, but I don't know if that is a valid way to do things, or whether there is a better way? I'm doing all of this in R, so if tests you suggest have a R package then please let me know. | As others have stated, you need to have a common frequency of measurement (i.e. the time between observations). With that in place I would identify a common model that would reasonably describe each series separately. This might be an ARIMA model or a multiply-trended Regression Model with possible Level Shifts or a composite model integrating both memory (ARIMA) and dummy variables. This common model could be estimated globally and separately for each of the two series and then one could construct an F test to test the hypothesis of a common set of parameters. | {
"source": [
"https://stats.stackexchange.com/questions/19103",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/261/"
]
} |
19,115 | The Ubuntu Software Center uses a 1 to 5 star rating system for its App Reviews. However, it's current ratings sorting algorithm looks very fishy . I believe this is a different question from this one , which seems to assume the mean of the ratings is a meaningful number. If I assume that user supplied ratings are ordinal data, then doing things like adding the ratings together or taking their mean are not correct. The primary method of sorting must be the sample median. Unfortunately, this leaves me with a lot of duplicates since 30 applications are being pigeon-holed into 5 stars of ratings, so it must be possible to further subsort the apps with identical median stars. I believe I want: Rating at the median should be better than rating below it. {2,3,3,3,4} > {2,2,3,3,4} Similarly, rating at the median should be worse than rating above it. {2,2,3,3,4} < {2,2,3,4,4} Ratings above and below the median should be equivalent. {1,1,3,4,4} = {2,2,3,4,4} = {2,2,3,5,5} Among two apps with the same median, higher confidence that the median is at least that large should rank higher. {2,3,3,3,4} > {2,3,4} Are these reasonable desires? What algorithm can get me there? My intuition tells me I want something like sample median + {lower bound probability estimate someone ranks that app higher than median} - {upper bound probability estimate someone ranks that app lower than median}. So: a large data set composed of 20 % 1's, 40% 3's, and 40% 4's would approach (3)+(2/5)-(1/5) = 3.2 a set composed of equal parts 2,3, and 5 would approach (3)+(1/3)-(1/3) = 3.0 and a set composed of 40% 1's and 60% 3's would approach (3)+(0)-(2/5) = 2.6. Is this reasonable? | As others have stated, you need to have a common frequency of measurement (i.e. the time between observations). With that in place I would identify a common model that would reasonably describe each series separately. This might be an ARIMA model or a multiply-trended Regression Model with possible Level Shifts or a composite model integrating both memory (ARIMA) and dummy variables. This common model could be estimated globally and separately for each of the two series and then one could construct an F test to test the hypothesis of a common set of parameters. | {
"source": [
"https://stats.stackexchange.com/questions/19115",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/7686/"
]
} |
19,181 | Given the data points $x_1, \ldots, x_n \in \mathbb{R}^d$ and labels $y_1, \ldots, y_n \in \left \{-1, 1 \right\}$, the hard margin SVM primal problem is $$ \text{minimize}_{w, w_0} \quad \frac{1}{2} w^T w $$
$$ \text{s.t.} \quad \forall i: y_i (w^T x_i + w_0) \ge 1$$ which is a quadratic program with $d+1$ variables to be optimized for and $i$ constraints. The dual $$ \text{maximize}_{\alpha} \quad \sum_{i=1}^{n}{\alpha_i} - \frac{1}{2}\sum_{i=1}^{n}{\sum_{j=1}^{n}{y_i y_j \alpha_i \alpha_j x_i^T x_j}}$$
$$ \text{s.t.} \quad \forall i: \alpha_i \ge 0 \land \sum_{i=1}^{n}{y_i \alpha_i} = 0$$
is a quadratic program with $n + 1$ variables to be optimized for and $n$ inequality and $n$ equality constraints. When implementing a hard margin SVM, why would I solve the dual problem instead of the primal problem? The primal problem looks more 'intuitive' to me, and I don't need to concern myself with the duality gap, the Kuhn-Tucker condition etc. It would make sense to me to solve the dual problem if $d \gg n$, but I suspect there are better reasons. Is this the case? | Based on the lecture notes referenced in @user765195's answer (thanks!), the most apparent reasons seem to be: Solving the primal problem, we obtain the optimal $w$ , but know nothing about the $\alpha_i$ . In order to classify a query point $x$ we need to explicitly compute the scalar product $w^Tx$ , which may be expensive if $d$ is large. Solving the dual problem, we obtain the $\alpha_i$ (where $\alpha_i = 0$ for all but a few points - the support vectors). In order to classify a query point $x$ , we calculate $$ w^Tx + w_0 = \left(\sum_{i=1}^{n}{\alpha_i y_i x_i} \right)^T x + w_0 = \sum_{i=1}^{n}{\alpha_i y_i \langle x_i, x \rangle} + w_0 $$ This term is very efficiently calculated if there are only few support vectors. Further, since we now have a scalar product only involving data vectors, we may apply the kernel trick . | {
"source": [
"https://stats.stackexchange.com/questions/19181",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/4916/"
]
} |
19,216 | In what circumstances would you want to, or not want to scale or standardize a variable prior to model fitting? And what are the advantages / disadvantages of scaling a variable? | Standardization is all about the weights of different variables for the model.
If you do the standardisation "only" for the sake of numerical stability, there may be transformations that yield very similar numerical properties but different physical meaning that could be much more appropriate for the interpretation. The same is true for centering, which is usually part of the standardization. Situations where you probably want to standardize: the variables are different physical quantities and the numeric values are on very different scales of magnitude and there is no "external" knowledge that the variables with high (numeric) variation should be considered more important. Situations where you may not want to standardize: if the variables are the same physical quantity, and are (roughly) of the same magnitude, e.g. relative concentrations of different chemical species absorbances at different wavelengths emission intensity (otherwise same measurement conditions) at different wavelengths you definitively do not want to standardize variables that do not change between the samples (baseline channels) - you'd just blow up measurement noise (you may want to exclude them from the model instead) if you have such physically related variables, your measurement noise may be roughly the same for all variables, but the signal intensity varies much more. I.e. variables with low values have higher relative noise. Standardizing would blow up the noise. In other words, you may have to decide whether you want relative or absolute noise to be standardized. There may be physically meaningful values that you can use to relate your measured value to, e.g. instead of transmitted intensity use percent of transmitted intensity (transmittance T). You may do something "in between", and transform the variables or choose the unit so that the new variables still have physical meaning but the variation in the numerical value is not that different, e.g. if you work with mice, use body weight g and length in cm (expected range of variation about 5 for both) instead of the base units kg and m (expected range of variation 0.005 kg and 0.05 m - one order of magnitude different). for the transmittance T above, you may consider using the absorbance $A = -log_{10} T$ Similar for centering: There may be (physically/chemically/biologically/...) meaningful baseline values available (e.g. controls, blinds, etc.) Is the mean actually meaningful? (The average human has one ovary and one testicle) | {
"source": [
"https://stats.stackexchange.com/questions/19216",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1991/"
]
} |
19,222 | I have about 30000 book names assigned to 6 categories, and I want to build scalable and accurate classifiers. So far I have only been able to use Naive Baye's and LibLINEAR classifiers and they both give me an (almost) identical precision and recall values of 0.8 and 0.7 after 10 fold CV. I am wondering if I would be able to do better if I were to use more complex models . The problem is that the time complexity of the sophisticated models seems to increase super-linearly with the number of training instances. SVM (SMO implementation from WEKA), for example, has been running for the past 3hrs already on this data, whereas the Naive Baye's and LibLINEAR finished in about 15mins and 40mins respectively. I am trying to build a general framework for short texts classification (twitter, text messages etc.), and so will be running many experiments over varied data sets. I require techniques that scale and work well (don't we all :-)). Any suggestions? Another question is with regard to dimension reduction. When I pre-process my text, I apply stemming, stopword removal and convert the text to tf-idf vector representation. Dimension reduction techniques (Info gain, in particular) again seems to be taking an inordinately long time. Any scalable way to do feature selection? Would pruning by tf-idf scores an acceptable approach? Edit 1: By "Info Gain", I meant Information Gain . And currently I am not doing any feature selection. | Standardization is all about the weights of different variables for the model.
If you do the standardisation "only" for the sake of numerical stability, there may be transformations that yield very similar numerical properties but different physical meaning that could be much more appropriate for the interpretation. The same is true for centering, which is usually part of the standardization. Situations where you probably want to standardize: the variables are different physical quantities and the numeric values are on very different scales of magnitude and there is no "external" knowledge that the variables with high (numeric) variation should be considered more important. Situations where you may not want to standardize: if the variables are the same physical quantity, and are (roughly) of the same magnitude, e.g. relative concentrations of different chemical species absorbances at different wavelengths emission intensity (otherwise same measurement conditions) at different wavelengths you definitively do not want to standardize variables that do not change between the samples (baseline channels) - you'd just blow up measurement noise (you may want to exclude them from the model instead) if you have such physically related variables, your measurement noise may be roughly the same for all variables, but the signal intensity varies much more. I.e. variables with low values have higher relative noise. Standardizing would blow up the noise. In other words, you may have to decide whether you want relative or absolute noise to be standardized. There may be physically meaningful values that you can use to relate your measured value to, e.g. instead of transmitted intensity use percent of transmitted intensity (transmittance T). You may do something "in between", and transform the variables or choose the unit so that the new variables still have physical meaning but the variation in the numerical value is not that different, e.g. if you work with mice, use body weight g and length in cm (expected range of variation about 5 for both) instead of the base units kg and m (expected range of variation 0.005 kg and 0.05 m - one order of magnitude different). for the transmittance T above, you may consider using the absorbance $A = -log_{10} T$ Similar for centering: There may be (physically/chemically/biologically/...) meaningful baseline values available (e.g. controls, blinds, etc.) Is the mean actually meaningful? (The average human has one ovary and one testicle) | {
"source": [
"https://stats.stackexchange.com/questions/19222",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/6003/"
]
} |
19,224 | I'm kind of new to datamining/machine learning/etc. and have been reading about a couple ways to combine multiple models and runs of the same model to improve predictions. My impression from reading a couple papers (which are often interesting and great on theory and greek letters but short on code and actual examples) is that it's supposed to go like this: I take a model ( knn , RF , etc) and get a list of classifiers between 0 and 1. My question is how to do combine each of these lists of classifiers? Do I run the same models on my training set so that the number of columns going into the final model are the same or is there some other trick? It would be great if any suggestions/examples included R code. NOTE: This is for a data set w/ 100k lines in the training set and 70k in the test set and 10 columns. | It actually boils down to one of the "3B" techniques: bagging, boosting or blending. In bagging, you train a lot of classifiers on different subsets of object and combine answers by average for regression and voting for classification (there are some other options for more complex situations, but I'll skip it). Vote proportion/variance can be interpreted as error approximation since the individual classifiers are usually considered independent. RF is in fact a bagging ensemble. Boosting is a wider family of methods, however their main point is that you build next classifier on the residuals of the former, this way (in theory) gradually increasing accuracy by highlighting more and more subtle interactions. The predictions are thus usually combined by summing them up, something like calculating a value of a function in x by summing values of its Taylor series' elements for x. Most popular versions are (Stochastic) Gradient Boosting (with nice mathematical foundation) and AdaBoost (well known, in fact a specific case of GB). From a holistic perspective, decision tree is a boosting of trivial pivot classifiers. Blending is an idea of nesting classifiers, i.e. running one classifier on an information system made of predictions of other classifiers. As so, it is a very variable method and certainly not a defined algorithm; may require a lot of objects (in most cases the "blender" classifier must be trained on a set of objects which were not used to build the partial classifiers to avoid embarrassing overfit). The predictions of partial classifiers are obviously combined by melding them into an information system which is predicted by the blender. | {
"source": [
"https://stats.stackexchange.com/questions/19224",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/7411/"
]
} |
19,681 | This is a followup question to what Frank Harrell wrote here : In my experience the required sample size for the t distribution to be
accurate is often larger than the sample size at hand. The Wilcoxon
signed-rank test is extremely efficient as you said, and it is robust,
so I almost always prefer it over the t test. If I understand it correctly - when comparing the location of two unmatched samples, we would prefer to use the Wilcoxon rank-sum test over the unpaired t-test, if our sample sizes are small. Is there a theoretical situation where we would prefer the Wilcoxon rank-sum test over the unpaired t-test, even though the sample sizes of our two groups are relatively large? My motivation for this question stems from the observation that for a single-sample t-test, using it for a not-so-small sample from a skewed distribution will yield a wrong type I error:
n1 <- 100
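# simulate many one-sample t tests on exponential data (true mean = mean1) and record each p-value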
mean1 <- 50
R <- 100000
P_y1 <- numeric(R)
for(i in seq_len(R))
{
y1 <- rexp(n1, 1/mean1)
P_y1[i] <- t.test(y1 , mu = mean1)$p.value
}
sum(P_y1<.05) / R # for n1=n2=100 -> 0.0572 # "wrong" type I error | Yes, there is. For example, any sampling from distributions with infinite variance will wreck the t-test, but not the Wilcoxon. Referring to Nonparametric Statistical Methods (Hollander and Wolfe), I see that the asymptotic relative efficiency (ARE) of the Wilcoxon relative to the t test is 1.0 for the Uniform distribution, 1.097 (i.e., Wilcoxon is better) for the Logistic, 1.5 for the double Exponential (Laplace), and 3.0 for the Exponential. Hodges and Lehmann showed that the minimum ARE of the Wilcoxon relative to any other test is 0.864, so you can never lose more than about 14% efficiency using it relative to anything else. (Of course, this is an asymptotic result.) Consequently, Frank Harrell's use of the Wilcoxon as a default should probably be adopted by almost everyone, including myself. Edit: Responding to the followup question in comments, for those who prefer confidence intervals, the Hodges-Lehmann estimator is the estimator that "corresponds" to the Wilcoxon test, and confidence intervals can be constructed around that. | {
"source": [
"https://stats.stackexchange.com/questions/19681",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/253/"
]
} |
19,715 | I understand that a stationary time series is one whose mean and variance are constant over time. Can someone please explain why we have to make sure our data set is stationary before we can run different ARIMA or ARMA models on it? Does this also apply to normal regression models where autocorrelation and/or time is not a factor? | Stationarity is one type of dependence structure. Suppose we have data $X_1,...,X_n$ . The most basic assumption is that $X_i$ are independent, i.e. we have a sample. Independence is a nice property, since using it we can derive a lot of useful results. The problem is that sometimes (or frequently, depending on the view) this property does not hold. Now independence is a unique property: two random variables can be independent only in one way, but they can be dependent in various ways. So stationarity is one way of modeling the dependence structure. It turns out that a lot of the nice results which hold for independent random variables (the law of large numbers and the central limit theorem, to name a few) hold for stationary random variables (we should strictly say sequences). And of course it turns out that a lot of data can be considered stationary, so the concept of stationarity is very important in modeling non-independent data. When we have determined that we have stationarity, naturally we want to model it. This is where ARMA (AutoRegressive Moving Average) models come in. It turns out that any stationary data can be approximated with
a stationary ARMA model, thanks to the Wold decomposition theorem . So that is why ARMA models are very popular and that is why we need to make sure that the series is stationary before using these models. Now again the same story holds as with independence and dependence. Stationarity is defined uniquely, i.e. data is either stationary or not, so there is only one way for data to be stationary, but lots of ways for it to be non-stationary. Again it turns out that a lot of data becomes stationary after a certain transformation. The ARIMA (AutoRegressive Integrated Moving Average) model is one model for non-stationarity. It assumes that the data becomes stationary after differencing. In the regression context, stationarity is important since the same results which apply for independent data hold if the data is stationary. | {
"source": [
"https://stats.stackexchange.com/questions/19715",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/7959/"
]
} |
19,948 | If $X$ is distributed $N(\mu_X, \sigma^2_X)$,
$Y$ is distributed $N(\mu_Y, \sigma^2_Y)$
and $Z = X + Y$, I know that $Z$ is distributed $N(\mu_X + \mu_Y, \sigma^2_X + \sigma^2_Y)$ if X and Y are independent. But what would happen if X and Y were not independent, i.e.
$(X, Y) \sim N\big(
(\begin{smallmatrix}
\mu_X\\\mu_Y
\end{smallmatrix})
,
(\begin{smallmatrix}
\sigma^2_X & \sigma_{X,Y}\\
\sigma_{X,Y} & \sigma^2_Y
\end{smallmatrix})
\big)
$ Would this affect how the sum $Z$ is distributed? | See my comment on probabilityislogic's answer to this question . Here,
$$
\begin{align*}
X + Y &\sim N(\mu_X + \mu_Y,\; \sigma_X^2 + \sigma_Y^2 + 2\sigma_{X,Y})\\
aX + bY &\sim N(a\mu_X + b\mu_Y,\; a^2\sigma_X^2 + b^2\sigma_Y^2 + 2ab\sigma_{X,Y})
\end{align*}
$$
where $\sigma_{X,Y}$ is the covariance of $X$ and $Y$. Nobody writes the off-diagonal entries in the covariance matrix as $\sigma_{xy}^2$ as you have
done. The off-diagonal entries are covariances which
can be negative. | {
"source": [
"https://stats.stackexchange.com/questions/19948",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/8067/"
]
} |
20,002 | I was always under the impression that regression is just a more general form of ANOVA and that the results would be identical. Recently, however, I have run both a regression and an ANOVA on the same data and the results differ significantly. That is, in the regression model both main effects and the interaction are significant, while in the ANOVA one main effect is not significant. I expect this has something to do with the interaction, but it's not clear to me what is different about these two ways of modeling the same question. If it's important, one predictor is categorical and the other is continuous, as indicated in the simulation below. Here is an example of what my data looks like and what analyses I'm running, but without the same p-values or effects being significant in the results (my actual results are outlined above): group<-c(1,1,1,0,0,0)
moderator<-c(1,2,3,4,5,6)
score<-c(6,3,8,5,7,4)
summary(lm(score~group*moderator))
summary(aov(score~group*moderator)) | The summary function calls different methods depending on the class of the object. The difference isn't in the aov vs lm , but in the information presented about the models. For example, if you used anova(mod1) and anova(mod2) instead, you should get the same results. As @Glen says, the key is whether the tests reported are based on Type 1 or Type 3 sums of squares. These will differ when the correlation between your explanatory variables is not exactly 0. When they are correlated, some SS are unique to one predictor and some to the other, but some SS could be attributed to either or both. ( You can visualize this by imagining the MasterCard symbol --there's a small region of overlap in the center.) There is no unique answer in this situation, and unfortunately, this is the norm for non-experimental data. One approach is for the analyst to use their judgment and assign the overlapping SS to one of the variables. That variable goes into the model first. The other variable goes into the model second and gets the SS that looks like a cookie with a bite taken out of it. It's effect can be tested by what is sometimes called $R^2$ change or F change. This approach uses Type 1 SS. Alternatively, you could do this twice with each going in first, and report the F change test for both predictors. In this way, neither variable gets the SS due to the overlap. This approach uses Type 3 SS. (I should also tell you that the latter approach is held in low regard.) Following the suggestion of @BrettMagill in the comment below, I can try to make this a little clearer. (Note that, in my example, I'm using just 2 predictors and no interaction, but this idea can be scaled up to include whatever you like.) Type 1: SS(A) and SS(B|A) Type 3: SS(A|B) and SS(B|A) | {
"source": [
"https://stats.stackexchange.com/questions/20002",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/8102/"
]
} |
20,010 | Suppose we have someone building a predictive model, but that someone is not necessarily well-versed in proper statistical or machine learning principles. Maybe we are helping that person as they are learning, or maybe that person is using some sort of software package that requires minimal knowledge to use. Now this person might very well recognize that the real test comes from accuracy (or whatever other metric) on out-of-sample data. However, my concern is that there are a lot of subtleties there to worry about. In the simple case, they build their model and evaluate it on training data and evaluate it on held-out testing data. Unfortunately it can sometimes be all too easy at that point to go back and tweak some modeling parameter and check the results on that same "testing" data. At this point that data is no longer true out-of-sample data though, and overfitting can become a problem. One potential way to resolve this problem would be to suggest creating many out-of-sample datasets such that each testing dataset can be discarded after use and not reused at all. This requires a lot of data management though, especially that the splitting must be done before the analysis (so you would need to know how many splits beforehand). Perhaps a more conventional approach is k-fold cross validation. However, in some sense that loses the distinction between a "training" and "testing" dataset that I think can be useful, especially to those still learning. Also I'm not convinced this makes sense for all types of predictive models. Is there some way that I've overlooked to help overcome the problem of overfitting and testing leakage while still remaining somewhat clear to an inexperienced user? | You are right, this is a significant problem in machine learning/statistical modelling. Essentially the only way to really solve this problem is to retain an independent test set and keep it held out until the study is complete and use it for final validation. However, inevitably people will look at the results on the test set and then change their model accordingly; however this won't necessarily result in an improvement in generalisation performance as the difference in performance of different models may be largely due to the particular sample of test data that we have. In this case, in making a choice we are effectively over-fitting the test error. The way to limit this is to make the variance of the test error as small as possible (i.e. the variability in test error we would see if we used different samples of data as the test set, drawn from the same underlying distribution). This is most easily achieved using a large test set if that is possible, or e.g. bootstrapping or cross-validation if there isn't much data available. I have found that this sort of over-fitting in model selection is a lot more troublesome than is generally appreciated, especially with regard to performance estimation, see G. C. Cawley and N. L. C. Talbot, Over-fitting in model selection and subsequent selection bias in performance evaluation, Journal of Machine Learning Research, 2010. Research, vol. 11, pp. 2079-2107, July 2010 (www) This sort of problem especially affects the use of benchmark datasets, which have been used in many studies, and each new study is implicitly affected by the results of earlier studies, so the observed performance is likely to be an over-optimistic estimate of the true performance of the method. 
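(A toy sketch of my own, not from the original answer, to make the selection bias concrete: with pure-noise data, picking the "best" of many useless classifiers by repeatedly consulting the same test set makes the winning test accuracy look far better than chance, while an untouched test set shows no skill.)
set.seed(42)
n_test <- 50; n_models <- 200
y_test  <- rbinom(n_test, 1, 0.5)                     # labels are pure noise
y_fresh <- rbinom(n_test, 1, 0.5)                     # a second, untouched test set
preds <- matrix(rbinom(n_test * n_models, 1, 0.5),    # 200 "models" guessing at random
                n_test, n_models)
acc <- colMeans(preds == y_test)
best <- which.max(acc)                                # "tuning" by peeking at the test set
acc[best]                                             # typically well above 0.5 ...
mean(preds[, best] == y_fresh)                        # ... but about 0.5 on unseen data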
The way I try to get around this is to look at many datasets (so the method isn't tuned to one specific dataset) and also use multiple random test/training splits for performance estimation (to reduce the variance of the estimate). However the results still need the caveat that these benchmarks have been over-fit. Another example where this does occur is in machine learning competitions with a leader-board based on a validation set. Inevitably some competitors keep tinkering with their model to get further up the leader board, but then end up towards the bottom of the final rankings. The reason for this is that their multiple choices have over-fitted the validation set (effectively learning the random variations in the small validation set). If you can't keep a statistically pure test set, then I'm afraid the two best options are (i) collect some new data to make a new statistically pure test set or (ii) make the caveat that the new model was based on a choice made after observing the test set error, so the performance estimate is likely to have an optimistic bias. | {
"source": [
"https://stats.stackexchange.com/questions/20010",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/2485/"
]
} |
20,011 | More recently, I read two articles. Speed's article is about the history of the correlation, and the article by Reshef, et al. is about a new method called maximal information coefficient (MIC). I need your help to understand the MIC method to estimate non-linear correlations between variables. Moreover, instructions for MIC's use in R can be found on the author's website (under Downloads): I hope this will be a good platform to discuss and understand this method. My interest is in the intuition behind this method and how it can be extended in the way the author said: ...we need extensions of $\text{MIC}(X,Y)$ to $\text{MIC}(X,Y|Z)$ . We will want to know how much data are needed to get stable estimates of MIC, how susceptible it is to outliers, what three- or higher-dimensional relationships it will miss, and more. MIC is a great step forward, but there are many more steps to take. Citations Speed, T. (2011). A Correlation for the 21st Century . Science , 334(6062), 1502–1503. Reshef, D. N., et al. (2011). Detecting Novel Associations in Large Data Sets . Science , 334(6062), 1518–1524. | Is it not telling that this was published in a non-statistical journal whose statistical peer review we are unsure of? This problem was solved by Hoeffding in 1948 (Annals of Mathematical Statistics 19:546) who developed a straightforward algorithm requiring no binning nor multiple steps. Hoeffding's work was not even referenced in the Science article. This has been in the R hoeffd function in the Hmisc package for many years. Here's an example (type example(hoeffd) in R): # Hoeffding's test can detect even one-to-many dependency
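library(Hmisc)   # hoeffd() used in this example comes from the Hmisc package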
set.seed(1)
x <- seq(-10,10,length=200)
y <- x*sign(runif(200,-1,1))
plot(x,y) # an X
hoeffd(x,y) # also accepts a numeric matrix
D
x y
x 1.00 0.06
y 0.06 1.00
n= 200
P
x y
x 0 # P-value is very small
y 0 hoeffd uses a fairly efficient Fortran implementation of Hoeffding's method. The basic idea of his test is to consider the difference between joint ranks of X and Y and the product of the marginal rank of X and the marginal rank of Y, suitably scaled. Update I have since been corresponding with the authors (who are very nice by the way, and are open to other ideas and are continuing to research their methods). They originally had the Hoeffding reference in their manuscript but cut it (with regrets, now) for lack of space. While Hoeffding's $D$ test seems to perform well for detecting dependence in their examples, it does not provide an index that meets their criteria of ordering degrees of dependence the way the human eye is able to. In an upcoming release of the R Hmisc package I've added two additional outputs related to $D$, namely the mean and max $|F(x,y) - G(x)H(y)|$ which are useful measures of dependence. However these measures, like $D$, do not have the property that the creators of MIC were seeking. | {
"source": [
"https://stats.stackexchange.com/questions/20011",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/6723/"
]
} |
20,101 | I compared ?prcomp and ?princomp and found something about Q-mode and R-mode principal component analysis (PCA). But honestly – I don't understand it. Can anybody explain the difference and maybe even explain when to apply which? | The difference between them is nothing to do with the type of PCA they perform, just the method they use. As the help page for prcomp says: The calculation is done by a singular value decomposition of the (centered and possibly scaled) data matrix, not by using eigen on the covariance matrix. This is generally the preferred method for numerical accuracy. On the other hand, the princomp help page says: The calculation is done using eigen on the correlation or covariance matrix, as determined by cor . This is done for compatibility with the S-PLUS result. A preferred method of calculation is to use svd on x , as is done in prcomp ." So, prcomp is preferred , although in practice you are unlikely to see much difference (for example, if you run the examples on the help pages you should get identical results). | {
"source": [
"https://stats.stackexchange.com/questions/20101",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/704/"
]
} |
20,217 | There are several popular resampling techniques, which are often used in practice, such as bootstrapping, permutation test, jackknife, etc. There are numerous articles & books discuss these techniques, for example Philip I Good (2010) Permutation, Parametric, and Bootstrap Tests of Hypotheses My question is which resampling technique has gained the more popularity and easier to implement? Bootstrapping or permutation tests? | Both are popular and useful, but primarily for different uses. The permutation test is best for testing hypotheses and bootstrapping is best for estimating confidence intervals. Permutation tests test a specific null hypothesis of exchangeability, i.e. that only the random sampling/randomization explains the difference seen. This is the common case for things like t-tests and ANOVA. It can also be expanded to things like time series (null hypothesis that there is no serial correlation) or regression (null hypothesis of no relationship). Permutation tests can be used to create confidence intervals, but it requires many more assumptions, that may or may not be reasonable (so other methods are preferred). The Mann-Whitney/Wilcoxon test is actually a special case of a permutation test, so they are much more popular than some realize. The bootstrap estimates the variability of the sampling process and works well for estimating confidence intervals. You can do a test of hypothesis this way but it tends to be less powerful than the permutation test for cases that the permutation test assumptions hold. | {
"source": [
"https://stats.stackexchange.com/questions/20217",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/4559/"
]
} |
20,227 | I would like to understand why, under the OLS model, the RSS (residual sum of squares) is distributed $$\chi^2\cdot (n-p)$$ ($p$ being the number of parameters in the model, $n$ the number of observations). I apologize for asking such a basic question, but I seem to not be able to find the answer online (or in my, more application oriented, textbooks). | I consider the following linear model: ${y} = X \beta + \epsilon$. The vector of residuals is estimated by $$\hat{\epsilon} = y - X \hat{\beta}
= (I - X (X'X)^{-1} X') y
= Q y
= Q (X \beta + \epsilon) = Q \epsilon$$ where $Q = I - X (X'X)^{-1} X'$. Observe that $\textrm{tr}(Q) = n - p$ (the trace is invariant under cyclic permutation) and that $Q'=Q=Q^2$. The eigenvalues of $Q$ are therefore $0$ and $1$ (some details below). Hence, there exists a unitary matrix $V$ such that ( matrices are diagonalizable by unitary matrices if and only if they are normal. ) $$V'QV = \Delta = \textrm{diag}(\underbrace{1, \ldots, 1}_{n-p \textrm{ times}}, \underbrace{0, \ldots, 0}_{p \textrm{ times}})$$ Now, let $K = V' \hat{\epsilon}$. Since $\hat{\epsilon} \sim N(0, \sigma^2 Q)$, we have $K \sim N(0, \sigma^2 \Delta)$ and therefore $K_{n-p+1}=\ldots=K_n=0$. Thus $$\frac{\|K\|^2}{\sigma^2} = \frac{\|K^{\star}\|^2}{\sigma^2} \sim \chi^2_{n-p}$$ with $K^{\star} = (K_1, \ldots, K_{n-p})'$. Further, as $V$ is a unitary matrix, we also have $$\|\hat{\epsilon}\|^2 = \|K\|^2=\|K^{\star}\|^2$$ Thus $$\frac{\textrm{RSS}}{\sigma^2} \sim \chi^2_{n-p}$$ Finally, observe that this result implies that $$E\left(\frac{\textrm{RSS}}{n-p}\right) = \sigma^2$$ Since $Q^2 - Q =0$, the minimal polynomial of $Q$ divides the polynomial $z^2 - z$. So, the eigenvalues of $Q$ are among $0$ and $1$. Since $\textrm{tr}(Q) = n-p$ is also the sum of the eigenvalues multiplied by their multiplicity, we necessarily have that $1$ is an eigenvalue with multiplicity $n-p$ and zero is an eigenvalue with multiplicity $p$. | {
"source": [
"https://stats.stackexchange.com/questions/20227",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/253/"
]
} |
20,295 | The holiday season has given me the opportunity to curl up next to the fire with The Elements of Statistical Learning . Coming from a (frequentist) econometrics perspective, I'm having trouble grasping the uses of shrinkage methods like ridge regression, lasso, and least angle regression (LAR). Typically, I'm interested in the parameter estimates themselves and in achieving unbiasedness or at least consistency. Shrinkage methods don't do that. It seems to me that these methods are used when the statistician is worried that the regression function becomes too responsive to the predictors, that it considers the predictors to be more important (measured by the magnitude of the coefficients) than they actually are. In other words, overfitting. But, OLS typically provides unbiased and consistent estimates.(footnote) I've always viewed the problem of overfitting not of giving estimates that are too big, but rather confidence intervals that are too small because the selection process isn't taken into account (ESL mentions this latter point). Unbiased/consistent coefficient estimates lead to unbiased/consistent predictions of the outcome. Shrinkage methods push predictions closer to the mean outcome than OLS would, seemingly leaving information on the table. To reiterate, I don't see what problem the shrinkage methods are trying to solve. Am I missing something? Footnote: We need the full column rank condition for identification of the coefficients. The exogeneity/zero conditional mean assumption for the errors and the linear conditional expectation assumption determine the interpretation that we can give to the coefficients, but we get an unbiased or consistent estimate of something even if these assumptions aren't true. | I suspect you want a deeper answer, and I'll have to let someone else provide that, but I can give you some thoughts on ridge regression from a loose, conceptual perspective. OLS regression yields parameter estimates that are unbiased (i.e., if such samples are gathered and parameters are estimated indefinitely, the sampling distribution of parameter estimates will be centered on the true value). Moreover, the sampling distribution will have the lowest variance of all possible unbiased estimates (this means that, on average, an OLS parameter estimate will be closer to the true value than an estimate from some other unbiased estimation procedure will be). This is old news (and I apologize, I know you know this well), however, the fact that the variance is lower does not mean that it is terribly low . Under some circumstances, the variance of the sampling distribution can be so large as to make the OLS estimator essentially worthless. (One situation where this could occur is when there is a high degree of multicollinearity.) What is one to do in such a situation? Well, a different estimator could be found that has lower variance (although, obviously, it must be biased, given what was stipulated above). That is, we are trading off unbiasedness for lower variance. For example, we get parameter estimates that are likely to be substantially closer to the true value, albeit probably a little below the true value. Whether this tradeoff is worthwhile is a judgment the analyst must make when confronted with this situation. At any rate, ridge regression is just such a technique. The following (completely fabricated) figure is intended to illustrate these ideas. This provides a short, simple, conceptual introduction to ridge regression. 
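(A small simulation sketch of my own, not from the original answer, with an arbitrarily chosen penalty lambda: under strong collinearity the ridge coefficient estimate is slightly biased but varies far less over repeated samples than the OLS estimate.)
set.seed(1)
lambda <- 2; B <- 2000
ols <- ridge <- numeric(B)
for (b in 1:B) {
  x1 <- rnorm(50); x2 <- x1 + rnorm(50, sd = 0.1)     # nearly collinear predictors
  y  <- x1 + x2 + rnorm(50)
  X  <- cbind(x1, x2)
  ols[b]   <- coef(lm(y ~ X))[2]                      # OLS estimate of the x1 coefficient
  ridge[b] <- solve(crossprod(X) + lambda * diag(2), crossprod(X, y))[1]
}
c(mean(ols),   var(ols))     # centred near the true value 1, but highly variable
c(mean(ridge), var(ridge))   # slightly shrunk (biased), with far smaller variance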
I know less about lasso and LAR, but I believe the same ideas could be applied. More information about the lasso and least angle regression can be found here , the "simple explanation..." link is especially helpful. This provides much more information about shrinkage methods. I hope this is of some value. | {
"source": [
"https://stats.stackexchange.com/questions/20295",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/401/"
]
} |
20,429 | I'm just getting my feet wet in statistics so I'm sorry if this question does not make sense. I have used Markov models to predict hidden states (unfair casinos, dice rolls, etc.) and neural networks to study users clicks on a search engine. Both had hidden states that we were trying to figure out using observations. To my understanding they both predict hidden states, so I'm wondering when would one use Markov models over neural networks? Are they just different approaches to similar problems? (I'm interested in learning but I also have another motivation, I have a problem that I'm trying to solve using hidden Markov models but its driving me bonkers so I was interested in seeing if I can switch to using something else.) | What is hidden and what is observed The thing that is hidden in a hidden Markov model is the same as the thing that is hidden in a discrete mixture model, so for clarity, forget about the hidden state's dynamics and stick with a finite mixture model as an example. The 'state' in this model is the identity of the component that caused each observation. In this class of model such causes are never observed, so 'hidden cause' is translated statistically into the claim that the observed data have marginal dependencies which are removed when the source component is known. And the source components are estimated to be whatever makes this statistical relationship true. The thing that is hidden in a feedforward multilayer neural network with sigmoid middle units is the states of those units, not the outputs which are the target of inference. When the output of the network is a classification, i.e., a probability distribution over possible output categories, these hidden units values define a space within which categories are separable. The trick in learning such a model is to make a hidden space (by adjusting the mapping out of the input units) within which the problem is linear. Consequently, non-linear decision boundaries are possible from the system as a whole. Generative versus discriminative The mixture model (and HMM) is a model of the data generating process, sometimes called a likelihood or 'forward model'. When coupled with some assumptions about the prior probabilities of each state you can infer a distribution over possible values of the hidden state using Bayes theorem (a generative approach). Note that, while called a 'prior', both the prior and the parameters in the likelihood are usually learned from data. In contrast to the mixture model (and HMM) the neural network learns a posterior distribution over the output categories directly (a discriminative approach). This is possible because the output values were observed during estimation. And since they were observed, it is not necessary to construct a posterior distribution from a prior and a specific model for the likelihood such as a mixture. The posterior is learnt directly from data, which is more efficient and less model dependent. Mix and match To make things more confusing, these approaches can be mixed together, e.g. when mixture model (or HMM) state is sometimes actually observed. When that is true, and in some other circumstances not relevant here, it is possible to train discriminatively in an otherwise generative model. Similarly it is possible to replace the mixture model mapping of an HMM with a more flexible forward model, e.g., a neural network. The questions So it's not quite true that both models predict hidden state. 
HMMs can be used to predict hidden state, albeit only of the kind that the forward model is expecting. Neural networks can be used to predict a not yet observed state, e.g. future states for which predictors are available. This sort of state is not hidden in principle, it just hasn't been observed yet. When would you use one rather than the other? Well, neural networks make rather awkward time series models in my experience. They also assume you have observed output. HMMs don't but you don't really have any control of what the hidden state actually is. Nevertheless they are proper time series models. | {
"source": [
"https://stats.stackexchange.com/questions/20429",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/4507/"
]
} |
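To make the generative view in the answer above concrete, here is a minimal base-R sketch (all parameter values are made up for illustration, not taken from the question): it simulates a two-state hidden Markov chain with Gaussian emissions and then runs the forward algorithm to obtain filtered probabilities of the hidden state, which is the kind of "hidden state prediction" an HMM provides.
set.seed(1)
A  <- matrix(c(0.9, 0.1,
               0.2, 0.8), 2, 2, byrow = TRUE)  # transition matrix (assumed known here)
mu <- c(0, 3); sigma <- c(1, 1)                # Gaussian emission parameters for the two states
n  <- 200
s  <- numeric(n); s[1] <- 1
for (t in 2:n) s[t] <- sample(1:2, 1, prob = A[s[t - 1], ])
y  <- rnorm(n, mean = mu[s], sd = sigma[s])    # observed sequence; s itself stays hidden
alpha <- matrix(0, n, 2)                       # filtered P(state = k | y_1, ..., y_t)
alpha[1, ] <- c(0.5, 0.5) * dnorm(y[1], mu, sigma)
alpha[1, ] <- alpha[1, ] / sum(alpha[1, ])
for (t in 2:n) {
  pred <- alpha[t - 1, ] %*% A                 # one-step-ahead state distribution
  alpha[t, ] <- pred * dnorm(y[t], mu, sigma)  # update with the new observation
  alpha[t, ] <- alpha[t, ] / sum(alpha[t, ])
}
mean((alpha[, 2] > 0.5) == (s == 2))           # how often the filtered mode matches the true hidden state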
20,441 | Both the likelihood ratio test and the AIC are tools for choosing between two models and both are based on the log-likelihood. But, why the likelihood ratio test can't be used to choose between two non-nested models while AIC can? | The LR (likelihood ratio) test actually is testing the hypothesis that a specified subset of the parameters equal some pre-specified values. In the case of model selection, generally (but not always) that means some of the parameters equal zero. If the models are nested, the parameters in the larger model that are not in the smaller model are the ones being tested, with values specified implicitly by their exclusion from the smaller model. If the models aren't nested, you aren't testing this any more, because BOTH models have parameters that aren't in the other model, so the LR test statistic doesn't have the asymptotic $\chi^2$ distribution that it (usually) does in the nested case. AIC, on the other hand, is not used for formal testing. It is used for informal comparisons of models with differing numbers of parameters. The penalty term in the expression for AIC is what allows this comparison. But no assumptions are made about the functional form of the asymptotic distribution of the differences between the AIC of two non-nested models when doing the model comparison, and the difference between two AICs is not treated as a test statistic. I'll add that there is some disagreement over the use of AIC with non-nested models, as the theory is worked out for nested models. Hence my emphasis on "not...formal" and "not...test statistic." I use it for non-nested models, but not in a hard-and-fast way, more as an important, but not the sole, input into the model building process. | {
"source": [
"https://stats.stackexchange.com/questions/20441",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/7064/"
]
} |
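A small simulated sketch of the distinction drawn above (hypothetical data, base R): for nested GLMs the statistic 2*(logLik(big) - logLik(small)) can be referred to a chi-squared distribution, while for non-nested models only an informal AIC comparison is available.
set.seed(42)
n  <- 200
x1 <- rnorm(n); x2 <- rnorm(n)
y  <- rbinom(n, 1, plogis(0.5 + 1 * x1))
m_small <- glm(y ~ x1,      family = binomial)
m_big   <- glm(y ~ x1 + x2, family = binomial)   # nested: m_small is m_big with the x2 coefficient set to 0
lr <- as.numeric(2 * (logLik(m_big) - logLik(m_small)))
pchisq(lr, df = 1, lower.tail = FALSE)           # formal LR test of that restriction
m_a <- glm(y ~ x1, family = binomial)            # non-nested pair: x1 vs x2 as the single predictor
m_b <- glm(y ~ x2, family = binomial)
AIC(m_a, m_b)                                    # no LR test applies, but AIC can be compared informally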
20,452 | My primary question is how to interpret the output (coefficients, F, P) when conducting a Type I (sequential) ANOVA? My specific research problem is a bit more complex, so I will break my example into parts. First, if I am interested in the effect of spider density (X1) on say plant growth (Y1) and I planted seedlings in enclosures and manipulated spider density, then I can analyze the data with a simple ANOVA or linear regression. Then it wouldn't matter if I used Type I, II, or III Sum of Squares (SS) for my ANOVA. In my case, I have 4 replicates of 5 density levels, so I can use density as a factor or as a continuous variable. In this case, I prefer to interpret it as a continuous independent (predictor) variable. In R I might run the following: lm1 <- lm(y1 ~ density, data = Ena)
summary(lm1)
anova(lm1) Running the anova function will make sense for comparison later hopefully, so please ignore the oddness of it here. The output is: Response: y1
Df Sum Sq Mean Sq F value Pr(>F)
density 1 0.48357 0.48357 3.4279 0.08058 .
Residuals 18 2.53920 0.14107 Now, let's say I suspect that the starting level of inorganic nitrogen in the soil, which I couldn't control, may have also significantly affected the plant growth. I'm not particularly interested in this effect but would like to potentially account for the variation it causes. Really, my primary interest is in the effects of spider density (hypothesis: increased spider density causes increased plant growth - presumably through reduction of herbivorous insects but I'm only testing the effect not the mechanism). I could add the effect of inorganic N to my analysis. For the sake of my question, let's pretend that I test the interaction density*inorganicN and it's non-significant so I remove it from the analysis and run the following main effects: > lm2 <- lm(y1 ~ density + inorganicN, data = Ena)
> anova(lm2)
Analysis of Variance Table
Response: y1
Df Sum Sq Mean Sq F value Pr(>F)
density 1 0.48357 0.48357 3.4113 0.08223 .
inorganicN 1 0.12936 0.12936 0.9126 0.35282
Residuals 17 2.40983 0.14175 Now, it makes a difference whether I use Type I or Type II SS (I know some people object to the terms Type I & II etc. but given the popularity of SAS it's easy short-hand). R anova{stats} uses Type I by default. I can calculate the type II SS, F, and P for density by reversing the order of my main effects or I can use Dr. John Fox's "car" package (companion to applied regression). I prefer the latter method since it is easier for more complex problems. library(car)
Anova(lm2)
Sum Sq Df F value Pr(>F)
density 0.58425 1 4.1216 0.05829 .
inorganicN 0.12936 1 0.9126 0.35282
Residuals 2.40983 17 My understanding is that type II hypotheses would be, "There is no linear effect of x1 on y1 given the effect of (holding constant?) x2" and the same for x2 given x1. I guess this is where I get confused. What is the hypothesis being tested by the ANOVA using the type I (sequential) method above compared to the hypothesis using the type II method? In reality, my data is a bit more complex because I measured numerous metrics of plant growth as well as nutrient dynamics and litter decomposition. My actual analysis is something like: Y <- cbind(y1 + y2 + y3 + y4 + y5)
# Type II
mlm1 <- lm(Y ~ density + nitrate + Npred, data = Ena)
Manova(mlm1)
Type II MANOVA Tests: Pillai test statistic
Df test stat approx F num Df den Df Pr(>F)
density 1 0.34397 1 5 12 0.34269
nitrate 1 0.99994 40337 5 12 < 2e-16 ***
Npred 1 0.65582 5 5 12 0.01445 *
# Type I
maov1 <- manova(Y ~ density + nitrate + Npred, data = Ena)
summary(maov1)
Df Pillai approx F num Df den Df Pr(>F)
density 1 0.99950 4762 5 12 < 2e-16 ***
nitrate 1 0.99995 46248 5 12 < 2e-16 ***
Npred 1 0.65582 5 5 12 0.01445 *
Residuals 16 | What you are calling type II SS, I would call type III SS. Lets imagine that there are just two factors A and B (and we'll throw in the A*B interaction later to distinguish type II SS). Further, lets imagine that there are different $n$s in the four cells (e.g., $n_{11}$=11, $n_{12}$=9, $n_{21}$=9, and $n_{22}$=11). Now your two factors are correlated with each other. (Try this yourself, make 2 columns of 1's and 0's and correlate them, $r=.1$; n.b. it doesn't matter if $r$ is 'significant', this is the whole population that you care about). The problem with your factors being correlated is that there are sums of squares that are associated with both A and B. When computing an ANOVA (or any other linear regression), we want to partition the sums of squares. A partition puts all sums of squares into one and only one of several subsets. (For example, we might want to divide the SS up into A, B and error.) However, since your factors (still only A and B here) are not orthogonal there is no unique partition of these SS. In fact, there can be very many partitions, and if you are willing to slice your SS up into fractions (e.g., "I'll put .5 into this bin and .5 into that one"), there are infinite partitions. A way to visualize this is to imagine the MasterCard symbol: The rectangle represents the total SS, and each of the circles represents the SS that are attributable to that factor, but notice the overlap between the circles in the center, those SS could be given to either circle. The question is: How are we to choose the 'right' partition out of all of these possibilities? Let's bring the interaction back in and discuss some possibilities: Type I SS: SS(A) SS(B|A) SS(A*B|A,B) Type II SS: SS(A|B) SS(B|A) SS(A*B|A,B) Type III SS: SS(A|B,A*B) SS(B|A,A*B) SS(A*B|A,B) Notice how these different possibilities work. Only type I SS actually uses those SS in the overlapping portion between the circles in the MasterCard symbol. That is, the SS that could be attributed to either A or B, are actually attributed to one of them when you use type I SS (specifically, the one you entered into the model first). In both of the other approaches, the overlapping SS are not used at all . Thus, type I SS gives to A all the SS attributable to A (including those that could also have been attributed elsewhere), then gives to B all of the remaining SS that are attributable to B, then gives to the A*B interaction all of the remaining SS that are attributable to A*B, and leaves the left-overs that couldn't be attributed to anything to the error term. Type III SS only gives A those SS that are uniquely attributable to A, likewise it only gives to B and the interaction those SS that are uniquely attributable to them. The error term only gets those SS that couldn't be attributed to any of the factors. Thus, those 'ambiguous' SS that could be attributed to 2 or more possibilities are not used. If you sum the type III SS in an ANOVA table, you will notice that they do not equal the total SS. In other words, this analysis must be wrong, but errs in a kind of epistemically conservative way. Many statisticians find this approach egregious, however government funding agencies (I believe the FDA) requires their use. The type II approach is intended to capture what might be worthwhile about the idea behind type III, but mitigate against its excesses. Specifically, it only adjusts the SS for A and B for each other, not the interaction. However, in practice type II SS is essentially never used. 
You would need to know about all of this and be savvy enough with your software to get these estimates, and the analysts who are that savvy typically think this is bunk. There are more types of SS (I believe IV and V). They were suggested in the late 60's to deal with certain situations, but it was later shown that they do not do what was thought. Thus, at this point they are just a historical footnote. As for what questions these are answering, you basically have that right already in your question: Estimates using type I SS tell you how much of the variability in Y can be explained by A, how much of the residual variability can be explained by B, how much of the remaining residual variability can be explained by the interaction, and so on, in order. Estimates based on type III SS tell you how much of the residual variability in Y can be accounted for by A after having accounted for everything else, and how much of the residual variability in Y can be accounted for by B after having accounted for everything else as well, and so on. (Note that both go both first and last simultaneously; if this makes sense to you, and accurately reflects your research question, then use type III SS.) | {
"source": [
"https://stats.stackexchange.com/questions/20452",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/8289/"
]
} |
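A quick illustrative sketch of the order dependence discussed above, using a made-up unbalanced two-factor design (cell sizes 11, 9, 9, 11 as in the answer); the car package is assumed to be installed for the type II table.
set.seed(1)
A <- factor(rep(c("a1", "a1", "a2", "a2"), times = c(11, 9, 9, 11)))
B <- factor(rep(c("b1", "b2", "b1", "b2"), times = c(11, 9, 9, 11)))
y <- 1 + 0.5 * (A == "a2") + 0.8 * (B == "b2") + rnorm(40)
anova(lm(y ~ A + B))            # type I: A unadjusted, B adjusted for A
anova(lm(y ~ B + A))            # type I with the order reversed; the SS for A and B change
library(car)
Anova(lm(y ~ A + B), type = 2)  # each factor adjusted for the other; order no longer matters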
20,514 | I started with Time Series Analysis by Hamilton, but I am hopelessly lost. This book is really too theoretical for me to learn by myself. Does anybody have a recommendation for a textbook on time series analysis that's suitable for self-study? | I would recommend the following books: Time Series Analysis and Its Applications: With R Examples, Third Edition, by Robert H. Shumway and David S. Stoffer, Springer Verlag. Time Series Analysis and Forecasting by Example, 1st Edition, by Søren Bisgaard and Murat Kulahci, John Wiley & Sons. I hope it helps you. Best of luck! | {
"source": [
"https://stats.stackexchange.com/questions/20514",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/8086/"
]
} |
20,520 | Inspired by a comment from this question : What do we consider "uninformative" in a prior - and what information is still contained in a supposedly uninformative prior? I generally see the prior in an analysis where it's either a frequentist-type analysis trying to borrow some nice parts from Bayesian analysis (be it some easier interpretation all the way to 'its the hot thing to do'), the specified prior is a uniform distribution across the bounds of the effect measure, centered on 0. But even that asserts a shape to the prior - it just happens to be flat. Is there a better uninformative prior to use? | [Warning: as a card-carrying member of the Objective Bayes Section of ISBA , my views are not exactly representative of all Bayesian statisticians!, quite the opposite...] In summary, there is no such thing as a prior with "truly no information". Indeed, the concept of "uninformative" prior is sadly a misnomer. Any prior distribution contains some specification that is akin to some amount of information. Even (or especially) the uniform prior. For one thing, the uniform prior is only flat for one given parameterisation of the problem. If one changes to another parameterisation (even a bounded one), the Jacobian change of variable comes into the picture and the density and therefore the prior is no longer flat. As pointed out by Elvis, maximum entropy is one approach advocated to select so-called "uninformative" priors. It however requires (a) some degree of information on some moments $h(\theta)$ of the prior distribution $\pi(\cdot)$ to specify the constraints $$\int_{\Theta} h(\theta)\,\text{d}\pi(\theta) = \mathfrak{h}_0$$ that lead to the MaxEnt prior $$\pi^*(\theta)\propto \exp\{ \lambda^\text{T}h(\theta) \}$$ and (b) the preliminary choice of a reference measure $\text{d}\mu(\theta)$ [in continuous settings], a choice that brings the debate back to its initial stage! (In addition, the parametrisation of the constraints (i.e., the choice of $h$ ) impacts the shape of the resulting MaxEnt prior.) José Bernardo has produced an original theory of reference priors where he chooses the prior in order to maximise the information brought by the data by maximising the Kullback distance between prior and posterior. In the simplest cases with no nuisance parameters, the solution is Jeffreys' prior. In more complex problems, (a) a choice of the parameters of interest (or even a ranking of their order of interest) must be made; (b) the computation of the prior is fairly involved and requires a sequence of embedded compact sets to avoid improperness issues. (See e.g. The Bayesian Choice for details.) In an interesting twist, some researchers outside the Bayesian perspective have been developing procedures called confidence distributions that are probability distributions on the parameter space, constructed by inversion from frequency-based procedures without an explicit prior structure or even a dominating measure on this parameter space. They argue that this absence of well-defined prior is a plus, although the result definitely depends on the choice of the initialising frequency-based procedure In short, there is no "best" (or even "better") choice for "the" "uninformative" prior. And I consider this is how things should be because the very nature of Bayesian analysis implies that the choice of the prior distribution matters. And that there is no comparison of priors: one cannot be "better" than another. 
(At least before observing the data: once it is observed, comparison of priors becomes model choice.) The conclusion of José Bernardo, Jim Berger, Dongchu Sun, and many other "objective" Bayesians is that there are roughly equivalent reference priors one can use when being unsure about one's prior information or seeking a benchmark Bayesian inference, some of those priors being partly supported by information theory arguments, others by non-Bayesian frequentist properties (like matching priors), and all resulting in rather similar inferences. | {
"source": [
"https://stats.stackexchange.com/questions/20520",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/5836/"
]
} |
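A tiny numerical illustration of the point above that a "flat" prior is only flat in one parameterisation (a sketch with an arbitrary choice of transformation): a uniform prior on a probability theta implies a decidedly non-flat prior on the log-odds scale.
set.seed(123)
theta <- runif(1e5)                   # "uninformative" uniform prior draws for theta in (0, 1)
phi   <- log(theta / (1 - theta))     # the same prior expressed on the log-odds scale
hist(phi, breaks = 100, freq = FALSE,
     main = "Implied prior on the log-odds", xlab = "phi")
curve(dlogis(x), add = TRUE, lwd = 2) # analytically, phi follows a standard logistic density, peaked at 0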
20,523 | What is the difference between Logit and Probit model ? I'm more interested here in knowing when to use logistic regression, and when to use Probit. If there is any literature which defines it using R , that would be helpful as well. | A standard linear model (e.g., a simple regression model) can be thought of as having two 'parts'. These are called the structural component and the random component . For example: $$
Y=\beta_0+\beta_1X+\varepsilon \\
\text{where } \varepsilon\sim\mathcal{N}(0,\sigma^2)
$$
The first two terms (that is, $\beta_0+\beta_1X$) constitute the structural component, and the $\varepsilon$ (which indicates a normally distributed error term) is the random component. When the response variable is not normally distributed (for example, if your response variable is binary) this approach may no longer be valid. The generalized linear model (GLiM) was developed to address such cases, and logit and probit models are special cases of GLiMs that are appropriate for binary variables (or multi-category response variables with some adaptations to the process). A GLiM has three parts, a structural component , a link function , and a response distribution . For example: $$
g(\mu)=\beta_0+\beta_1X
$$
Here $\beta_0+\beta_1X$ is again the structural component, $g()$ is the link function, and $\mu$ is a mean of a conditional response distribution at a given point in the covariate space. The way we think about the structural component here doesn't really differ from how we think about it with standard linear models; in fact, that's one of the great advantages of GLiMs. Because for many distributions the variance is a function of the mean, having fit a conditional mean (and given that you stipulated a response distribution), you have automatically accounted for the analog of the random component in a linear model (N.B.: this can be more complicated in practice). The link function is the key to GLiMs: since the distribution of the response variable is non-normal, it's what lets us connect the structural component to the response--it 'links' them (hence the name). It's also the key to your question, since the logit and probit are links (as @vinux explained), and understanding link functions will allow us to intelligently choose when to use which one. Although there can be many link functions that can be acceptable, often there is one that is special. Without wanting to get too far into the weeds (this can get very technical) the predicted mean, $\mu$, will not necessarily be mathematically the same as the response distribution's canonical location parameter ; the link function that does equate them is the canonical link function . The advantage of this "is that a minimal sufficient statistic for $\beta$ exists" ( German Rodriguez ). The canonical link for binary response data (more specifically, the binomial distribution) is the logit. However, there are lots of functions that can map the structural component onto the interval $(0,1)$, and thus be acceptable; the probit is also popular, but there are yet other options that are sometimes used (such as the complementary log log, $\ln(-\ln(1-\mu))$, often called 'cloglog'). Thus, there are lots of possible link functions and the choice of link function can be very important. The choice should be made based on some combination of: Knowledge of the response distribution, Theoretical considerations, and Empirical fit to the data. Having covered a little of conceptual background needed to understand these ideas more clearly (forgive me), I will explain how these considerations can be used to guide your choice of link. (Let me note that I think @David's comment accurately captures why different links are chosen in practice .) To start with, if your response variable is the outcome of a Bernoulli trial (that is, $0$ or $1$), your response distribution will be binomial, and what you are actually modeling is the probability of an observation being a $1$ (that is, $\pi(Y=1)$). As a result, any function that maps the real number line, $(-\infty,+\infty)$, to the interval $(0,1)$ will work. From the point of view of your substantive theory, if you are thinking of your covariates as directly connected to the probability of success, then you would typically choose logistic regression because it is the canonical link. However, consider the following example: You are asked to model high_Blood_Pressure as a function of some covariates. Blood pressure itself is normally distributed in the population (I don't actually know that, but it seems reasonable prima facie), nonetheless, clinicians dichotomized it during the study (that is, they only recorded 'high-BP' or 'normal'). In this case, probit would be preferable a-priori for theoretical reasons. 
This is what @Elvis meant by "your binary outcome depends on a hidden Gaussian variable". Another consideration is that both logit and probit are symmetrical , if you believe that the probability of success rises slowly from zero, but then tapers off more quickly as it approaches one, the cloglog is called for, etc. Lastly, note that the empirical fit of the model to the data is unlikely to be of assistance in selecting a link, unless the shapes of the link functions in question differ substantially (of which, the logit and probit do not). For instance, consider the following simulation: set.seed(1)
probLower = vector(length=1000)
for(i in 1:1000){
x = rnorm(1000)
y = rbinom(n=1000, size=1, prob=pnorm(x))
logitModel = glm(y~x, family=binomial(link="logit"))
probitModel = glm(y~x, family=binomial(link="probit"))
probLower[i] = deviance(probitModel)<deviance(logitModel)
}
sum(probLower)/1000
[1] 0.695 Even when we know the data were generated by a probit model, and we have 1000 data points, the probit model only yields a better fit 70% of the time, and even then, often by only a trivial amount. Consider the last iteration: deviance(probitModel)
[1] 1025.759
deviance(logitModel)
[1] 1026.366
deviance(logitModel)-deviance(probitModel)
[1] 0.6076806 The reason for this is simply that the logit and probit link functions yield very similar outputs when given the same inputs. The logit and probit functions are practically identical, except that the logit is slightly further from the bounds when they 'turn the corner', as @vinux stated. (Note that to get the logit and the probit to align optimally, the logit's $\beta_1$ must be $\approx 1.7$ times the corresponding slope value for the probit. In addition, I could have shifted the cloglog over slightly so that they would lay on top of each other more, but I left it to the side to keep the figure more readable.) Notice that the cloglog is asymmetrical whereas the others are not; it starts pulling away from 0 earlier, but more slowly, and approaches close to 1 and then turns sharply. A couple more things can be said about link functions. First, considering the identity function ($g(\eta)=\eta$) as a link function allows us to understand the standard linear model as a special case of the generalized linear model (that is, the response distribution is normal, and the link is the identity function). It's also important to recognize that whatever transformation the link instantiates is properly applied to the parameter governing the response distribution (that is, $\mu$), not the actual response data . Finally, because in practice we never have the underlying parameter to transform, in discussions of these models, often what is considered to be the actual link is left implicit and the model is represented by the inverse of the link function applied to the structural component instead. That is: $$
\mu=g^{-1}(\beta_0+\beta_1X)
$$
For instance, logistic regression is usually represented:
$$
\pi(Y)=\frac{\exp(\beta_0+\beta_1X)}{1+\exp(\beta_0+\beta_1X)}
$$
instead of:
$$
\ln\left(\frac{\pi(Y)}{1-\pi(Y)}\right)=\beta_0+\beta_1X
$$ For a quick and clear, but solid, overview of the generalized linear model, see chapter 10 of Fitzmaurice, Laird, & Ware (2004) , (on which I leaned for parts of this answer, although since this is my own adaptation of that--and other--material, any mistakes would be my own). For how to fit these models in R, check out the documentation for the function ?glm in the base package. (One final note added later:) I occasionally hear people say that you shouldn't use the probit, because it can't be interpreted. This is not true, although the interpretation of the betas is less intuitive. With logistic regression, a one unit change in $X_1$ is associated with a $\beta_1$ change in the log odds of 'success' (alternatively, an $\exp(\beta_1)$-fold change in the odds), all else being equal. With a probit, this would be a change of $\beta_1\text{ }z$'s. (Think of two observations in a dataset with $z$-scores of 1 and 2, for example.) To convert these into predicted probabilities , you can pass them through the normal CDF , or look them up on a $z$-table. (+1 to both @vinux and @Elvis. Here I have tried to provide a broader framework within which to think about these things and then using that to address the choice between logit and probit.) | {
"source": [
"https://stats.stackexchange.com/questions/20523",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/4278/"
]
} |
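A short check of the claim in the answer above that a logit with its slope scaled by roughly 1.7 is practically indistinguishable from the probit (the 1.7 factor comes from the answer; the plotting range is arbitrary):
x <- seq(-4, 4, by = 0.01)
max(abs(plogis(1.7 * x) - pnorm(x)))             # maximum discrepancy is on the order of 0.01
plot(x, pnorm(x), type = "l", ylab = "P(Y = 1)") # probit curve
lines(x, plogis(1.7 * x), lty = 2)               # rescaled logit: visually on top of the probit
lines(x, plogis(x), lty = 3)                     # unscaled logit: same shape, different spread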
20,553 | Let's say that there exists some "true" relationship between $y$ and $x$ such that $y = ax + b + \epsilon$, where $a$ and $b$ are constants and $\epsilon$ is i.i.d. normal noise. When I randomly generate data with R code like x <- 1:100; y <- a*x + b + rnorm(length(x)) and then fit a model like y ~ x , I obviously get reasonably good estimates for $a$ and $b$. If I switch the role of the variables as in (x ~ y) , however, and then rewrite the result for $y$ to be a function of $x$, the resulting slope is always steeper (either more negative or more positive) than that estimated by the y ~ x regression. I'm trying to understand exactly why that is and would appreciate it if anyone could give me an intuition as to what's going on there. | Given $n$ data points $(x_i,y_i), i = 1,2,\ldots, n$, in the plane,
let us draw a straight line
$y = ax+b$. If we predict $ax_i+b$ as the value $\hat{y}_i$ of $y_i$, then
the error is $(y_i-\hat{y}_i) = (y_i-ax_i-b)$, the squared error is
$(y_i-ax_i-b)^2$, and the total squared error $\sum_{i=1}^n (y_i-ax_i-b)^2$.
We ask What choice of $a$ and $b$ minimizes
$S =\displaystyle\sum_{i=1}^n (y_i-ax_i-b)^2$? Since $(y_i-ax_i-b)$ is the vertical distance of $(x_i,y_i)$ from
the straight line, we are asking for the line such that the
sum of the squares of the vertical distances of the points from
the line is as small as possible. Now $S$ is a
quadratic function of both $a$ and $b$ and attains its minimum
value when $a$ and $b$ are such that
$$\begin{align*}
\frac{\partial S}{\partial a} &= 2\sum_{i=1}^n (y_i-ax_i-b)(-x_i) &= 0\\
\frac{\partial S}{\partial b} &= 2\sum_{i=1}^n (y_i-ax_i-b)(-1) &= 0
\end{align*}$$
From the second equation, we get
$$b = \frac{1}{n}\sum_{i=1}^n (y_i - ax_i) = \mu_y - a\mu_x$$
where
$\displaystyle \mu_y = \frac{1}{n}\sum_{i=1}^n y_i, ~
\mu_x = \frac{1}{n}\sum_{i=1}^n x_i$ are the arithmetic average values
of the $y_i$'s and the $x_i$'s respectively. Substituting into the
first equation, we get
$$
a = \frac{\left(\frac{1}{n}\sum_{i=1}^n x_iy_i\right) -\mu_x\mu_y}{
\left( \frac{1}{n}\sum_{i=1}^n x_i^2\right) -\mu_x^2}.
$$
Thus, the line that minimizes $S$ can be expressed as
$$y = ax+b = \mu_y +
\left(\frac{\left(\frac{1}{n}\sum_{i=1}^n x_iy_i\right) -\mu_x\mu_y}{
\left( \frac{1}{n}\sum_{i=1}^n x_i^2\right) -\mu_x^2}\right)
(x - \mu_x),
$$
and the minimum value of $S$ is
$$S_{\min} =
\frac{\left[\left(\frac{1}{n}\sum_{i=1}^n y_i^2\right) -\mu_y^2\right]
\left[\left(\frac{1}{n}\sum_{i=1}^n x_i^2\right) -\mu_x^2\right]
-
\left[\left(\frac{1}{n}\sum_{i=1}^n x_iy_i\right)
-\mu_x\mu_y\right]^2}{\left(\frac{1}{n}\sum_{i=1}^n x_i^2\right) -\mu_x^2}.$$ If we interchange the roles of $x$ and $y$, draw a line
$x = \hat{a}y + \hat{b}$, and ask for the values of
$\hat{a}$ and $\hat{b}$ that minimize
$$T = \sum_{i=1}^n (x_i - \hat{a}y_i - \hat{b})^2,$$
that is, we want the line such that the
sum of the squares of the horizontal distances of the points from
the line is as small as possible, then we get $$x = \hat{a}y+\hat{b} = \mu_x +
\left(\frac{\left(\frac{1}{n}\sum_{i=1}^n x_iy_i\right) -\mu_x\mu_y}{
\left( \frac{1}{n}\sum_{i=1}^n y_i^2\right) -\mu_y^2}\right)
(y - \mu_y)
$$
and the minimum value of $T$ is
$$T_{\min} =
\frac{\left[\left(\frac{1}{n}\sum_{i=1}^n y_i^2\right) -\mu_y^2\right]
\left[\left(\frac{1}{n}\sum_{i=1}^n x_i^2\right) -\mu_x^2\right]
-
\left[\left(\frac{1}{n}\sum_{i=1}^n x_iy_i\right)
-\mu_x\mu_y\right]^2}{\left(\frac{1}{n}\sum_{i=1}^n y_i^2\right) -\mu_y^2}.$$ Note that both lines pass through the point $(\mu_x,\mu_y)$
but the slopes are
$$a =
\frac{\left(\frac{1}{n}\sum_{i=1}^n x_iy_i\right) -\mu_x\mu_y}{
\left( \frac{1}{n}\sum_{i=1}^n x_i^2\right) -\mu_x^2},~~
\hat{a}^{-1} = \frac{
\left( \frac{1}{n}\sum_{i=1}^n y_i^2\right) -\mu_y^2}{\left(\frac{1}{n}\sum_{i=1}^n x_iy_i\right) -\mu_x\mu_y}$$
are different in general. Indeed, as @whuber points out in a comment, the
slopes are the same when all the points $(x_i,y_i)$ lie on the same
straight line. To see this, note that
$$\hat{a}^{-1} - a = \frac{S_{\min}}{\left(\frac{1}{n}\sum_{i=1}^n x_iy_i\right) -\mu_x\mu_y} = 0 \Rightarrow S_{\min} = 0 \Rightarrow y_i=ax_i+b, i=1,2,\ldots, n.
$$ | {
"source": [
"https://stats.stackexchange.com/questions/20553",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/8317/"
]
} |
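A numerical check of the algebra above (simulated data with arbitrary constants): the slope of y on x and the inverted slope from x on y differ whenever the points do not lie exactly on a line, and the product of the two fitted slopes equals the squared correlation.
set.seed(7)
x <- 1:100
y <- 2 * x + 3 + rnorm(length(x), sd = 20)
b_yx <- coef(lm(y ~ x))["x"]   # slope a from regressing y on x
b_xy <- coef(lm(x ~ y))["y"]   # slope a-hat from regressing x on y
b_yx                           # close to the true value 2
1 / b_xy                       # steeper than b_yx, as described in the question
b_yx * b_xy                    # equals cor(x, y)^2
cor(x, y)^2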
20,558 | The world of statistics was divided between frequentists and Bayesians. These days it seems everyone does a bit of both. How can this be? If the different approaches are suitable for different problems, why did the founding fathers of statistics did not see this? Alternatively, has the debate been won by Frequentists and the true subjective Bayesians moved to decision theory? | I actually mildly disagree with the premise. Everyone is a Bayesian, if they really do have a probability distribution handed to them as a prior. The trouble comes about when they don't, and I think there's still a pretty good-sized divide on that topic. Having said that, though, I do agree that more and more people are less inclined to fight holy wars and just get on with doing what seems appropriate in any given situation. I would say that, as the profession advanced, both sides realized there were merits in the other side's approaches. Bayesians realized that evaluating how well Bayesian procedures would do if used over and over again (e.g., does this 95% credible interval (CI) actually contain the true parameter about 95% of the time?) required a frequentist outlook. Without this, there's no calibration of that "95%" to any real-world number. Robustness? Model building through iterative fitting etc.? Ideas that came up in the frequentist world, and were adapted by Bayesians starting in the late 1980s or so. Frequentists realized that regularization was good, and use it quite commonly these days - and Bayesian priors can be easily interpreted as regularization. Nonparametric modeling via cubic splines with a penalty function? Your penalty is my prior! Now we can all get along. The other major influence, I believe, is the staggering improvement in availability of high-quality software that will let you do analysis quickly. This comes in two parts - algorithms, e.g., Gibbs sampling and Metropolis-Hastings, and the software itself, R, SAS, ... I might be more of a pure Bayesian if I had to write all my code in C (I simply wouldn't have the time to try anything else), but as it is, I'll use gam in the mgcv package in R any time my model looks like I can fit it into that framework without too much squeezing, and I'm a better statistician for it. Familiarity with your opponent's methods, and realizing how much effort it can save / better quality it can provide to use them in some situations, even though they may not fit 100% into your default framework for thinking about an issue, is a big antidote to dogmatism. | {
"source": [
"https://stats.stackexchange.com/questions/20558",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/6961/"
]
} |
20,563 | I have a process with binary output. Is there a standard way to test if it is a Bernoulli process? The problem translates to checking if every trial is independent of the previous trials. I have observed some processes where a result "sticks" for a number of trials. | I actually mildly disagree with the premise. Everyone is a Bayesian, if they really do have a probability distribution handed to them as a prior. The trouble comes about when they don't, and I think there's still a pretty good-sized divide on that topic. Having said that, though, I do agree that more and more people are less inclined to fight holy wars and just get on with doing what seems appropriate in any given situation. I would say that, as the profession advanced, both sides realized there were merits in the other side's approaches. Bayesians realized that evaluating how well Bayesian procedures would do if used over and over again (e.g., does this 95% credible interval (CI) actually contain the true parameter about 95% of the time?) required a frequentist outlook. Without this, there's no calibration of that "95%" to any real-world number. Robustness? Model building through iterative fitting etc.? Ideas that came up in the frequentist world, and were adapted by Bayesians starting in the late 1980s or so. Frequentists realized that regularization was good, and use it quite commonly these days - and Bayesian priors can be easily interpreted as regularization. Nonparametric modeling via cubic splines with a penalty function? Your penalty is my prior! Now we can all get along. The other major influence, I believe, is the staggering improvement in availability of high-quality software that will let you do analysis quickly. This comes in two parts - algorithms, e.g., Gibbs sampling and Metropolis-Hastings, and the software itself, R, SAS, ... I might be more of a pure Bayesian if I had to write all my code in C (I simply wouldn't have the time to try anything else), but as it is, I'll use gam in the mgcv package in R any time my model looks like I can fit it into that framework without too much squeezing, and I'm a better statistician for it. Familiarity with your opponent's methods, and realizing how much effort it can save / better quality it can provide to use them in some situations, even though they may not fit 100% into your default framework for thinking about an issue, is a big antidote to dogmatism. | {
"source": [
"https://stats.stackexchange.com/questions/20563",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/3678/"
]
} |
20,608 | I know that with a model that is not identifiable the data can be said to be generated by multiple different assignments to the model parameters. I know that sometimes it's possible to constrain parameters so that all are identifiable, as in the example in Casella & Berger 2nd ed, section 11.2. Given a particular model, how can I evaluate whether or not it's identifiable? | For identifiability we are talking about a parameter $\theta$ (which could be a vector), which ranges over a parameter space $\Theta$, and a family of distributions (for simplicity, think PDFs) indexed by $\theta$ which we typically write something like $\{ f_{\theta}|\, \theta \in \Theta\}$. For instance, $\theta$ could be $\theta = \beta$ and $f$ could be $$
f_{\theta}(x) = \frac{1}{\beta}\mathrm{e}^{-x/\beta}, \ x>0,\ \beta >0,
$$
which would mean that $\Theta = (0,\infty)$. In order for the model to be identifiable, the transformation which maps $\theta$ to $f_{\theta}$ should be one-to-one . Given a model in your lap, the most straightforward way to check this is to start with the equation $f_{\theta_{1}} = f_{\theta_{2}}$, (this equality should hold for (almost) all $x$ in the support ) and to try to use algebra (or some other argument) to show that just such an equation implies that, in fact, $\theta_{1} = \theta_{2}$. If you succeed with this plan, then your model is identifiable; go on with your business. If you don't, then either your model isn't identifiable, or you need to find another argument. The intuition is the same, regardless: in an identifiable model it is impossible for two distinct parameters (which could be vectors) to give rise to the same likelihood function. This makes sense, because if, for fixed data, two unique parameters gave rise to the same likelihood, then it would be impossible to distinguish between the two candidate parameters based on the data alone. It would be impossible to identify the true parameter, in that case. For the example above, the equation $f_{\theta_{1}} = f_{\theta_{2}}$ is
$$
\frac{1}{\beta_{1}}\mathrm{e}^{-x/\beta_{1}} = \frac{1}{\beta_{2}}\mathrm{e}^{-x/\beta_{2}},
$$
for (almost) all $x > 0$. If we take logs of both sides we get
$$
-\ln\,\beta_{1} - \frac{x}{\beta_{1}} = -\ln\,\beta_{2} - \frac{x}{\beta_{2}}
$$
for $x > 0$, which implies the linear function
$$
-\left(\frac{1}{\beta_{1}} - \frac{1}{\beta_{2}}\right)x - (\ln\,\beta_{1} - \ln\,\beta_{2})
$$
is (almost) identically zero. The only line which does such a thing is the one which has slope 0 and y-intercept zero. Hopefully you can see the rest. By the way, if you can tell by looking at your model that it isn't identifiable (sometimes you can), then it is common to introduce additional constraints on it to make it identifiable (as you mentioned). This is akin to recognizing that the function $f(y) = y^{2}$ isn't one-to-one for $y$ in $[-1,1]$, but it is one-to-one if we restrict $y$ to lie inside $[0,1]$. In more complicated models the equations are tougher but the idea is the same. | {
"source": [
"https://stats.stackexchange.com/questions/20608",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/8207/"
]
} |
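A small sketch of non-identifiability in a regression setting (an artificial example, not from the answer): when two parameters enter the model only through their sum, distinct pairs give the same likelihood, and lm() signals this by aliasing one coefficient; imposing a constraint restores identifiability.
set.seed(3)
x  <- rnorm(50)
x2 <- x                       # a duplicated predictor: only the sum of the two slopes is identified
y  <- 1 + 2 * x + rnorm(50)
coef(lm(y ~ x + x2))          # one coefficient is returned as NA (aliased)
coef(lm(y ~ x))               # constraining the second slope to 0 makes the model identifiable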
20,622 | I have read various (seemingly) contradicting statements whether or not AdaBoost (or other boosting techniques) are less or more prone to overfitting compared to other learning methods. Are there good reasons to believe one or the other? If it depends, what does it depend on? What are the reasons that AdaBoost is less/more prone to overfitting? | As you say a lot has been discussed about this matter, and there's some quite heavy theory that has gone along with it that I have to admit I never fully understood. In my practical experience AdaBoost is quite robust to overfitting, and LPBoost (Linear Programming Boosting) even more so (because the objective function requires a sparse combination of weak learners, which is a form of capacity control). The main factors that influence it are: The "strength" of the "weak" learners: If you use very simple weak learners, such as decision stumps (1-level decision trees), then the algorithms are much less prone to overfitting. Whenever I've tried using more complicated weak learners (such as decision trees or even hyperplanes) I've found that overfitting occurs much more rapidly The noise level in the data: AdaBoost is particularly prone to overfitting on noisy datasets. In this setting the regularised forms (RegBoost, AdaBoostReg, LPBoost, QPBoost) are preferable The dimensionality of the data: We know that in general, we experience overfitting more in high dimensional spaces ("the curse of dimensionality"), and AdaBoost can also suffer in that respect, as it is simply a linear combination of classifiers which themselves suffer from the problem. Whether it is as prone as other classifiers is hard to determine. Of course you can use heuristic methods such as validation sets or $k$-fold cross-validation to set the stopping parameter (or other parameters in the different variants) as you would for any other classifier. | {
"source": [
"https://stats.stackexchange.com/questions/20622",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/4916/"
]
} |
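To make the "weak learner" point above concrete, here is a bare-bones AdaBoost sketch written from scratch in base R (made-up simulated data, labels coded -1/+1, decision stumps as the weak learners), kept deliberately simple rather than being a faithful reimplementation of any particular boosting package:
set.seed(10)
n <- 300
x <- matrix(rnorm(n * 2), n, 2)
y <- ifelse(x[, 1] + x[, 2] + rnorm(n, sd = 0.5) > 0, 1, -1)
fit_stump <- function(x, y, w) {              # weighted 1-level decision tree
  best <- list(err = Inf)
  for (j in 1:ncol(x)) for (thr in unique(x[, j])) for (dir in c(1, -1)) {
    pred <- ifelse(x[, j] > thr, dir, -dir)
    err  <- sum(w * (pred != y))
    if (err < best$err) best <- list(j = j, thr = thr, dir = dir, err = err)
  }
  best
}
predict_stump <- function(st, x) ifelse(x[, st$j] > st$thr, st$dir, -st$dir)
adaboost <- function(x, y, M = 50) {
  w <- rep(1 / length(y), length(y))
  stumps <- vector("list", M); alpha <- numeric(M)
  for (m in 1:M) {
    st   <- fit_stump(x, y, w)
    pred <- predict_stump(st, x)
    err  <- max(st$err, 1e-10)
    alpha[m] <- 0.5 * log((1 - err) / err)    # weight of this weak learner
    w <- w * exp(-alpha[m] * y * pred)        # upweight the points it got wrong
    w <- w / sum(w)
    stumps[[m]] <- st
  }
  list(stumps = stumps, alpha = alpha)
}
fit   <- adaboost(x, y, M = 50)
F_hat <- Reduce(`+`, Map(function(a, st) a * predict_stump(st, x), fit$alpha, fit$stumps))
mean(sign(F_hat) == y)                        # training accuracy of the boosted stump ensemble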
20,701 | I use the "boot" package to compute an approximate 2-sided bootstrapped p-value, but the result is very far from the p-value obtained with t.test. I can't figure out what I did wrong in my R code. Can someone please give me a hint? time = c(14,18,11,13,18,17,21,9,16,17,14,15,
12,12,14,13,6,18,14,16,10,7,15,10)
group=c(rep(1:2, each=12))
sleep = data.frame(time, group)
require(boot)
diff = function(d1,i){
d = d1[i,]
Mean= tapply(X=d$time, INDEX=d$group, mean)
Diff = Mean[1]-Mean[2]
Diff
}
set.seed(1234)
b3 = boot(data = sleep, statistic = diff, R = 5000, strata=sleep$group)
pvalue = mean(abs(b3$t) > abs(b3$t0))
pvalue The 2-sided bootstrapped p-value (pvalue) = 0.4804 but the 2-sided p-value of t.test is 0.04342. Both p-values are around 11 times difference. How can this happen? | You are using bootstrap to generate data under the empirical distribution of the observed data. This can be useful to give a confidence interval on the difference between the two means: > quantile(b3$t,c(0.025,0.975))
2.5% 97.5%
0.4166667 5.5833333 To get a $p$-value, you need to generate permutations under the null hypothesis. This can be done, e.g., like this: diff2 = function(d1,i){
d = d1;
d$group <- d$group[i]; # randomly re-assign groups
Mean= tapply(X=d$time, INDEX=d$group, mean)
Diff = Mean[1]-Mean[2]
Diff
}
> set.seed(1234)
> b4 = boot(data = sleep, statistic = diff2, R = 5000)
> mean(abs(b4$t) > abs(b4$t0))
[1] 0.046 In this solution, the size of the groups is not fixed: you randomly reassign a group to each individual by bootstrapping from the initial group set. It seems legitimate to me; however, a more classical solution is to fix the number of individuals in each group, so you just permute the groups instead of bootstrapping (this is usually motivated by the design of the experiment, where the group sizes are fixed beforehand): > R <- 10000; d <- sleep
> b5 <- numeric(R); for(i in 1:R) {
+ d$group <- sample(d$group, length(d$group));
+ b5[i] <- mean(d$time[d$group==1])-mean(d$time[d$group==2]);
+ }
> mean(abs(b5) > 3)
[1] 0.0372 | {
"source": [
"https://stats.stackexchange.com/questions/20701",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/4559/"
]
} |
20,741 | What is the semantic difference between Mean Squared Error (MSE) and Mean Squared Prediction Error (MSPE)? | The difference is not the mathematical expression, but rather what you are measuring. Mean squared error measures the expected squared distance between an estimator and the true underlying parameter: $$\text{MSE}(\hat{\theta}) = E\left[(\hat{\theta} - \theta)^2\right].$$ It is thus a measurement of the quality of an estimator. The mean squared prediction error measures the expected squared distance between what your predictor predicts for a specific value and what the true value is: $$\text{MSPE}(L) = E\left[\sum_{i=1}^n\left(g(x_i) - \widehat{g}(x_i)\right)^2\right].$$ It is thus a measurement of the quality of a predictor. The most important thing to understand is the difference between a predictor and an estimator. An example of an estimator would be taking the average height a sample of people to estimate the average height of a population. An example of a predictor is to average the height of an individual's two parents to guess his specific height. They are thus solving two very different problems. | {
"source": [
"https://stats.stackexchange.com/questions/20741",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/8401/"
]
} |
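A brief simulated sketch of the distinction (the numbers are arbitrary): the MSE of the sample mean as an estimator of the population mean, versus the MSPE when that same sample mean is used to predict a new individual observation.
set.seed(11)
mu <- 170; sigma <- 10; n <- 25
reps <- 10000
est_err  <- numeric(reps)               # estimation error: xbar - mu
pred_err <- numeric(reps)               # prediction error: y_new - xbar
for (r in 1:reps) {
  x     <- rnorm(n, mu, sigma)
  y_new <- rnorm(1, mu, sigma)
  est_err[r]  <- mean(x) - mu
  pred_err[r] <- y_new - mean(x)
}
mean(est_err^2)    # roughly sigma^2 / n = 4: the quality of the estimator
mean(pred_err^2)   # roughly sigma^2 * (1 + 1/n) = 104: the quality of the predictor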
20,825 | I am using a generalized linear model in SPSS to look at the differences in average number of caterpillars (non-normal, using Tweedie distribution) on 16 different species of plants. I want to run multiple comparisons but I'm not sure if I should use a Sidak or Bonferroni correction test. What is the difference between the two tests? Is one better than the other? | If you run $k$ independent statistical tests using $\alpha$ as your significance level, and the null obtains in every case, whether or not you will find 'significance' is simply a draw from a random variable. Specifically, it is taken from a binomial distribution with $p=\alpha$ and $n=k$. For example, if you plan to run 3 tests using $\alpha=.05$, and (unbeknownst to you) there is actually no difference in each case, then there is a 5% chance of finding a significant result in each test. In this way, the type I error rate is held to $\alpha$ for the tests individually, but across the set of 3 tests the long-run type I error rate will be higher. If you believe that it is meaningful to group / think of these 3 tests together, then you may want to hold the type I error rate at $\alpha$ for the set as a whole , rather than just individually. How should you go about this? There are two approaches that center on shifting from the original $\alpha$ (i.e., $\alpha_o$) to a new value (i.e., $\alpha_{\rm new}$): Bonferroni: adjust the $\alpha$ used to assess 'significance' such that $$\alpha_{\rm new}=\frac{\alpha_{o}}{k}\qquad\qquad\quad$$ Dunn-Sidak: adjust $\alpha$ using $$\alpha_{\rm new}=1-(1-\alpha_{o})^{1/k}$$ (Note that the Dunn-Sidak assumes all the tests within the set are independent of each other and could yield familywise type I error inflation if that assumption does not hold.) It is important to note that when conducting tests, there are two kinds of errors that you want to avoid, type I (i.e., saying there is a difference when there isn't one) and type II (i.e., saying there isn't a difference when there actually is). Typically, when people discuss this topic, they only discuss—and seem to only be aware of / concerned with—type I errors. In addition, people often neglect to mention that the calculated error rate will only hold if all nulls are true. It is trivially obvious that you cannot make a type I error if the null hypothesis is false, but it is important to hold that fact explicitly in mind when discussing this issue. I bring this up because there are implications of these facts that appear to often go unconsidered. First, if $k>1$, the Dunn-Sidak approach will offer higher power (although the difference can be quite tiny with small $k$) and so should always be preferred (when applicable). Second, a 'step-down' approach should be used. That is, test the biggest effect first; if you are convinced that the null does not obtain in that case, then the maximum possible number of type I errors is $k-1$, so the next test should be adjusted accordingly, and so on. (This often makes people uncomfortable and looks like fishing, but it is not fishing, as the tests are independent, and you intended to conduct them before you ever saw the data. This is just a way of adjusting $\alpha$ optimally.) The above holds no matter how you you value type I relative to type II errors. However, a-priori there is no reason to believe that type I errors are worse than type II (despite the fact that everyone seems to assume so). Instead, this is a decision that must be made by the researcher, and must be specific to that situation. 
Personally, if I am running theoretically-suggested, a-priori , orthogonal contrasts, I don't usually adjust $\alpha$. (And to state this again, because it's important, all of the above assumes that the tests are independent. If the contrasts are not independent, such as when several treatments are each being compared to the same control, a different approach than $\alpha$ adjustment, such as Dunnett's test, should be used.) | {
"source": [
"https://stats.stackexchange.com/questions/20825",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/8428/"
]
} |
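The two adjustments are easy to compute directly; a short sketch using the k = 3 example from the answer and, for comparison, the choose(16, 2) = 120 pairwise comparisons that 16 plant species would generate (an assumption about the comparison set, not something stated in the question):
alpha <- 0.05
k <- 3
c(bonferroni = alpha / k,
  sidak      = 1 - (1 - alpha)^(1 / k))  # Sidak is always the (slightly) less conservative one
k <- choose(16, 2)                        # all pairwise comparisons among 16 species
c(bonferroni = alpha / k,
  sidak      = 1 - (1 - alpha)^(1 / k))
# equivalently, adjust the p-values rather than alpha, e.g. p.adjust(p, method = "bonferroni")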
20,826 | I have count data (a demand/offer analysis counting the number of customers, depending on possibly many factors). I tried a linear regression with normal errors, but my QQ-plot is not really good. I tried a log transformation of the response: once again, a bad QQ-plot. So now I'm trying a regression with Poisson errors. With a model with all significant variables, I get: Null deviance: 12593.2 on 53 degrees of freedom
Residual deviance: 1161.3 on 37 degrees of freedom
AIC: 1573.7
Number of Fisher Scoring iterations: 5 Residual deviance is larger than residual degrees of freedom: I have overdispersion. How can I know if I need to use quasipoisson? What's the goal of quasipoisson in this case? I read this advise in "The R Book" by Crawley, but I don't see the point nor a large improvement in my case. | When trying to determine what sort of glm equation you want to estimate, you should think about plausible relationships between the expected value of your target variable given the right hand side (rhs) variables and the variance of the target variable given the rhs variables. Plots of the residuals vs. the fitted values from from your Normal model can help with this. With Poisson regression, the assumed relationship is that the variance equals the expected value; rather restrictive, I think you'll agree. With a "standard" linear regression, the assumption is that the variance is constant regardless of the expected value. For a quasi-poisson regression, the variance is assumed to be a linear function of the mean; for negative binomial regression, a quadratic function. However, you aren't restricted to these relationships. The specification of a "family" (other than "quasi") determines the mean-variance relationship. I don't have The R Book, but I imagine it has a table that shows the family functions and corresponding mean-variance relationships. For the "quasi" family you can specify any of several mean-variance relationships, and you can even write your own; see the R documentation . It may be that you can find a much better fit by specifying a non-default value for the mean-variance function in a "quasi" model. You also should pay attention to the range of the target variable; in your case it's nonnegative count data. If you have a substantial fraction of low values - 0, 1, 2 - the continuous distributions probably won't fit well, but if you don't, there's not much value in using a discrete distribution. It's rare that you'd consider Poisson and Normal distributions as competitors. | {
"source": [
"https://stats.stackexchange.com/questions/20826",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/8431/"
]
} |
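A minimal sketch of the usual check and fix (simulated overdispersed counts, nothing taken from the actual data): estimate the dispersion from the Pearson residuals, then refit with family = quasipoisson, which leaves the coefficients unchanged but inflates their standard errors by roughly sqrt(dispersion).
set.seed(5)
n <- 54
x <- rnorm(n)
y <- rnbinom(n, size = 1, mu = exp(1 + 0.5 * x))    # negative binomial counts: variance > mean
m_pois <- glm(y ~ x, family = poisson)
disp <- sum(residuals(m_pois, type = "pearson")^2) / df.residual(m_pois)
disp                                                # well above 1 indicates overdispersion
m_quasi <- glm(y ~ x, family = quasipoisson)
summary(m_pois)$coefficients[, "Std. Error"]
summary(m_quasi)$coefficients[, "Std. Error"]       # larger, roughly by a factor of sqrt(disp)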
20,834 | I have a data set with about 15,000 labeled observations of a single continuous value.
What is the best way to plot this type of data? I'm playing around with various histograms and density plots, but I can't seem to figure out the best way to plot this data set. Any suggestions? My data looks like this: label value
------- -------
foo 1.2
bar 6.2
baz 0.2
qux 4.7
... ... This data set contains 15,000 values, each with a unique label. I am looking as to how best to create a visualization of the distribution of the data and see outliers. Here are two candidate plots I've generated. Both simplified the data more than I would like. Are there any additional ways I could plot the data and somehow integrate the labels into this plot? | When trying to determine what sort of glm equation you want to estimate, you should think about plausible relationships between the expected value of your target variable given the right hand side (rhs) variables and the variance of the target variable given the rhs variables. Plots of the residuals vs. the fitted values from from your Normal model can help with this. With Poisson regression, the assumed relationship is that the variance equals the expected value; rather restrictive, I think you'll agree. With a "standard" linear regression, the assumption is that the variance is constant regardless of the expected value. For a quasi-poisson regression, the variance is assumed to be a linear function of the mean; for negative binomial regression, a quadratic function. However, you aren't restricted to these relationships. The specification of a "family" (other than "quasi") determines the mean-variance relationship. I don't have The R Book, but I imagine it has a table that shows the family functions and corresponding mean-variance relationships. For the "quasi" family you can specify any of several mean-variance relationships, and you can even write your own; see the R documentation . It may be that you can find a much better fit by specifying a non-default value for the mean-variance function in a "quasi" model. You also should pay attention to the range of the target variable; in your case it's nonnegative count data. If you have a substantial fraction of low values - 0, 1, 2 - the continuous distributions probably won't fit well, but if you don't, there's not much value in using a discrete distribution. It's rare that you'd consider Poisson and Normal distributions as competitors. | {
"source": [
"https://stats.stackexchange.com/questions/20834",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/4593/"
]
} |
20,836 | I would like to implement an algorithm for automatic model selection.
I am thinking of doing stepwise regression but anything will do (it has to be based on linear regressions though). My problem is that I am unable to find a methodology, or an open source implementation (I am working in Java). The methodology I have in mind would be something like: (1) calculate the correlation matrix of all the factors, (2) pick the factors that have a low correlation to each other, (3) remove the factors that have a low t-stat, (4) add other factors (still based on the low-correlation factors found in 2), and (5) reiterate several times until some criterion (e.g. AIC) is over a certain threshold or we can't find a larger value. I realize there is an R implementation for this (stepAIC), but I find the code quite hard to understand. Also I have not been able to find articles describing stepwise regression. | I think this approach is mistaken, but perhaps it will be more helpful if I explain why. Wanting to know the best model given some information about a large number of variables is quite understandable. Moreover, it is a situation in which people seem to find themselves regularly. In addition, many textbooks (and courses) on regression cover stepwise selection methods, which implies that they must be legitimate. Unfortunately, however, they are not, and the pairing of this situation and goal is quite difficult to successfully navigate. The following is a list of problems with automated stepwise model selection procedures (attributed to Frank Harrell, and copied from here ): It yields R-squared values that are badly biased to be high. The F and chi-squared tests quoted next to each variable on the printout do not have the claimed distribution. The method yields confidence intervals for effects and predicted values that are falsely narrow; see Altman and Andersen (1989). It yields p-values that do not have the proper meaning, and the proper correction for them is a difficult problem. It gives biased regression coefficients that need shrinkage (the coefficients for remaining variables are too large; see Tibshirani
[1996]). It has severe problems in the presence of collinearity. It is based on methods (e.g., F tests for nested models) that were intended to be used to test prespecified hypotheses. Increasing the sample size does not help very much; see Derksen and Keselman (1992). It allows us to not think about the problem. It uses a lot of paper. The question is, what's so bad about these procedures / why do these problems occur? Most people who have taken a basic regression course are familiar with the concept of regression to the mean , so this is what I use to explain these issues. (Although this may seem off-topic at first, bear with me, I promise it's relevant.) Imagine a high school track coach on the first day of tryouts. Thirty kids show up. These kids have some underlying level of intrinsic ability to which neither the coach nor anyone else, has direct access. As a result, the coach does the only thing he can do, which is have them all run a 100m dash. The times are presumably a measure of their intrinsic ability and are taken as such. However, they are probabilistic; some proportion of how well someone does is based on their actual ability, and some proportion is random. Imagine that the true situation is the following: set.seed(59)
intrinsic_ability = runif(30, min=9, max=10)
time = 31 - 2*intrinsic_ability + rnorm(30, mean=0, sd=.5) The results of the first race are displayed in the following figure along with the coach's comments to the kids. Note that partitioning the kids by their race times leaves overlaps on their intrinsic ability--this fact is crucial. After praising some, and yelling at some others (as coaches tend to do), he has them run again. Here are the results of the second race with the coach's reactions (simulated from the same model above): Notice that their intrinsic ability is identical, but the times bounced around relative to the first race. From the coach's point of view, those he yelled at tended to improve, and those he praised tended to do worse (I adapted this concrete example from the Kahneman quote listed on the wiki page), although actually regression to the mean is a simple mathematical consequence of the fact that the coach is selecting athletes for the team based on a measurement that is partly random. Now, what does this have to do with automated (e.g., stepwise) model selection techniques? Developing and confirming a model based on the same dataset is sometimes called data dredging . Although there is some underlying relationship amongst the variables, and stronger relationships are expected to yield stronger scores (e.g., higher t-statistics), these are random variables, and the realized values contain error. Thus, when you select variables based on having higher (or lower) realized values, they may be such because of their underlying true value, error, or both. If you proceed in this manner, you will be as surprised as the coach was after the second race. This is true whether you select variables based on having high t-statistics, or low intercorrelations. True, using the AIC is better than using p-values, because it penalizes the model for complexity, but the AIC is itself a random variable (if you run a study several times and fit the same model, the AIC will bounce around just like everything else). Unfortunately, this is just a problem intrinsic to the epistemic nature of reality itself. I hope this is helpful. | {
"source": [
"https://stats.stackexchange.com/questions/20836",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/5394/"
]
} |
21,022 | At first I thought the order didn’t matter, but then I read about the gram-schmidt orthogonalization process for calculating multiple regression coefficients, and now I’m having second thoughts. According to the gram-schmidt process, the later an explanatory variable is indexed among the other variables, the smaller its residual vector is because preceding variables' residual vectors are subtracted from it. As a result, the explanatory variable's regression coefficient is also smaller. If that's true, then the residual vector of the variable in question would be larger if it were indexed earlier, since fewer residual vectors would be subtracted from it. This means that the regression coefficient would be larger too. Ok, so I've been asked to clarify my question. So I've posted screenshots from the text that got me confused in the first place. Ok, here goes. My understanding is that there are at least two options to calculate the regression coefficients. The first option is denoted (3.6) in the screenshot below. Here is the second option (I had to use multiple screenshots). Unless I am misreading something (which is definitely possible), it seems that order matters in the second option. Does it matter in the first option? Why or why not? Or is my frame of reference so messed up that this isn't even a valid question? Also, is this all somehow related to Type I Sum of Squares vs Type II Sum of Squares? Thanks so much in advance, I am so confused! | I believe the confusion may be arising from something a bit simpler, but it provides a nice opportunity to review some related matters. Note that the text is not claiming that all of the regression coefficients $\newcommand{\bhat}{\hat{\beta}}\newcommand{\m}{\mathbf}\newcommand{\z}{\m{z}}\bhat_i$ can be calculated via the successive residuals vectors as
$$
\bhat_i \stackrel{?}{=} \frac{\langle \m y, \z_i \rangle}{\|\z_i\|^2}\>,
$$
but rather that only the last one, $\bhat_p$, can be calculated this way! The successive orthogonalization scheme (a form of Gram–Schmidt orthogonalization) is (almost) producing a pair of matrices $\newcommand{\Z}{\m{Z}}\newcommand{\G}{\m{G}}\Z$ and $\G$ such that
$$
\m X = \Z \G \>,
$$
where $\Z$ is $n \times p$ with orthonormal columns and $\G = (g_{ij})$ is $p \times p$ upper triangular. I say "almost" since the algorithm is only specifying $\Z$ up to the norms of the columns, which will not in general be one, but can be made to have unit norm by normalizing the columns and making a corresponding simple adjustment to the coordinate matrix $\G$. Assuming, of course, that $\m X \in \mathbb R^{n \times p}$ has rank $p \leq n$, the unique least squares solution is the vector $\bhat$ that solves the system
$$
\m X^T \m X \bhat = \m X^T \m y \>.
$$ Substituting $\m X = \Z \G$ and using $\Z^T \Z = \m I$ (by construction), we get
$$
\G^T \G \bhat = \G^T \Z^T \m y \> ,
$$
which is equivalent to
$$
\G \bhat = \Z^T \m y \>.
$$ Now, concentrate on the last row of the linear system. The only nonzero element of $\G$ in the last row is $g_{pp}$. So, we get that
$$
g_{pp} \bhat_p = \langle \m y, \z_p \rangle \>.
$$
It is not hard to see (verify this as a check of understanding!) that $g_{pp} = \|\z_p\|$ and so this yields the solution. ( Caveat lector : I've used $\z_i$ already normalized to have unit norm, whereas in the book they have not . This accounts for the fact that the book has a squared norm in the denominator, whereas I only have the norm.) To find all of the regression coefficients, one needs to do a simple backsubstitution step to solve for the individual $\bhat_i$. For example, for row $(p-1)$,
$$
g_{p-1,p-1} \bhat_{p-1} + g_{p-1,p} \bhat_p = \langle \m z_{p-1}, \m y \rangle \>,
$$
and so
$$
\bhat_{p-1} = g_{p-1,p-1}^{-1} \langle \m z_{p-1}, \m y \rangle \> - g_{p-1,p-1}^{-1} g_{p-1,p} \bhat_p .
$$
One can continue this procedure working "backwards" from the last row of the system up to the first, subtracting out weighted sums of the regression coefficients already calculated and then dividing by the leading term $g_{ii}$ to get $\bhat_i$. The point in the section in ESL is that we could reorder the columns of $\m X$ to get a new matrix $\m X^{(r)}$ with the $r$th original column now being the last one. If we then apply Gram–Schmidt procedure on the new matrix, we get a new orthogonalization such that the solution for the original coefficient $\bhat_r$ is found by the simple solution above. This gives us an interpretation for the regression coefficient $\bhat_r$. It is a univariate regression of $\m y$ on the residual vector obtained by "regressing out" the remaining columns of the design matrix from $\m x_r$. General QR decompositions The Gram–Schmidt procedure is but one method of producing a QR decomposition of $\m X$. Indeed, there are many reasons to prefer other algorithmic approaches over the Gram–Schmidt procedure. Householder reflections and Givens rotations provide more numerically stable approaches to this problem. Note that the above development does not change in the general case of QR decomposition. Namely, let
$$
\m X = \m Q \m R \>,
$$
be any QR decomposition of $\m X$. Then, using exactly the same reasoning and algebraic manipulations as above, we have that the least-squares solution $\bhat$ satisfies
$$
\m R^T \m R \bhat = \m R^T \m Q^T \m y \>,
$$
which simplifies to
$$
\m R \bhat = \m Q^T \m y \> .
$$
Since $\m R$ is upper triangular, the same backsubstitution technique works. We first solve for $\bhat_p$ and then work our way backwards from bottom to top. The choice of which QR decomposition algorithm to use generally hinges on controlling numerical instability and, from this perspective, Gram–Schmidt is generally not a competitive approach. This notion of decomposing $\m X$ as an orthogonal matrix times something else can be generalized a little bit further as well to get a very general form for the fitted vector $\hat{\m y}$, but I fear this response has already become too long.
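As a quick numerical check of the interpretation given above (simulated data, arbitrary variable names), the coefficient of the last column in a full least-squares fit equals the univariate regression of $\m y$ on the residual obtained by regressing that column on all of the others: set.seed(1)
n <- 100
X <- cbind(1, matrix(rnorm(n * 3), n, 3))    # intercept plus three predictors
y <- drop(X %*% c(1, 2, -1, 0.5) + rnorm(n))
full <- lm.fit(X, y)$coefficients            # full multiple regression
z    <- lm.fit(X[, 1:3], X[, 4])$residuals   # last column with the others regressed out
c(full[4], sum(y * z) / sum(z^2))            # the two numbers agree The agreement (up to floating-point error) is exactly the $\langle \m y, \z_p \rangle / \|\z_p\|^2$ formula, with $\z_p$ left unnormalized as in the book. | {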
"source": [
"https://stats.stackexchange.com/questions/21022",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/8401/"
]
} |
21,103 | I have to find a 95% C.I. on the median and other percentiles. I don't know how to approach this. I mainly use R as a programming tool. | Here is an illustration on a classical R dataset: > x = faithful$waiting
> bootmed = apply(matrix(sample(x, rep=TRUE, 10^4*length(x)), nrow=10^4), 1, median)
> quantile(bootmed, c(.025, 0.975))
2.5% 97.5%
73.5 77 which gives a (73.5, 77) confidence interval on the median. (Note: corrected version, thanks to John. I used $10^3$ in the nrow earlier, which led to the confusion!)
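The question also asks about other percentiles; the same bootstrap idea carries over by swapping median for the quantile of interest. A small sketch (the 90th percentile is just an arbitrary example): > bootq90 = apply(matrix(sample(x, rep=TRUE, 10^4*length(x)), nrow=10^4), 1, quantile, probs=0.90)
> quantile(bootq90, c(.025, 0.975)) Be aware that percentile bootstrap intervals for extreme quantiles can be unreliable in small samples, so the further you move into the tails, the more cautious the interpretation should be. | {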
"source": [
"https://stats.stackexchange.com/questions/21103",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/4754/"
]
} |
21,104 | I have 365 daily measurements that all have standard errors associated with them. Date | Prediction | Standard Error
-----------------------------------------
Jan-01-2003 | 24.8574 | 10.6407
Jan-02-2003 | 10.8658 | 3.8237
Jan-03-2003 | 12.1917 | 5.7988
Jan-04-2003 | 11.1783 | 4.3016
Jan-05-2003 | 16.713 | 5.3177
etc ... What is the statistically appropriate way of getting the yearly average with a 95% Confidence Interval around it ? I am assuming that the errors must be propagating somehow and need to be accounted for. Google returns mostly information on how to calculate the average or standard deviation of a set of numbers, not a set of numbers with errors. I would also appreciate some type of internet reference so I can refer to it later. | Here is an illustration on a classical R dataset: > x = faithful$waiting
> bootmed = apply(matrix(sample(x, rep=TRUE, 10^4*length(x)), nrow=10^4), 1, median)
> quantile(bootmed, c(.025, 0.975))
2.5% 97.5%
73.5 77 which gives a (73.5, 77) confidence interval on the median. (Note: corrected version, thanks to John. I used $10^3$ in the nrow earlier, which led to the confusion!)
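Note that the illustration above addresses a median, not the yearly average asked about here. As a hedged sketch of one common approach to the original question, assuming the 365 daily errors are independent and roughly normal (a strong assumption for a daily series; autocorrelation would widen the interval), the standard error of the yearly mean is the root sum of the squared daily standard errors divided by the number of days: # 'pred' and 'se' are hypothetical vectors of length 365 holding the
# Prediction and Standard Error columns from the table in the question
yearly_mean <- mean(pred)
yearly_se   <- sqrt(sum(se^2)) / length(pred)
yearly_mean + c(-1.96, 1.96) * yearly_se   # approximate 95% CI This only propagates the stated measurement uncertainty; if you also want to describe day-to-day variability, a bootstrap over days (in the spirit of the code above) is a reasonable complement. | {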
"source": [
"https://stats.stackexchange.com/questions/21104",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/44618/"
]
} |
21,152 | Random forests are considered to be black boxes, but recently I have been thinking about what knowledge can be obtained from a random forest. The most obvious thing is the importance of the variables; in the simplest variant this can be done just by calculating the number of occurrences of a variable. The second thing I was thinking about is interactions. I think that if the number of trees is sufficiently large, then the number of occurrences of pairs of variables can be tested (something like a chi-square test of independence).
The third thing is nonlinearities of variables. My first idea was just to look at a chart of a variable vs. score, but I'm not sure yet whether it makes any sense. Added 23.01.2012 Motivation I want to use this knowledge to improve a logit model. I think (or at least I hope) that it is possible to find interactions and nonlinearities that were overlooked. | Random Forests are hardly a black box. They are based on decision trees, which are very easy to interpret: #Setup a binary classification problem
require(randomForest)
data(iris)
set.seed(1)
dat <- iris
dat$Species <- factor(ifelse(dat$Species=='virginica','virginica','other'))
trainrows <- runif(nrow(dat)) > 0.3
train <- dat[trainrows,]
test <- dat[!trainrows,]
#Build a decision tree
require(rpart)
model.rpart <- rpart(Species~., train) This results in a simple decision tree: > model.rpart
n= 111
node), split, n, loss, yval, (yprob)
* denotes terminal node
1) root 111 35 other (0.68468468 0.31531532)
2) Petal.Length< 4.95 77 3 other (0.96103896 0.03896104) *
3) Petal.Length>=4.95 34 2 virginica (0.05882353 0.94117647) * If Petal.Length < 4.95, this tree classifies the observation as "other." If it's greater than 4.95, it classifies the observation as "virginica." A random forest is simply a collection of many such trees, where each one is trained on a random subset of the data. Each tree then "votes" on the final classification of each observation. model.rf <- randomForest(Species~., train, ntree=25, proximity=TRUE, importance=TRUE, nodesize=5)
> getTree(model.rf, k=1, labelVar=TRUE)
left daughter right daughter split var split point status prediction
1 2 3 Petal.Width 1.70 1 <NA>
2 4 5 Petal.Length 4.95 1 <NA>
3 6 7 Petal.Length 4.95 1 <NA>
4 0 0 <NA> 0.00 -1 other
5 0 0 <NA> 0.00 -1 virginica
6 0 0 <NA> 0.00 -1 other
7 0 0 <NA> 0.00 -1 virginica You can even pull out individual trees from the rf, and look at their structure. The format is slightly different than for rpart models, but you could inspect each tree if you wanted and see how it's modeling the data. Furthermore, no model is truly a black box, because you can examine predicted responses vs actual responses for each variable in the dataset. This is a good idea regardless of what sort of model you are building: library(ggplot2)
pSpecies <- predict(model.rf,test,'vote')[,2]
plotData <- lapply(names(test[,1:4]), function(x){
out <- data.frame(
var = x,
type = c(rep('Actual',nrow(test)),rep('Predicted',nrow(test))),
value = c(test[,x],test[,x]),
species = c(as.numeric(test$Species)-1,pSpecies)
)
out$value <- out$value-min(out$value) #Normalize to [0,1]
out$value <- out$value/max(out$value)
out
})
plotData <- do.call(rbind,plotData)
qplot(value, species, data=plotData, facets = type ~ var, geom='smooth', span = 0.5) I've normalized the variables (sepal and petal length and width) to a 0-1 range. The response is also 0-1, where 0 is other and 1 is virginica. As you can see, the random forest is a good model, even on the test set. Additionally, a random forest will compute various measures of variable importance, which can be very informative: > importance(model.rf, type=1)
MeanDecreaseAccuracy
Sepal.Length 0.28567162
Sepal.Width -0.08584199
Petal.Length 0.64705819
Petal.Width 0.58176828 This table represents how much removing each variable reduces the accuracy of the model. Finally, there are many other plots you can make from a random forest model, to view what's going on in the black box: plot(model.rf)
plot(margin(model.rf))
MDSplot(model.rf, iris$Species, k=5)
plot(outlier(model.rf), type="h", col=c("red", "green", "blue")[as.numeric(dat$Species)]) You can view the help files for each of these functions to get a better idea of what they display. | {
"source": [
"https://stats.stackexchange.com/questions/21152",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1643/"
]
} |
21,153 | Colleagues have asked me for some help with this subject, which I don't really know. They made hypotheses about the role of some latent variables in one study, and a referee asked them to formalize this in SEM. As what they need doesn't seem too difficult, I think I'll give it a shot ... for now, I am just looking for a good introduction to the subject! Google wasn't really my friend on this. PS: I read Structural Equation Modeling
With the sem Package in R by John Fox, and this text by the same author. I think this can be sufficient for my purpose, but any other references are welcome anyway. | I would go for some papers by Muthén and Muthén, who authored the Mplus software, especially Muthén, B.O. (1984). A general structural equation model with dichotomous, ordered categorical and continuous latent indicators. Psychometrika, 49, 115–132. Muthén, B., du Toit, S.H.C. & Spisic, D. (1997). Robust inference using weighted least squares and quadratic estimating equations in latent variable modeling with categorical and continuous outcomes. Unpublished technical report. (Available as PDFs from here: Weighted Least Squares for Categorical Variables.) There is a lot more to see on the Mplus wiki, e.g. WLS vs. WLSMV results with ordinal data; the two authors are very responsive and always provide detailed answers with accompanying references when possible. Some comparisons of robust weighted least squares vs. ML-based methods of analyzing polychoric or polyserial correlation matrices can be found in: Lei, P.W. (2009). Evaluating estimation methods for ordinal data in
structural equation modeling . Quality & Quantity , 43, 495–507. For other mathematical development, you can have a look at: Jöreskog, K.G. (1994) On the estimation of polychoric correlations
and their asymptotic covariance matrix . Psychometrika , 59(3),
381-389. (See also S-Y Lee's papers.) Sophia Rabe-Hesketh and her colleagues also have good papers on SEM. Some relevant references include: Rabe-Hesketh, S., Skrondal, A., and Pickles, A. (2004b). Generalized multilevel structural equation modeling. Psychometrika, 69, 167–190. Skrondal, A. and Rabe-Hesketh, S. (2004). Generalized Latent Variable Modeling: Multilevel, Longitudinal, and Structural Equation Models. Chapman & Hall/CRC, Boca Raton, FL. (This is the reference textbook for understanding/working with Stata gllamm.) Other good resources are probably listed on John Uebersax's excellent website, in particular Introduction to the Tetrachoric and Polychoric Correlation Coefficients. Given that you are also interested in applied work, I would suggest taking a look at OpenMx (yet another software package for modeling covariance structure) and lavaan (which aims at delivering output similar to those of EQS or Mplus), both available under R.
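Since lavaan is mentioned as a practical entry point, here is a minimal, hedged sketch of what fitting a small SEM with it can look like. The model and variable names are purely illustrative (they come from lavaan's built-in HolzingerSwineford1939 example data, not from your colleagues' study), so treat this as a template rather than a prescription: library(lavaan)
# =~ defines latent variables from observed indicators; ~ defines regressions among them
model <- '
  visual  =~ x1 + x2 + x3
  textual =~ x4 + x5 + x6
  speed   =~ x7 + x8 + x9
  speed   ~  visual + textual
'
fit <- sem(model, data = HolzingerSwineford1939)
summary(fit, fit.measures = TRUE, standardized = TRUE) The model syntax maps quite directly onto the kind of path-diagram hypotheses a referee typically asks to see formalized, which is part of why lavaan is often recommended as a starting point. | {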
"source": [
"https://stats.stackexchange.com/questions/21153",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/8076/"
]
} |
21,222 | What are the best (recommended) pre-processing steps before performing k-means? | If your variables are of incomparable units (e.g. height in cm and weight in kg) then you should standardize variables, of course. Even if variables are of the same units but show quite different variances it is still a good idea to standardize before K-means. You see, K-means clustering is "isotropic" in all directions of space and therefore tends to produce more or less round (rather than elongated) clusters. In this situation leaving variances unequal is equivalent to putting more weight on variables with smaller variance, so clusters will tend to be separated along variables with greater variance. Another thing worth remembering is that K-means clustering results are potentially sensitive to the order of objects in the data set $^1$ . A justified practice would be to run the analysis several times, randomizing object order; then average the cluster centres of the correspondent/same clusters between those runs $^2$ and input the centres as initial ones for one final run of the analysis. Here is some general reasoning about the issue of standardizing features in cluster or other multivariate analysis. $^1$ Specifically, (1) some methods of centre initialization are sensitive to case order; (2) even when the initialization method isn't sensitive, results might sometimes depend on the order in which the initial centres are introduced to the program (in particular, when there are tied, equal distances within the data); (3) the so-called running-means version of the k-means algorithm is naturally sensitive to case order (in this version - which is not often used apart from maybe online clustering - recalculation of centroids takes place after each individual case is re-assigned to another cluster). $^2$ In practice, which clusters from different runs correspond is often immediately seen by their relative closeness. When not easily seen, correspondence can be established by hierarchical clustering done among the centres or by a matching algorithm such as the Hungarian algorithm. But, to remark, if the correspondence is so vague that it almost vanishes, then the data either had no cluster structure detectable by K-means, or K is very wrong.
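As a minimal sketch of the standardization step in R (the data set and the choice of three clusters are arbitrary assumptions for illustration, not part of the answer above): # Standardize columns to zero mean and unit variance before clustering
X <- scale(iris[, 1:4])
set.seed(1)
fit <- kmeans(X, centers = 3, nstart = 25)
table(fit$cluster) Using many random starts (nstart) is a simple, commonly used way to blunt the initialization/order sensitivity discussed in the footnotes. | {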
"source": [
"https://stats.stackexchange.com/questions/21222",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/8112/"
]
} |
21,265 | I am currently working to build a model using a multiple linear regression. After fiddling around with my model, I am unsure how to best determine which variables to keep and which to remove. My model started with 10 predictors for the DV. When using all 10 predictors, four were considered significant. If I remove only some of the obviously-incorrect predictors, some of my predictors that were not initially significant become significant. Which leads me to my question: How does one go about determining which predictors to include in their model? It seemed to me you should run the model once with all predictors, remove those that are not significant, and then rerun. But if removing only some of those predictors makes others significant, I am left wondering if I am taking the wrong approach to all this. I believe that this thread is similar to my question, but I am unsure I am interpreting the discussion correctly. Perhaps this is more of an experimental design topic, but maybe someone has some experience they can share. | Based on your reaction to my comment: You are looking for prediction. Thus, you should not really rely on (in)significance of the coefficients. You would be better to Pick a criterion that describes your prediction needs best (e.g.
misclassification rate, AUC of ROC, some form of these with
weights,...) For each model of interest , evaluate this criterion. This can be
done e.g. by providing a validation set (if you're lucky or rich),
through crossvalidation (typically tenfold), or whatever other
options your criterion of interest allows. If possible also find an
estimate of the SE of the criterion for each model (e.g. by using the
values over the different folds in crossvalidation) Now you can pick the model with the best value of the criterion,
though it is typically advised to pick the most parsimonious model
(least variables) that is within one SE of the best value. W.r.t. each model of interest: herein lies quite a catch. With 10 potential predictors, that is a truckload of potential models. If you've got the time or the processors for this (or if your data is small enough so that models get fit and evaluated fast enough): have a ball. If not, you can go about this by educated guesses, forward or backward modelling (but using the criterion instead of significance), or better yet: use some algorithm that picks a reasonable set of models. One algorithm that does this is penalized regression, in particular Lasso regression. If you're using R, just plug in the package glmnet and you're about ready to go.
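As a hedged illustration of that last suggestion (the data here are simulated, and the 10 predictors are stand-ins for whatever your real candidates are): library(glmnet)
set.seed(1)
n <- 200; p <- 10
x <- matrix(rnorm(n * p), n, p)          # 10 candidate predictors
y <- x[, 1] + 0.5 * x[, 2] + rnorm(n)    # only two of them actually matter
cvfit <- cv.glmnet(x, y, alpha = 1)      # alpha = 1 is the lasso penalty
coef(cvfit, s = "lambda.1se")            # coefficients at the one-SE-rule lambda cv.glmnet performs the tenfold cross-validation and the "within one SE of the best" selection described above for you, which is exactly why it pairs so well with this advice. | {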
"source": [
"https://stats.stackexchange.com/questions/21265",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/8590/"
]
} |
21,346 | I have been working in R for a bit and have been faced with things like PCA, SVD, QR decompositions and many such linear algebra results (when inspecting estimating weighted regressions and such) so I wanted to know if anyone has a recommendation on a good comprehensive linear algebra book which is not too theoretical but is mathematically rigorous and covers all of these such topics. | The "big three" that I have used/heard of are: Gentle, Matrix Algebra: Theory, Computations, and Applications in Statistics . (Amazon link) . Searle, Matrix Algebra Useful for Statistics . (Amazon link) . Harville, Matrix Algebra From a Statistician's Perspective . (Amazon link) . I have used Gentle and Harville and found both to be very helpful and quite manageable. | {
"source": [
"https://stats.stackexchange.com/questions/21346",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/6355/"
]
} |
21,419 | I have two populations (men and women), each containing $1000$ samples. For each sample I have two properties A & B (first year grade point average, and SAT score). I have used a t-test separately for A & B: both found significant differences between the two groups; A with $p=0.008$ and B with $p=0.002$. Is it okay to claim that the property B is better discriminated (more significant) then the property A? Or is it that a t-test is just a yes or no (significant or not significant) measure? Update : according to the comments here and to what I have read on wikipedia , I think that the answer should be: drop the meaningless p-value and report your effect size . Any thoughts? | Many people would argue that a $p$-value can either be significant ($p< \alpha$) or not, and so it does not (ever) make sense to compare two $p$-values between each other. This is wrong; in some cases it does. In your particular case there is absolutely no doubt that you can directly compare the $p$-values. If the sample size is fixed ($n=1000$), then $p$-values are monotonically related to $t$-values, which are in turn monotonically related to the effect size as measured by Cohen's $d$. Specifically, $d=2t/\sqrt{n}$. This means that your $p$-values are in one-to-one correspondence with the effect size, and so you can be sure that if the $p$-value for property A is larger than for property B, then the effect size for A is smaller than for property B. I believe this answers your question. Several additional points: This is only true given that the sample size $n$ is fixed. If you get $p=0.008$ for property A in one experiment with one sample size, and $p=0.002$ for property B in another experiment with another sample size, it is more difficult to compare them. If the question is specifically whether A or B are better "discriminated" in the population (i.e.: how well can you predict gender by looking at the A or B values?), then you should be looking at effect size. In the simple cases, knowing $p$ and $n$ is enough to compute the effect size. If the question is more vague: what experiment provides more "evidence" against the null? (this can be meaningful if e.g. A=B) -- then the issue becomes complicated and contentious, but I would say that the $p$-value by definition is a scalar summary of the evidence against the null, so the lower the $p$-value, the stronger the evidence, even if the sample sizes are different. Saying that the effect size for B is larger than for A, does not mean that it is significantly larger. You need some direct comparison between A and B to make such a claim. It's always a good idea to report (and interpret) effect sizes and confidence intervals in addition to $p$-values. | {
"source": [
"https://stats.stackexchange.com/questions/21419",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/6637/"
]
} |
21,565 | I see a similar constrained regression here: Constrained linear regression through a specified point but my requirement is slightly different. I need the coefficients to add up to 1. Specifically I am regressing the returns of 1 foreign exchange series against 3 other foreign exchange series, so that investors may replace their exposure to that series with a combination of exposure to the other 3, but their cash outlay must not change, and preferably (but this is not mandatory), the coefficients should be positive. I have tried to search for constrained regression in R and Google but with little luck. | If I understand correctly, your model is $$ Y = \pi_1 X_1 + \pi_2 X_2 + \pi_3 X_3 + \varepsilon, $$ with $\sum_k \pi_k = 1$ and $\pi_k\ge0$ . You need to minimize $$\sum_i \left(Y_i - (\pi_1 X_{i1} + \pi_2 X_{i2} + \pi_3 X_{i3}) \right)^2 $$ subject to these constraints. This kind of problem is known as quadratic programming . Here a few line of R codes giving a possible solution ( $X_1, X_2, X_3$ are the columns of X , the true values of the $\pi_k$ are 0.2, 0.3 and 0.5). library("quadprog");
X <- matrix(runif(300), ncol=3)
Y <- X %*% c(0.2,0.3,0.5) + rnorm(100, sd=0.2)
Rinv <- solve(chol(t(X) %*% X));
C <- cbind(rep(1,3), diag(3))
b <- c(1,rep(0,3))
d <- t(Y) %*% X
solve.QP(Dmat = Rinv, factorized = TRUE, dvec = d, Amat = C, bvec = b, meq = 1)
$solution
[1] 0.2049587 0.3098867 0.4851546
$value
[1] -16.0402
$unconstrained.solution
[1] 0.2295507 0.3217405 0.5002459
$iterations
[1] 2 0
$Lagrangian
[1] 1.454517 0.000000 0.000000 0.000000
$iact
[1] 1 I don’t know any results on the asymptotic distribution of the estimators, etc. If someone has pointers, I’ll be curious to get some (if you wish I can open a new question on this). | {
"source": [
"https://stats.stackexchange.com/questions/21565",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/4705/"
]
} |
21,572 | I want to generate the plot described in the book "The Elements of
Statistical Learning: Data Mining, Inference, and Prediction. Second Edition" by Trevor Hastie,
Robert Tibshirani & Jerome Friedman (the ElemStatLearn R package accompanies it). The plot is the 15-nearest-neighbour classification figure. I am wondering how I can produce this exact graph in R; in particular, note the grid of points and the calculation needed to show the decision boundary. | To reproduce this figure, you need to have the ElemStatLearn package installed on your system. The artificial dataset was generated with mixture.example() as pointed out by @StasK. library(ElemStatLearn)
require(class)
x <- mixture.example$x
g <- mixture.example$y
xnew <- mixture.example$xnew
mod15 <- knn(x, xnew, g, k=15, prob=TRUE)
prob <- attr(mod15, "prob")
prob <- ifelse(mod15=="1", prob, 1-prob)
px1 <- mixture.example$px1
px2 <- mixture.example$px2
prob15 <- matrix(prob, length(px1), length(px2))
par(mar=rep(2,4))
contour(px1, px2, prob15, levels=0.5, labels="", xlab="", ylab="", main=
"15-nearest neighbour", axes=FALSE)
points(x, col=ifelse(g==1, "coral", "cornflowerblue"))
gd <- expand.grid(x=px1, y=px2)
points(gd, pch=".", cex=1.2, col=ifelse(prob15>0.5, "coral", "cornflowerblue"))
box() All but the last three commands come from the on-line help for mixture.example. Note that we used the fact that expand.grid will arrange its output by varying x first, which further allows indexing (by column) the colors in the prob15 matrix (of dimension 69x99), which holds the proportion of votes for the winning class at each lattice coordinate (px1, px2).
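If you also want a 1-nearest-neighbour counterpart (the book shows one as well), the same recipe works with k=1; nothing changes except the knn call and the title: mod1 <- knn(x, xnew, g, k=1, prob=TRUE)
prob1 <- attr(mod1, "prob")
prob1 <- ifelse(mod1=="1", prob1, 1-prob1)
prob1 <- matrix(prob1, length(px1), length(px2))
contour(px1, px2, prob1, levels=0.5, labels="", xlab="", ylab="", main="1-nearest neighbour", axes=FALSE)
points(x, col=ifelse(g==1, "coral", "cornflowerblue"))
points(gd, pch=".", cex=1.2, col=ifelse(prob1>0.5, "coral", "cornflowerblue"))
box() This gives a quick visual check of how much rougher the decision boundary becomes as k shrinks. | {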
"source": [
"https://stats.stackexchange.com/questions/21572",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/8065/"
]
} |
21,760 | I am trying to expand my knowledge of statistics. I come from a physical sciences background with a "recipe based" approach to statistical testing, where we say is it continuous, is it normally distributed -- OLS regression . In my reading I have come across the terms: random effects model, fixed effects model, marginal model. My questions are: In very simple terms, what are they? What are the differences between them? Are any of them synonyms? Where do the traditional tests like OLS regression, ANOVA and ANCOVA fall in this classification? Just trying to decide where to go next with the self study. | This question has been partially discussed at this site as below, and opinions seem mixed. What is the difference between fixed effect, random effect and mixed effect models? What is the mathematical difference between random- and fixed-effects? Concepts behind fixed/random effects models All terms are generally related to longitudinal / panel / clustered / hierarchical data and repeated measures (in the format of advanced regression and ANOVA), but have multiple meanings in different context. I would like to answer the question in formulas based on my knowledge. Fixed-effects model In biostatistics, fixed-effects, denoted as $\color{red}{\boldsymbol\beta}$ in Equation (*) below, usually comes together with random effects. But the fixed-effects model is also defined to assume that the observations are independent, like cross-sectional setting, as in Longitudinal Data Analysis of Hedeker and Gibbons (2006). In econometrics, the fixed-effects model can be written as
$$ y_{ij}=\boldsymbol x_{ij}^{'}\boldsymbol\beta+\color{red}{u_i}+\epsilon_{ij}$$
where $\color{red}{u_i}$ is a fixed (not random) intercept for each subject ($i$), or we can also have a fixed effect $u_j$ for each repeated measurement ($j$); $\boldsymbol x_{ij}$ denotes covariates. In meta-analysis, the fixed-effect model assumes the underlying effect is the same across all studies (e.g. Mantel and Haenszel, 1959). Random-effects model In biostatistics, the random-effects model (Laird and Ware, 1982) can be written as
$$\tag{*} y_{ij}=\boldsymbol x_{ij}^{'}\color{red}{\boldsymbol\beta}+\boldsymbol z_{ij}^{'}\color{blue}{\boldsymbol u_i}+e_{ij}$$
where $\color{blue}{\boldsymbol u_i}$ is assumed to follow a distribution. $\boldsymbol x_{ij}$ denotes covariates for fixed effects, and $\boldsymbol z_{ij}$ denotes covariates for random effects. In econometrics, the random-effects model may only refer to random intercept model as in biostatistics, i.e. $\boldsymbol z_{ij}^{'}=1$ and $\boldsymbol u_i$ is a scalar. In meta-analysis, the random-effect model assumes heterogeneous effects across studies (DerSimonian and Laird, 1986). Marginal model Marginal model is generally compared to conditional model (random-effects model), and the former focuses on the population mean (take linear model for an example) $$E(y_{ij})=\boldsymbol x_{ij}^{'}\boldsymbol\beta,$$ while the latter deals with the conditional mean $$E(y_{ij}|\boldsymbol u_i)=\boldsymbol x_{ij}^{'}\boldsymbol\beta + \boldsymbol z_{ij}^{'}\boldsymbol u_i.$$ The interpretation and scale of the regression coefficients between marginal model and random-effects model would be different for nonlinear models (e.g. logistic regression). Let $h(E(y_{ij}|\boldsymbol u_i))=\boldsymbol x_{ij}^{'}\boldsymbol\beta + \boldsymbol z_{ij}^{'}\boldsymbol u_i$, then $$E(y_{ij})=E(E(y_{ij}|\boldsymbol u_i))=E(h^{-1}(\boldsymbol x_{ij}^{'}\boldsymbol\beta + \boldsymbol z_{ij}^{'}\boldsymbol u_i))\neq h^{-1}(\boldsymbol x_{ij}^{'}\boldsymbol\beta),$$ unless trivially the link function $h$ is the identity link (linear model), or $u_i=0$ (no random-effects). Good examples include generalized estimating equations (GEE; Zeger, Liang and Albert, 1988) and marginalized multilevel models (Heagerty and Zeger, 2000). | {
"source": [
"https://stats.stackexchange.com/questions/21760",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/4054/"
]
} |
21,822 | From StatSoft, Inc. (2013), Electronic Statistics Textbook , "Naive Bayes Classifier" : To demonstrate the concept of Naïve Bayes Classification, consider the
example displayed in the illustration above. As indicated, the objects
can be classified as either GREEN or RED. My task is to classify new
cases as they arrive, i.e., decide to which class label they belong,
based on the currently exiting objects. Since there are twice as many GREEN objects as RED, it is reasonable
to believe that a new case (which hasn't been observed yet) is twice
as likely to have membership GREEN rather than RED. In the Bayesian
analysis, this belief is known as the prior probability. Prior
probabilities are based on previous experience, in this case the
percentage of GREEN and RED objects, and often used to predict
outcomes before they actually happen. Thus, we can write: Since there is a total of 60 objects, 40 of which are GREEN and 20
RED, our prior probabilities for class membership are: Having formulated our prior probability, we are now ready to classify
a new object (WHITE circle). Since the objects are well clustered, it
is reasonable to assume that the more GREEN (or RED) objects in the
vicinity of X, the more likely that the new cases belong to that
particular color. To measure this likelihood, we draw a circle around
X which encompasses a number (to be chosen a priori) of points
irrespective of their class labels. Then we calculate the number of
points in the circle belonging to each class label. From this we
calculate the likelihood: From the illustration above, it is clear that Likelihood of X given
GREEN is smaller than Likelihood of X given RED, since the circle
encompasses 1 GREEN object and 3 RED ones. Thus: Although the prior probabilities indicate that X may belong to GREEN
(given that there are twice as many GREEN compared to RED) the
likelihood indicates otherwise; that the class membership of X is RED
(given that there are more RED objects in the vicinity of X than
GREEN). In the Bayesian analysis, the final classification is produced
by combining both sources of information, i.e., the prior and the
likelihood, to form a posterior probability using the so-called Bayes'
rule (named after Rev. Thomas Bayes 1702-1761). Finally, we classify X as RED since its class membership achieves the
largest posterior probability. This is where the difficulty of my maths understanding comes in. p(Cj | x1,x2,x...,xd) is the posterior probability of class membership, i.e., the probability that X belongs to Cj but why write it like this? Calculating the likelihood? Posterior Probability? I never took math, but my understanding of naive bayes is fine I think just when it comes to these decomposed methods confuses me. Could some one help with visualizing these methods and how to write the math out in an understandable way? | I'm going to run through the whole Naive Bayes process from scratch, since it's not totally clear to me where you're getting hung up. We want to find the probability that a new example belongs to each class: $P(class|feature_1, feature_2,..., feature_n$ ). We then compute that probability for each class, and pick the most likely class. The problem is that we usually don't have those probabilities. However, Bayes' Theorem lets us rewrite that equation in a more tractable form. Bayes' Thereom is simply $$P(A|B)=\frac{P(B|A) \cdot P(A)}{P(B)}$$ or in terms of our problem: $$P(class|features)=\frac{P(features|class) \cdot P(class)}{P(features)}$$ We can simplify this by removing $P(features)$ . We can do this because we're going to rank $P(class|features)$ for each value of $class$ ; $P(features)$ will be the same every time--it doesn't depend on $class$ . This leaves us with $$ P(class|features) \propto P(features|class) \cdot P(class)$$ The prior probabilities, $P(class)$ , can be calculated as you described in your question. That leaves $P(features|class)$ . We want to eliminate the massive, and probably very sparse, joint probability $P(feature_1, feature_2, ..., feature_n|class)$ . If each feature is independent , then $$P(feature_1, feature_2, ..., feature_n|class) = \prod_i{P(feature_i|class})$$ Even if they're not actually independent, we can assume they are (that's the "naive" part of naive Bayes). I personally think it's easier to think this through for discrete (i.e., categorical) variables, so let's use a slightly different version of your example. Here, I've divided each feature dimension into two categorical variables. . Example: Training the classifer To train the classifer, we count up various subsets of points and use them to compute the prior and conditional probabilities. The priors are trivial: There are sixty total points, forty are green while twenty are red. Thus $$P(class=green)=\frac{40}{60} = 2/3 \text{ and } P(class=red)=\frac{20}{60}=1/3$$ Next, we have to compute the conditional probabilities of each feature-value given a class. Here, there are two features: $feature_1$ and $feature_2$ , each of which takes one of two values (A or B for one, X or Y for the other). We therefore need to know the following: $P(feature_1=A|class=red)$ $P(feature_1=B|class=red)$ $P(feature_1=A|class=green)$ $P(feature_1=B|class=green)$ $P(feature_2=X|class=red)$ $P(feature_2=Y|class=red)$ $P(feature_2=X|class=green)$ $P(feature_2=Y|class=green)$ (in case it's not obvious, this is all possible pairs of feature-value and class) These are easy to compute by counting and dividing too. For example, for $P(feature_1=A|class=red)$ , we look only at the red points and count how many of them are in the 'A' region for $feature_1$ . There are twenty red points, all of which are in the 'A' region, so $P(feature_1=A|class=red)=20/20=1$ . None of the red points are in the B region, so $P(feature_1|class=red)=0/20=0$ . Next, we do the same, but consider only the green points. 
This gives us $P(feature_1=A|class=green)=5/40=1/8$ and $P(feature_1=B|class=green)=35/40=7/8$ . We repeat that process for $feature_2$ , to round out the probability table. Assuming I've counted correctly, we get $P(feature_1=A|class=red)=1$ $P(feature_1=B|class=red)=0$ $P(feature_1=A|class=green)=1/8$ $P(feature_1=B|class=green)=7/8$ $P(feature_2=X|class=red)=3/10$ $P(feature_2=Y|class=red)=7/10$ $P(feature_2=X|class=green)=8/10$ $P(feature_2=Y|class=green)=2/10$ Those ten probabilities (the two priors plus the eight conditionals) are our model Classifying a New Example Let's classify the white point from your example. It's in the "A" region for $feature_1$ and the "Y" region for $feature_2$ . We want to find the probability that it's in each class. Let's start with red. Using the formula above, we know that: $$P(class=red|example) \propto P(class=red) \cdot P(feature_1=A|class=red) \cdot P(feature_2=Y|class=red)$$ Subbing in the probabilities from the table, we get $$P(class=red|example) \propto \frac{1}{3} \cdot 1 \cdot \frac{7}{10} = \frac{7}{30}$$ We then do the same for green: $$P(class=green|example) \propto P(class=green) \cdot P(feature_1=A|class=green) \cdot P(feature_2=Y|class=green) $$ Subbing in those values gets us 0 ( $2/3 \cdot 0 \cdot 2/10$ ). Finally, we look to see which class gave us the highest probability. In this case, it's clearly the red class, so that's where we assign the point. Notes In your original example, the features are continuous. In that case, you need to find some way of assigning P(feature=value|class) for each class. You might consider fitting then to a known probability distribution (e.g., a Gaussian). During training, you would find the mean and variance for each class along each feature dimension. To classify a point, you'd find $P(feature=value|class)$ by plugging in the appropriate mean and variance for each class. Other distributions might be more appropriate, depending on the particulars of your data, but a Gaussian would be a decent starting point. I'm not too familiar with the DARPA data set, but you'd do essentially the same thing. You'll probably end up computing something like P(attack=TRUE|service=finger), P(attack=false|service=finger), P(attack=TRUE|service=ftp), etc. and then combine them in the same way as the example. As a side note, part of the trick here is to come up with good features. Source IP , for example, is probably going to be hopelessly sparse--you'll probably only have one or two examples for a given IP. You might do much better if you geolocated the IP and use "Source_in_same_building_as_dest (true/false)" or something as a feature instead. I hope that helps more. If anything needs clarification, I'd be happy to try again! | {
"source": [
"https://stats.stackexchange.com/questions/21822",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/6875/"
]
} |
21,896 | I have a question about something that my statistics teacher said about the following problem: There are two hospitals named Mercy and Hope in your town. You must choose one of these in which to undergo an operation. You decide to base your decision on the success of their surgical teams. Fortunately, under the new health plan, the hospitals give data on the success of their operations, broken down into five broad categories of operations. Suppose you get the following data for the two hospitals: Mercy Hospital
Type A B C D E All
Operations 359 1836 299 2086 149 4729
Successful 292 1449 179 434 13 2366
Hope Hospital
Type A B C D E All
Operations 88 514 222 86 45 955
Successful 70 391 113 12 2 588 You notice that, in all types of operations, Mercy has a higher success rate than Hope, yet Hope has the highest overall success rate. Which hospital would you choose and why (choose two answers)? A) Mercy; since I would go in for a specific operation, I want the hospital that has the best success rate for that operation. B) Hope; since they do fewer operations in all categories, they are not "operation-happy" like Mercy. C) Hope; this is an example of Simpson's paradox and we should always chose the "obvious" conclusion. D) Mercy; looking at column E, Mercy clearly does more difficult surgeries and so is probably a better hospital. E) Hope; it has the better overall success rate. F) Mercy; this is an example of Simpson's paradox and we should always chose the opposite of the "obvious" conclusion. My question isn't even about the occurrence of Simpson's paradox in this situation. My question is simply about the fact that my professor insists that A) and D) are the right answers instead of A) and F). He says, "Because the success rate is so low for Type E surgeries,we can
conclude that they are difficult and not just uncommon. Hence, Mercy
probably has better equipment/doctors when compared to Hope." I don't understand how he could imply on a statistical basis that he can tell that Mercy does "more difficult surgeries". It is obvious that Mercy has a better success rate at type E surgeries, but why does that mean they do "more difficult surgeries"? I think I am being screwed over by the wording of this problem and the professor isn't budging. Can someone please explain why I am wrong or how I can explain this to the professor? | I think A and E aren't a good combination, because A says you should pick Mercy and E says you should pick Hope. But let's examine the line of reasoning in D in further detail, since that seems to be the confusion. The probability of success for the surgeries follows the same ordering at both hospitals, with the A type being most likely to be successful and the E type being the least likely. If we collapse over (i.e., ignore) the hospitals, we can see that the marginal probability of success for the surgeries is: Type A B C D E All
Prob .81 .78 .56 .21 .08 .52 Because E is much less likely to be successful, it is reasonable to imagine that it is more difficult (although in the real world, other possibilities exist as well). We can extend that line of thinking to the other four types also. Now lets look at what proportion of each hospital's total surgeries are of each type: Type A B C D E
Mercy .08 .39 .06 .44 .03
Hope .09 .54 .23 .09 .05 What we notice here is that Hope tends to do more of the easier surgeries A-C (and especially B & C), and fewer of the harder surgeries like D. E is pretty uncommon in both hospitals, but, for what it's worth, Hope actually does a higher percentage. Nonetheless, the Simpson's Paradox effect is going to mostly be driven by B-D here (not actually column E as answer choice D suggested). Simpson's Paradox occurs because the surgeries vary in difficulty (in general) and also because the N's differ. It is the differing base rates of the different types of surgeries that makes this counter-intuitive. What is happening would be easy to see if both hospitals did exactly the same number of each type of surgery. We can do that by simply calculating the success probabilities and multiplying by 100; this adjusts for the different frequencies: Type A B C D E All
Mercy 81 79 60 21 09 250
Hope 80 76 51 14 04 225 Now, because both hospitals did 100 of each surgery (500 total), the answer is obvious: Mercy is the better hospital.
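For readers who want to verify the arithmetic, here is a small sketch that reproduces the paradox directly from the counts given in the question (object names are arbitrary): ops_mercy <- c(A=359, B=1836, C=299, D=2086, E=149)
suc_mercy <- c(A=292, B=1449, C=179, D=434, E=13)
ops_hope  <- c(A=88, B=514, C=222, D=86, E=45)
suc_hope  <- c(A=70, B=391, C=113, D=12, E=2)
round(rbind(Mercy = suc_mercy/ops_mercy, Hope = suc_hope/ops_hope), 2)        # Mercy wins every type
c(Mercy = sum(suc_mercy)/sum(ops_mercy), Hope = sum(suc_hope)/sum(ops_hope))  # yet Hope wins overall
round(rbind(Mercy = ops_mercy/sum(ops_mercy), Hope = ops_hope/sum(ops_hope)), 2)  # because of the case mix The last table is the "proportion of each hospital's surgeries by type" shown above, which is where the differing base rates, and hence the paradox, come from. | {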
"source": [
"https://stats.stackexchange.com/questions/21896",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/8803/"
]
} |