Columns: source_id (int64, 1 to 4.64M), question (string, 0 to 28.4k chars), response (string, 0 to 28.8k chars), metadata (dict).
368,014
Is it acceptable to write: $P(T>c|T=k) = P(k>c|T=k)$ ? I think not, because k isn’t a random variable, but I have seen this written this way. If C is a rv, does this change things? $P(T>C|T=k) = P(k>C|T=k)$ ?
Penalized regression estimators such as LASSO and ridge are said to correspond to Bayesian estimators with certain priors. Yes, that is correct. Whenever we have an optimisation problem involving maximisation of the log-likelihood function plus a penalty function on the parameters, this is mathematically equivalent to posterior maximisation where the penalty function is taken to be the logarithm of a prior kernel. $^\dagger$ To see this, suppose we have a penalty function $w$ using a tuning parameter $\lambda$ . The objective function in these cases can be written as: $$\begin{equation} \begin{aligned} H_\mathbf{x}(\theta|\lambda) &= \ell_\mathbf{x}(\theta) - w(\theta|\lambda) \\[6pt] &= \ln \Big( L_\mathbf{x}(\theta) \cdot \exp ( -w(\theta|\lambda)) \Big) \\[6pt] &= \ln \Bigg( \frac{L_\mathbf{x}(\theta) \pi (\theta|\lambda)}{\int L_\mathbf{x}(\theta) \pi (\theta|\lambda) d\theta} \Bigg) + \text{const} \\[6pt] &= \ln \pi(\theta|\mathbf{x}, \lambda) + \text{const}, \\[6pt] \end{aligned} \end{equation}$$ where we use the prior $\pi(\theta|\lambda) \propto \exp ( -w(\theta|\lambda))$ . Observe here that the tuning parameter in the optimisation is treated as a fixed hyperparameter in the prior distribution. If you are undertaking classical optimisation with a fixed tuning parameter, this is equivalent to undertaking a Bayesian optimisation with a fixed hyper-parameter. For LASSO and Ridge regression the penalty functions and corresponding prior-equivalents are: $$\begin{equation} \begin{aligned} \text{LASSO Regression} & & \pi(\theta|\lambda) &= \prod_{k=1}^m \text{Laplace} \Big( 0, \frac{1}{\lambda} \Big) = \prod_{k=1}^m \frac{\lambda}{2} \cdot \exp ( -\lambda |\theta_k| ), \\[6pt] \text{Ridge Regression} & & \pi(\theta|\lambda) &= \prod_{k=1}^m \text{Normal} \Big( 0, \frac{1}{2\lambda} \Big) = \prod_{k=1}^m \sqrt{\lambda/\pi} \cdot \exp ( -\lambda \theta_k^2 ). \\[6pt] \end{aligned} \end{equation}$$ The former method penalises the regression coefficients according to their absolute magnitude, which is the equivalent of imposing a Laplace prior located at zero. The latter method penalises the regression coefficients according to their squared magnitude, which is the equivalent of imposing a normal prior located at zero. Now a frequentist would optimize the tuning parameter by cross validation. Is there a Bayesian equivalent of doing so, and is it used at all? So long as the frequentist method can be posed as an optimisation problem (rather than say, including a hypothesis test, or something like this) there will be a Bayesian analogy using an equivalent prior. Just as the frequentists may treat the tuning parameter $\lambda$ as unknown and estimate this from the data, the Bayesian may similarly treat the hyperparameter $\lambda$ as unknown. In a full Bayesian analysis this would involve giving the hyperparameter its own prior and finding the posterior maximum under this prior, which would be analogous to maximising the following objective function: $$\begin{equation} \begin{aligned} H_\mathbf{x}(\theta, \lambda) &= \ell_\mathbf{x}(\theta) - w(\theta|\lambda) - h(\lambda) \\[6pt] &= \ln \Big( L_\mathbf{x}(\theta) \cdot \exp ( -w(\theta|\lambda)) \cdot \exp ( -h(\lambda)) \Big) \\[6pt] &= \ln \Bigg( \frac{L_\mathbf{x}(\theta) \pi (\theta|\lambda) \pi (\lambda)}{\int L_\mathbf{x}(\theta) \pi (\theta|\lambda) \pi (\lambda) d\theta} \Bigg) + \text{const} \\[6pt] &= \ln \pi(\theta, \lambda|\mathbf{x}) + \text{const}. 
\\[6pt] \end{aligned} \end{equation}$$ This method is indeed used in Bayesian analysis in cases where the analyst is not comfortable choosing a specific hyperparameter for their prior, and seeks to make the prior more diffuse by treating it as unknown and giving it a distribution. (Note that this is just an implicit way of giving a more diffuse prior to the parameter of interest $\theta$ .) (Comment from statslearner2 below) I'm looking for numerical equivalent MAP estimates. For instance, for a fixed penalty Ridge there is a gaussian prior that will give me the MAP estimate exactly equal the ridge estimate. Now, for k-fold CV ridge, what is the hyper-prior that would give me the MAP estimate which is similar to the CV-ridge estimate? Before proceeding to look at $K$ -fold cross-validation, it is first worth noting that, mathematically, the maximum a posteriori (MAP) method is simply an optimisation of a function of the parameter $\theta$ and the data $\mathbf{x}$ . If you are willing to allow improper priors then the scope encapsulates any optimisation problem involving a function of these variables. Thus, any frequentist method that can be framed as a single optimisation problem of this kind has a MAP analogy, and any frequentist method that cannot be framed as a single optimisation of this kind does not have a MAP analogy. In the above form of model, involving a penalty function with a tuning parameter, $K$ -fold cross-validation is commonly used to estimate the tuning parameter $\lambda$ . For this method you partition the data vector $\mathbb{x}$ into $K$ sub-vectors $\mathbf{x}_1,...,\mathbf{x}_K$ . For each of sub-vector $k=1,...,K$ you fit the model with the "training" data $\mathbf{x}_{-k}$ and then measure the fit of the model with the "testing" data $\mathbf{x}_k$ . In each fit you get an estimator for the model parameters, which then gives you predictions of the testing data, which can then be compared to the actual testing data to give a measure of "loss": $$\begin{matrix} \text{Estimator} & & \hat{\theta}(\mathbf{x}_{-k}, \lambda), \\[6pt] \text{Predictions} & & \hat{\mathbf{x}}_k(\mathbf{x}_{-k}, \lambda), \\[6pt] \text{Testing loss} & & \mathscr{L}_k(\hat{\mathbf{x}}_k, \mathbf{x}_k| \mathbf{x}_{-k}, \lambda). \\[6pt] \end{matrix}$$ The loss measures for each of the $K$ "folds" can then be aggregated to get an overall loss measure for the cross-validation: $$\mathscr{L}(\mathbf{x}, \lambda) = \sum_k \mathscr{L}_k(\hat{\mathbf{x}}_k, \mathbf{x}_k| \mathbf{x}_{-k}, \lambda)$$ One then estimates the tuning parameter by minimising the overall loss measure: $$\hat{\lambda} \equiv \hat{\lambda}(\mathbf{x}) \equiv \underset{\lambda}{\text{arg min }} \mathscr{L}(\mathbf{x}, \lambda).$$ We can see that this is an optimisation problem, and so we now have two seperate optimisation problems (i.e., the one described in the sections above for $\theta$ , and the one described here for $\lambda$ ). Since the latter optimisation does not involve $\theta$ , we can combine these optimisations into a single problem, with some technicalities that I discuss below. To do this, consider the optimisation problem with objective function: $$\begin{equation} \begin{aligned} \mathcal{H}_\mathbf{x}(\theta, \lambda) &= \ell_\mathbf{x}(\theta) - w(\theta|\lambda) - \delta \mathscr{L}(\mathbf{x}, \lambda), \\[6pt] \end{aligned} \end{equation}$$ where $\delta > 0$ is a weighting value on the tuning-loss. 
As $\delta \rightarrow \infty$ the weight on optimisation of the tuning-loss becomes infinite and so the optimisation problem yields the estimated tuning parameter from $K$ -fold cross-validation (in the limit). The remaining part of the objective function is the standard objective function conditional on this estimated value of the tuning parameter. Now, unfortunately, taking $\delta = \infty$ screws up the optimisation problem, but if we take $\delta$ to be a very large (but still finite) value, we can approximate the combination of the two optimisation problems up to arbitrary accuracy. From the above analysis we can see that it is possible to form a MAP analogy to the model-fitting and $K$ -fold cross-validation process. This is not an exact analogy, but it is a close analogy, up to arbitrary accuracy. It is also important to note that the MAP analogy no longer shares the same likelihood function as the original problem, since the loss function depends on the data and is thus absorbed as part of the likelihood rather than the prior. In fact, the full analogy is as follows: $$\begin{equation} \begin{aligned} \mathcal{H}_\mathbf{x}(\theta, \lambda) &= \ell_\mathbf{x}(\theta) - w(\theta|\lambda) - \delta \mathscr{L}(\mathbf{x}, \lambda) \\[6pt] &= \ln \Bigg( \frac{L_\mathbf{x}^*(\theta, \lambda) \pi (\theta, \lambda)}{\int L_\mathbf{x}^*(\theta, \lambda) \pi (\theta, \lambda) d\theta} \Bigg) + \text{const}, \\[6pt] \end{aligned} \end{equation}$$ where $L_\mathbf{x}^*(\theta, \lambda) \propto \exp( \ell_\mathbf{x}(\theta) - \delta \mathscr{L}(\mathbf{x}, \lambda))$ and $\pi (\theta, \lambda) \propto \exp( -w(\theta|\lambda))$ , with a fixed (and very large) hyper-parameter $\delta$ . ( Note: For a related question looking at logistic ridge regression framed in Bayesian terms see here .) $^\dagger$ This gives an improper prior in cases where the penalty does not correspond to the logarithm of a sigma-finite density.
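As a small numerical companion to the fixed-$\lambda$ equivalence described above, here is a sketch in R. It assumes, purely for illustration, a Gaussian likelihood with $\sigma = 1$ and simulated data (the variable names and values are made up), and checks that minimising the ridge-penalised objective and maximising the log-posterior under the $\text{Normal}(0, 1/(2\lambda))$ prior give the same coefficients:

set.seed(1)
n <- 100; p <- 3; lambda <- 2
X <- matrix(rnorm(n * p), n, p)
y <- X %*% c(1, -0.5, 0.25) + rnorm(n)

# Ridge objective in the convention above: -loglik + w(theta|lambda), with sigma fixed at 1
ridge_obj <- function(b) 0.5 * sum((y - X %*% b)^2) + lambda * sum(b^2)

# Negative log-posterior with prior beta_k ~ Normal(0, var = 1/(2*lambda))
neg_log_post <- function(b) {
  -sum(dnorm(y, mean = X %*% b, sd = 1, log = TRUE)) -
    sum(dnorm(b, mean = 0, sd = sqrt(1 / (2 * lambda)), log = TRUE))
}

b_ridge <- optim(rep(0, p), ridge_obj, method = "BFGS")$par
b_map   <- optim(rep(0, p), neg_log_post, method = "BFGS")$par
b_exact <- solve(crossprod(X) + 2 * lambda * diag(p), crossprod(X, y))  # closed-form ridge solution
cbind(b_ridge, b_map, b_exact)  # all three columns agree up to optimiser tolerance

The agreement holds because the two objectives differ only by an additive constant, exactly as in the derivation above.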
{ "source": [ "https://stats.stackexchange.com/questions/368014", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/54664/" ] }
370,880
If the expected value of $\mathsf{Gamma}(\alpha, \beta)$ is $\frac{\alpha}{\beta}$ , what is the expected value of $\log(\mathsf{Gamma}(\alpha, \beta))$ ? Can it be calculated analytically? The parametrisation I am using is shape-rate.
This one (maybe surprisingly) can be done with easy elementary operations (employing Richard Feynman's favorite trick of differentiating under the integral sign with respect to a parameter). We are supposing $X$ has a $\Gamma(\alpha,\beta)$ distribution and we wish to find the expectation of $Y=\log(X).$ First, because $\beta$ is a scale parameter, its effect will be to shift the logarithm by $\log\beta.$ (If you use $\beta$ as a rate parameter, as in the question, it will shift the logarithm by $-\log\beta.$ ) This permits us to work with the case $\beta=1.$ After this simplification, the probability element of $X$ is $$f_X(x) = \frac{1}{\Gamma(\alpha)} x^\alpha e^{-x} \frac{\mathrm{d}x}{x}$$ where $\Gamma(\alpha)$ is the normalizing constant $$\Gamma(\alpha) = \int_0^\infty x^\alpha e^{-x} \frac{\mathrm{d}x}{x}.$$ Substituting $x=e^y,$ which entails $\mathrm{d}x/x = \mathrm{d}y,$ gives the probability element of $Y$ , $$f_Y(y) = \frac{1}{\Gamma(\alpha)} e^{\alpha y - e^y} \mathrm{d}y.$$ The possible values of $Y$ now range over all the real numbers $\mathbb{R}.$ Because $f_Y$ must integrate to unity, we obtain (trivially) $$\Gamma(\alpha) = \int_\mathbb{R} e^{\alpha y - e^y} \mathrm{d}y.\tag{1}$$ Notice $f_Y(y)$ is a differentiable function of $\alpha.$ An easy calculation gives $$\frac{\mathrm{d}}{\mathrm{d}\alpha}e^{\alpha y - e^y} \mathrm{d}y = y\, e^{\alpha y - e^y} \mathrm{d}y = \Gamma(\alpha) y\,f_Y(y).$$ The next step exploits the relation obtained by dividing both sides of this identity by $\Gamma(\alpha),$ thereby exposing the very object we need to integrate to find the expectation; namely, $y f_Y(y):$ $$\eqalign{ \mathbb{E}(Y) &= \int_\mathbb{R} y\, f_Y(y) = \frac{1}{\Gamma(\alpha)} \int_\mathbb{R} \frac{\mathrm{d}}{\mathrm{d}\alpha}e^{\alpha y - e^y} \mathrm{d}y \\ &= \frac{1}{\Gamma(\alpha)} \frac{\mathrm{d}}{\mathrm{d}\alpha}\int_\mathbb{R} e^{\alpha y - e^y} \mathrm{d}y\\ &= \frac{1}{\Gamma(\alpha)} \frac{\mathrm{d}}{\mathrm{d}\alpha}\Gamma(\alpha)\\ &= \frac{\mathrm{d}}{\mathrm{d}\alpha}\log\Gamma(\alpha)\\ &=\psi(\alpha), }$$ the logarithmic derivative of the gamma function (aka " polygamma "). The integral was computed using identity $(1).$ Re-introducing the factor $\beta$ shows the general result is $$\mathbb{E}(\log(X)) = \log\beta + \psi(\alpha)$$ for a scale parameterization (where the density function depends on $x/\beta$ ) or $$\mathbb{E}(\log(X)) = -\log\beta + \psi(\alpha)$$ for a rate parameterization (where the density function depends on $x\beta$ ).
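A quick Monte Carlo check of the rate-parameterisation result in R (the particular values of alpha and beta are arbitrary):

set.seed(1)
alpha <- 3; beta <- 2                      # shape and rate, as in the question
x <- rgamma(1e6, shape = alpha, rate = beta)
mean(log(x))                               # simulated E[log X]
digamma(alpha) - log(beta)                 # psi(alpha) - log(beta); the two agree closely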
{ "source": [ "https://stats.stackexchange.com/questions/370880", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/46836/" ] }
372,048
I know this is from a comic famous for taking advantage of certain analytical tendencies , but it actually looks kind of reasonable after a few minutes of staring. Can anyone outline for me what this " modified Bayes theorem " is doing?
Well by distributing the $P(H)$ term, we obtain $$ P(H|X) = \frac{P(X|H)P(H)}{P(X)} P(C) + P(H) [1 - P(C)], $$ which we can interpret as the Law of Total Probability applied to the event $C =$ "you are using Bayesian statistics correctly." So if you are using Bayesian statistics correctly, then you recover Bayes' law (the left fraction above) and if you aren't, then you ignore the data and just use your prior on $H$ . I suppose this is a rejoinder against the criticism that in principle Bayesians can adjust the prior to support whatever conclusion they want, whereas Bayesians would argue that this is not how Bayesian statistics actually works. (And yes, you did successfully nerd-snipe me. I'm neither a mathematician nor a physicist though, so I'm not sure how many points I'm worth.)
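For what it's worth, a tiny R sketch of the distributed formula (the numbers plugged in are arbitrary), confirming the two limiting cases described above:

modified_bayes <- function(p_x_given_h, p_x, p_h, p_c) {
  (p_x_given_h * p_h / p_x) * p_c + p_h * (1 - p_c)
}
modified_bayes(0.9, 0.5, 0.3, p_c = 1)  # P(C) = 1: plain Bayes' rule, 0.9 * 0.3 / 0.5 = 0.54
modified_bayes(0.9, 0.5, 0.3, p_c = 0)  # P(C) = 0: the data are ignored and you keep the prior, 0.3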
{ "source": [ "https://stats.stackexchange.com/questions/372048", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/86176/" ] }
372,236
For example, if my table is:

                       True Value (gold standard)
                        Positive     Negative
 Test Result   Pos         A            B
               Neg         C            D
In the general case: you can't The ROC curve shows how sensitivity and specificity varies at every possible threshold . A contingency table has been calculated at a single threshold and information about other thresholds has been lost. Therefore you can't calculate the ROC curve from this summarized data. But my classifier is binary, so I have one single threshold Binary classifiers aren't really binary. Even though they may expose only a final binary decision, all the classifiers I know rely on some quantitative estimate under the hood. A binary decision tree? Try to build a regression tree. A classifier SVM? Do a support vector regression. Logistic regression? Get access to the raw probabilities. Neural network? Use the numeric output of the last layer instead. This will give you more freedom to choose the optimal threshold to get to the best possible classification for your needs. But I really want to You really shouldn't. ROC curves with few thresholds significantly underestimate the true area under the curve (1). A ROC curve with a single point is a worst-case scenario, and any comparison with a continuous classifier will be inaccurate and misleading. Just give me the answer! Ok, ok, you win. With a single point we can consider the AUC as the sum of two triangles T and U: We can get their areas based on the contingency table (A, B, C and D as you defined): $$ \begin{align*} T = \frac{1 \times SE}{2} &= \frac{SE}{2} = \frac{A}{2(A + C)} \\ U = \frac{SP \times 1}{2} &= \frac{SP}{2} = \frac{D}{2(B + D)} \end{align*} $$ Getting the AUC: $$ \begin{align*} AUC &= T + U \\ &= \frac{A}{2(A + C)} + \frac{D}{2(B + D)} \\ &= \frac{SE + SP}{2} \end{align*} $$ To conclude You can technically calculate a ROC AUC for a binary classifier from the confusion matrix. But just in case I wasn't clear, let me repeat one last time: DON'T DO IT! References (1) DeLong ER, DeLong DM, Clarke-Pearson DL: Comparing the Areas under Two or More Correlated Receiver Operating Characteristic Curves: A Nonparametric Approach. Biometrics 1988,44:837-845. https://www.jstor.org/stable/2531595
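If, despite the warning above, you do need the single-threshold number, here is the (SE + SP)/2 computation as a small R function (the counts in the example call are made up):

auc_from_table <- function(A, B, C, D) {
  se <- A / (A + C)   # sensitivity
  sp <- D / (B + D)   # specificity
  (se + sp) / 2
}
auc_from_table(A = 40, B = 10, C = 5, D = 45)  # about 0.854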
{ "source": [ "https://stats.stackexchange.com/questions/372236", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/17072/" ] }
372,280
I'm having an argument with a colleague about the probability of a 4-digit number. The assumption is that the digits are generated randomly and independently. Consider the probability of generating the number 4109. Now, my colleague reckons it's simply P(4109) = 1/10 * 1/10 * 1/10 * 1/10 (a bit like tossing a coin). But I'm not interested in the probability of generating the numbers 0149, 9014, 4910, etc. Obviously the probability of that specific sequence, 4109, is found by calculating the permutation, not a simple combination... Please confirm my belief or point me to some proper material to support this?
Your colleague is right. If your digits are randomly and independently selected (assuming equal likelihood for each digit), the chance that your number is 4109 is nothing more complicated than: the chance that the first digit is 4, times the chance that the second digit is 1, times the chance that the third digit is 0, times the chance that the fourth digit is 9. That's $(1/10)^4$ , as your colleague says. Where would permutations come into it? You said in the question that you're "not interested in the probability of generating the numbers 0149, 9014, 4910 etc etc", i.e. the set of all 4-digit numbers where the digits are 0, 1, 4, 9 in any order, but that isn't what $(1/10)^4$ tells you. To calculate that you'd need: the probability that the first digit is 0, 1, 4, or 9 (= 4/10), times the probability that the second digit is one of the three not yet selected (= 3/10), times the probability that the third digit is one of the two not yet selected (= 2/10), times the probability that the final digit is the remaining one from the set of 0, 1, 4, 9 (= 1/10); i.e. $\prod_{i=1}^{4} \frac{i}{10}$ . You could then worry about permutations to pick 4109 specifically, but this is needless complication compared with the straightforward calculation your colleague described.
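A short R sketch of both calculations, plus a simulation sanity check (the simulation size is arbitrary):

(1 / 10)^4        # the specific sequence 4109: 0.0001
prod((4:1) / 10)  # the digits {0, 1, 4, 9} in any order: 0.0024

set.seed(1)
digits <- matrix(sample(0:9, 4 * 1e6, replace = TRUE), ncol = 4)
mean(digits[, 1] == 4 & digits[, 2] == 1 & digits[, 3] == 0 & digits[, 4] == 9)  # ~ 0.0001
mean(rowSums(digits == 0) == 1 & rowSums(digits == 1) == 1 &
     rowSums(digits == 4) == 1 & rowSums(digits == 9) == 1)                      # ~ 0.0024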
{ "source": [ "https://stats.stackexchange.com/questions/372280", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/116913/" ] }
375,208
Is Cauchy distribution somehow an "unpredictable" distribution? I tried doing cs <- function(n) { return(rcauchy(n,0,1)) } in R for a multitude of n values and noticed that they generate quite unpredictable values occasionally. Compare that to e.g. as <- function(n) { return(rnorm(n,0,1)) } which always seems to give a "compact" cloud of points. By this pic it should look like the normal distribution? Yet it maybe only does for a subset of values. Or maybe the trick is that the Cauchy standard deviations (in the pic below) converge much more slowly (to left and right) and thus is allows for more severe outliers, although at low probabilities? Here as are normal rvs and cs are Cauchy rvs. But by the extremity of the outliers, is it possible that the tails of the Cauchy p.d.f. never converge?
While a number of posts on site address various properties of the Cauchy, I didn't manage to locate one that really laid them out together. Hopefully this might be a good place to collect some. I may expand this. Heavy tails While the Cauchy is symmetric and roughly bell shaped, somewhat like the normal distribution, it has much heavier tails (and less of a "shoulder"). For example, there's a small but distinct probability that a Cauchy random variable will lay more than 1000 interquartile ranges from the median -- roughly of the same order as a normal random variable being at least 2.67 interquartile ranges from its median. Variance The variance of the Cauchy is infinite. Edit: J.G. says in comments that it's undefined. If we take the variance as the average of half the squared distance between pairs of values - which is identical to the variance when both exist, then it would be infinite. However, by the usual definition J.G. is correct. [Nevertheless by contrast with sample means, which don't really converge to anything as n becomes large, the distribution of sample variances keep growing in size as the sample size increases; the scale increases proportionally to n, or equivalently the distribution of log variance grows linearly with sample size. It seems productive to actually consider that version of variance that yields infinity is telling us something.] Sample standard deviations exist, of course, but the larger the sample the larger they tend to be (e.g. the median sample standard deviation at n=10 is in the vicinity of 3.67 times the scale parameter (half the IQR), but at n=100 it's about 11.9). Mean The Cauchy distribution doesn't even have a finite mean; the integral for the mean doesn't converge. As a result, even the laws of large numbers don't apply -- as n grows, sample means don't converge to some fixed quantity (indeed there's nothing for them to converge to). In fact, the distribution of the sample mean from a Cauchy distribution is the same as the distribution of a single observation(!). The tail is so heavy that adding more values into the sum makes a really extreme value likely enough as to just compensate for dividing by a larger denominator when taking the mean. Predictability You can certainly produce perfectly sensible prediction intervals for observations from a Cauchy distribution; there are simple, fairly efficient estimators that perform well for estimating location and scale and approximate prediction intervals can be constructed -- so in that sense, at least, Cauchy variates are 'predictable'. However, the tail extends very far, so that if you want a high-probability interval, it may be quite wide. If you're trying to predict the center of the distribution (e.g. in a regression type model), that may in some sense be relatively easy to predict; the Cauchy is quite peaked (there's a lot of the distribution "close" to the center for a typical measure of scale), so the center can be relatively well-estimated if you have an appropriate estimator. Here's an example: I generated data from a linear relationship with standard Cauchy errors (100 observations, intercept=3, slope=1.5), and estimated regression lines by three methods that are reasonably robust to y-outliers: Tukey 3 group line (red), Theil regression (dark green) and L1-regression (blue). None are especially efficient at the Cauchy - though they would all make excellent starting points for a more efficient approach. 
Nevertheless the three are almost coincident compared to the noisiness of the data and lie very close to the centre of where the data runs; in that sense the Cauchy is clearly "predictable". The median of the absolute residuals is only a little larger than 1 for any of the lines (most of the data lie quite close to the estimated line); in that sense as well, the Cauchy is "predictable". For the plot on the left there's a large outlier. In order to see the data better I narrowed the scale on the y-axis down on the right.
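A small R simulation in the spirit of the question's own rcauchy code illustrates the contrast between the non-converging sample mean and the well-behaved sample median (a sketch; the seed and sample size are arbitrary):

set.seed(1)
x <- rcauchy(1e5)

running_mean <- cumsum(x) / seq_along(x)
plot(running_mean, type = "l")            # keeps being knocked around by new extreme values

ks <- seq(1000, 1e5, by = 1000)
running_median <- sapply(ks, function(k) median(x[1:k]))
plot(ks, running_median, type = "l")      # settles down near the true centre, 0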
{ "source": [ "https://stats.stackexchange.com/questions/375208", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/78575/" ] }
375,515
I notice in statistics/machine learning methods, a distribution is often approximated by a Gaussian, and then that Gaussian is used for sampling. They start by computing the first two moments of the distribution, and use those to estimate $\mu$ and $\sigma^2$ . Then they can sample from that Gaussian. It seems to me the more moments I calculate, the better I ought to be able to approximate the distribution I wish to sample. What if I calculate 3 moments...how can I use those to sample from the distribution? And can this be extended to N moments?
Three moments don't determine a distributional form; if you choose a distribution-famiy with three parameters which relate to the first three population moments, you can do moment matching ("method of moments") to estimate the three parameters and then generate values from such a distribution. There are many such distributions. Sometimes even having all the moments isn't sufficient to determine a distribution. If the moment generating function exists (in a neighborhood of 0) then it uniquely identifies a distribution (you could in principle do an inverse Laplace transform to obtain it). [If some moments are not finite this would mean the mgf doesn't exist, but there are also cases where all moments are finite but the mgf still doesn't exist in a neighborhood of 0.] Given there's a choice of distributions, one might be tempted to consider a maximum entropy solution with the constraint on the first three moments, but there's no distribution on the real line that attains it (since the resulting cubic in the exponent will be unbounded). How the process would work for a specific choice of distribution We can simplify the process of obtaining a distribution matching three moments by ignoring the mean and variance and working with a scaled third moment -- the moment-skewness ( $\gamma_1=\mu_3/\mu_2^{3/2}$ ). We can do this because having selected a distribution with the relevant skewness, we can then back out the desired mean and variance by scaling and shifting. Let's consider an example. Yesterday I created a large data set (which still happens to be in my R session) whose distribution I haven't tried to calculate the functional form of (it's a large set of values of the log of the sample variance of a Cauchy at n=10). We have the first three raw moments as 1.519, 3.597 and 11.479 respectively, or correspondingly a mean of 1.518, a standard deviation* of 1.136 and a skewness of 1.429 (so these are sample values from a large sample). Formally, method of moments would attempt to match the raw moments, but the calculation is simpler if we start with the skewness (turning solving three equations in three unknowns into solving for one parameter at a time, a much simpler task). * I am going to handwave away the distinction between using an n-denominator on the variance - as would correspond to formal method of moments - and an n-1 denominator and simply use sample calculations. This skewness (~1.43) indicates we seek a distribution which is right-skew. I could choose, for example, a shifted lognormal distribution (three parameter lognormal, shape $\sigma$ , scale $\mu$ and location-shift $\gamma$ ) with the same moments. Let's begin by matching the skewness. The population skewness of a two parameter lognormal is: $\gamma_1=(e^{\sigma ^{2}}\!\!+2){\sqrt {e^{\sigma ^{2}}\!\!-1}}$ So let's start by equating that to the desired sample value to obtain an estimate of $\sigma^2$ , $\tilde{\sigma}^2$ , say. Note that $\gamma_1^2$ is $(\tau+2)^2(\tau-1)$ where $\tau=e^{\sigma^2}$ . This then yields a simple cubic equation $\tau^3+3\tau^2-4=\gamma_1^2$ . Using the sample skewness in that equation yields $\tilde{\tau}\approx 1.1995$ or $\tilde{\sigma}^2\approx 0.1819$ . 
(The cubic has only one real root so there's no issue with choosing between roots; nor is there any risk of choosing the wrong sign on $\gamma_1$ -- we can flip the distribution left-for-right if we need negative skewness) We can then in turn solve for $\mu$ by matching the variance (or standard deviation) and then for the location parameter by matching the mean. But we could as easily have chosen a shifted-gamma or a shifted-Weibull distribution (or a shifted-F or any number of other choices) and run through essentially the same process. Each of them would be different. [For the sample I was dealing with, a shifted gamma would probably have been a considerably better choice than a shifted lognormal, since the distribution of the logs of the values was left skew and the distribution of their cube root was very close to symmetric; these are consistent with what you will see with (unshifted) gamma densities, but a left-skewed density of the logs cannot be achieved with any shifted lognormal.] One could even take the skewness-kurtosis diagram in a Pearson plot and draw a line at the desired skewness and thereby obtain a two-point distribution, sequence of beta distributions, a gamma distribution, a sequence of beta-prime distributions, an inverse-gamma disribution and a sequence of Pearson type IV distributions all with the same skewness. We can see this illustrated in a skewness-kurtosis plot (Pearson plot) below (note that $\beta_1=\gamma_1^2$ and $\beta_2$ is the kurtosis), with the regions for the various Pearson-distributions marked in. The green horizontal line represents $\gamma_1^2 = 2.042$ , and we see it pass through each of the mentioned distribution-families, each point corresponding to a different population kurtosis. (The dashed curve represents the lognormal, which is not a Pearson-family distribution; its intersection with the green line marks the particular lognormal-shape we identified. Note that the dashed curve is purely a function of $\sigma$ .) More moments Moments don't pin distributions down very well, so even if you specify many moments, there will still be a lot of different distributions (particularly in relation to their extreme-tail behavior) that will match them. You can of course choose some distributional family with at least four parameters and attempt to match more than three moments; for example the Pearson distributions above allow us to match the first four moments, and there are other choices of distributions that would allow similar degree of flexibility. One can adopt other strategies to choose distributions that can match distributional features - mixture distributions, modelling the log-density using splines, and so forth. Frequently, however, if one goes back to the initial purpose for which one was trying to find a distribution, it often turns out there's something better that can be done than the sort of strategy outlined here.
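Here is a short R sketch of the shifted-lognormal matching described above, written as a function of the three sample moments. It assumes positive skewness, and the function name is just for illustration:

match_shifted_lognormal <- function(m, s, g1) {
  # solve tau^3 + 3*tau^2 - 4 = g1^2 for its single real root, tau = exp(sigma^2)
  roots  <- polyroot(c(-(4 + g1^2), 0, 3, 1))
  tau    <- Re(roots[abs(Im(roots)) < 1e-6])[1]
  sigma2 <- log(tau)
  mu     <- 0.5 * log(s^2 / ((tau - 1) * tau))   # match the variance
  shift  <- m - exp(mu + sigma2 / 2)             # match the mean
  c(sigma2 = sigma2, mu = mu, shift = shift)
}

# Using the sample moments quoted above (mean 1.518, sd 1.136, skewness 1.429):
match_shifted_lognormal(1.518, 1.136, 1.429)     # sigma2 comes out near 0.182, as derived in the text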
{ "source": [ "https://stats.stackexchange.com/questions/375515", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/225762/" ] }
376,772
A single statistical test can give evidence that the null hypothesis (H0) is false and therefore the alternative hypothesis (H1) is true. But it cannot be used to show that H0 is true, because failure to reject H0 does not mean that H0 is true. But let's assume you can run the statistical test many times because you have many datasets, all independent of each other. All datasets are the result of the same process and you want to make some statement (H0/H1) about the process itself, not about the results of each single test. You then collect all the resulting p-values and see from a histogram that the p-values are clearly uniformly distributed. My reasoning now is that this can only happen if H0 is true — else the p-values would be distributed differently. Is this therefore enough evidence to conclude that H0 is true? Or am I missing something essential here, because it took me a lot of willpower to write "conclude that H0 is true", which just sounds horribly wrong in my head.
I like your question, but unfortunately my answer is NO, it doesn't prove $H_0$ . The reason is very simple. How would you know that the distribution of p-values is uniform? You would probably have to run a test for uniformity, which will return its own p-value, and you end up with the same kind of inference question that you were trying to avoid, only one step farther. Instead of looking at the p-value of the original $H_0$ , now you look at a p-value of another $H'_0$ about the uniformity of the distribution of the original p-values. UPDATE Here's the demonstration. I generate 100 samples of 100 observations from a Gaussian and a Poisson distribution, then obtain 100 p-values for a normality test of each sample. So, the premise of the question is that if the p-values are from a uniform distribution, then it proves that the null hypothesis is correct, which is a stronger statement than the usual "fails to reject" in statistical inference. The trouble is that "the p-values are from a uniform" is a hypothesis itself, which you have to somehow test. In the picture (first row) below I'm showing the histograms of p-values from a normality test for the Gaussian and Poisson samples, and you can see that it's hard to say whether one is more uniform than the other. That was my main point. The second row shows one of the samples from each distribution. The samples are relatively small, so you can't have too many bins indeed. Actually, this particular Gaussian sample doesn't look that much Gaussian at all on the histogram. In the third row, I'm showing the combined samples of 10,000 observations for each distribution on a histogram. Here, you can have more bins, and the shapes are more obvious. Finally, I run the same normality test and get p-values for the combined samples, and it rejects normality for the Poisson sample while failing to reject it for the Gaussian. The p-values are: [0.45348631] [0.] This is not a proof, of course, but a demonstration of the idea that you are better off running the same test on the combined sample, instead of trying to analyze the distribution of p-values from subsamples. Here's the Python code:

import numpy as np
from scipy import stats
from matplotlib import pyplot as plt

def pvs(x):
    pn = x.shape[1]
    pvals = np.zeros(pn)
    for i in range(pn):
        pvals[i] = stats.jarque_bera(x[:,i])[1]
    return pvals

n = 100
pn = 100
mu, sigma = 1, 2

np.random.seed(0)
x = np.random.normal(mu, sigma, size=(n,pn))
x2 = np.random.poisson(15, size=(n,pn))
print(x[1,1])

pvals = pvs(x)
pvals2 = pvs(x2)

x_f = x.reshape((n*pn,1))
pvals_f = pvs(x_f)
x2_f = x2.reshape((n*pn,1))
pvals2_f = pvs(x2_f)
print(pvals_f, pvals2_f)
print(x_f.shape, x_f[:,0])
#print(pvals)

plt.figure(figsize=(9,9))

plt.subplot(3,2,1)
plt.hist(pvals)
plt.gca().set_title('True Normal')
plt.gca().set_ylabel('p-value')

plt.subplot(3,2,2)
plt.hist(pvals2)
plt.gca().set_title('Poisson')
plt.gca().set_ylabel('p-value')

plt.subplot(3,2,3)
plt.hist(x[:,0])
plt.gca().set_title('a small sample')
plt.gca().set_ylabel('x')

plt.subplot(3,2,4)
plt.hist(x2[:,0])
plt.gca().set_title('a small sample')
plt.gca().set_ylabel('x')

plt.subplot(3,2,5)
plt.hist(x_f[:,0],100)
plt.gca().set_title('Full Sample')
plt.gca().set_ylabel('x')

plt.subplot(3,2,6)
plt.hist(x2_f[:,0],100)
plt.gca().set_title('Full Sample')
plt.gca().set_ylabel('x')

plt.show()
{ "source": [ "https://stats.stackexchange.com/questions/376772", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/162797/" ] }
376,920
I am reading The Book of Why by Judea Pearl, and it is getting under my skin 1 . Specifically, it appears to me that he is unconditionally bashing "classical" statistics by putting up a straw man argument that statistics is never, ever able to investigate causal relations, that it never is interested in causal relations, and that statistics "became a model-blinded data-reduction enterprise". Statistics becomes an ugly s-word in his book. For example: Statisticians have been immensely confused about what variables should and should not be controlled for, so the default practice has been to control for everything one can measure. [...] It is a convenient, simple procedure to follow, but it is both wasteful and ridden with errors. A key achievement of the Causal Revolution has been to bring an end to this confusion. At the same time, statisticians greatly underrate controlling in the sense that they are loath to talk about causality at all [...] However, causal models have been in statistics like, forever. I mean, a regression model can be used essentially a causal model, since we are essentially assuming that one variable is the cause and another is the effect (hence correlation is different approach from regression modelling) and testing whether this causal relationship explains the observed patterns. Another quote: No wonder statisticians in particular found this puzzle [The Monty Hall problem] hard to comprehend. They are accustomed to, as R.A. Fisher (1922) put it, "the reduction of data" and ignoring the data-generating process. This reminds me of the reply Andrew Gelman wrote to the famous xkcd cartoon on Bayesians and frequentists: "Still, I think the cartoon as a whole is unfair in that it compares a sensible Bayesian to a frequentist statistician who blindly follows the advice of shallow textbooks." The amount of misrepresentation of s-word which, as I perceive it, exists in Judea Pearls book made me wonder whether causal inference (which hitherto I perceived as a useful and interesting way of organizing and testing a scientific hypothesis 2 ) is questionable. Questions: do you think that Judea Pearl is misrepresenting statistics, and if yes, why? Just to make causal inference sound bigger than it is? Do you think that causal inference is a Revolution with a big R which really changes all our thinking? Edit: The questions above are my main issue, but since they are, admittedly, opinionated, please answer these concrete questions (1) what is the meaning of the "Causation Revolution"? (2) how is it different from "orthodox" statistics? 1. Also because he is such a modest guy. 2. I mean in the scientific, not statistical sense. EDIT : Andrew Gelman wrote this blog post on Judea Pearls book and I think he did a much better job explaining my problems with this book than I did. Here are two quotes: On page 66 of the book, Pearl and Mackenzie write that statistics “became a model-blind data reduction enterprise.” Hey! What the hell are you talking about?? I’m a statistician, I’ve been doing statistics for 30 years, working in areas ranging from politics to toxicology. “Model-blind data reduction”? That’s just bullshit. We use models all the time. And another one: Look. I know about the pluralist’s dilemma. On one hand, Pearl believes that his methods are better than everything that came before. Fine. For him, and for many others, they are the best tools out there for studying causal inference. 
At the same time, as a pluralist, or a student of scientific history, we realize that there are many ways to bake a cake. It’s challenging to show respect to approaches that you don’t really work for you, and at some point the only way to do it is to step back and realize that real people use these methods to solve real problems. For example, I think making decisions using p-values is a terrible and logically incoherent idea that’s led to lots of scientific disasters; at the same time, many scientists do manage to use p-values as tools for learning. I recognize that. Similarly, I’d recommend that Pearl recognize that the apparatus of statistics, hierarchical regression modeling, interactions, poststratification, machine learning, etc etc., solves real problems in causal inference. Our methods, like Pearl’s, can also mess up—GIGO!—and maybe Pearl’s right that we’d all be better off to switch to his approach. But I don’t think it’s helping when he gives out inaccurate statements about what we do.
I fully agree that Pearl's tone is arrogant, and his characterisation of "statisticians" is simplistic and monolithic. Also, I don't find his writing particularly clear. However, I think he has a point. Causal reasoning was not part of my formal training (MSc): the closest I got to the topic was an elective course in experimental design, i.e. any causality claims required me to physically control the environment. Pearl's book Causality was my first exposure to a refutation of this idea. Obviously I can't speak for all statisticians and curricula, but from my own perspective I subscribe to Pearl's observation that causal reasoning is not a priority in statistics. It is true that statisticians sometimes control for more variables than is strictly necessary, but this rarely leads to error (at least in my experience). This is also a belief that I held after graduating with an MSc in statistics in 2010. However, it is deeply incorrect. When you control for a common effect (called "collider" in the book), you can introduce selection bias. This realization was quite astonishing to me, and really convinced me of the usefulness of representing my causal hypotheses as graphs. EDIT: I was asked to elaborate on selection bias. This topic is quite subtle, I highly recommend perusing the edX MOOC on Causal Diagrams , a very nice introduction to graphs which has a chapter dedicated to selection bias. For a toy example, to paraphrase this paper cited in the book: Consider the variables A=attractiveness, B=beauty, C=competence. Suppose that B and C are causally unrelated in the general population (i.e., beauty does not cause competence, competence does not cause beauty, and beauty and competence do not share a common cause). Suppose also that any one of B or C is sufficient for being attractive, i.e. A is a collider. Conditioning on A creates a spurious association between B and C. A more serious example is the "birth weight paradox", according to which a mother's smoking (S) during pregnancy seems to decrease the mortality (M) of the baby, if the baby is underweight (U). The proposed explanation is that birth defects (D) also cause low birth weight, and also contribute to mortality. The corresponding causal diagram is { S -> U, D -> U, U -> M, S -> M, D -> M } in which U is a collider; conditioning on it introduces the spurious association. The intuition behind this is that if the mother is a smoker, the low birth weight is less likely to be due to a defect.
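The beauty/competence example lends itself to a short simulation; here is an R sketch (the threshold defining "attractive" is my own arbitrary choice) showing how conditioning on the collider manufactures an association between two independent variables:

set.seed(1)
n <- 1e5
beauty     <- rnorm(n)
competence <- rnorm(n)
attractive <- (beauty > 1) | (competence > 1)      # either one suffices: A is a collider

cor(beauty, competence)                            # ~ 0 in the full population
cor(beauty[attractive], competence[attractive])    # clearly negative once we condition on A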
{ "source": [ "https://stats.stackexchange.com/questions/376920", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/14803/" ] }
378,939
Let's say we have a model

mod <- Y ~ X*Condition + (X*Condition|subject)

# Y = logit variable
# X = continuous variable
# Condition = values A and B, dummy coded; the design is repeated
#             so all participants go through both Conditions
# subject = random effects for different subjects

summary(model)

Random effects:
 Groups  Name         Variance Std.Dev. Corr
 subject (Intercept)  0.85052  0.9222
         X            0.08427  0.2903   -1.00
         ConditionB   0.54367  0.7373   -0.37  0.37
         X:ConditionB 0.14812  0.3849    0.26 -0.26 -0.56
Number of obs: 39401, groups: subject, 219

Fixed effects:
             Estimate Std. Error z value Pr(>|z|)
(Intercept)   2.49686    0.06909   36.14  < 2e-16 ***
X            -1.03854    0.03812  -27.24  < 2e-16 ***
ConditionB   -0.19707    0.06382   -3.09  0.00202 **
X:ConditionB  0.22809    0.05356    4.26 2.06e-05 ***

Here we observe a singular fit, because the correlation between the intercept and X random effects is -1. Now, according to this helpful link, one way to deal with this model is to remove higher-order random effects (e.g., X:ConditionB) and see whether that makes a difference when testing for singularity. The other is to use the Bayesian approach, e.g., the blme package, to avoid singularity. What is the preferred method and why? I am asking this because using the first or the second one leads to different results - in the first case, I will remove the X:ConditionB random effect and won't be able to estimate the correlation between X and X:ConditionB random effects. On the other hand, using blme allows me to keep X:ConditionB and to estimate the given correlation. I see no reason why I should even use the non-Bayesian estimation and remove random effects when singular fits occur, when I can estimate everything with the Bayesian approach. Can someone explain to me the benefits and problems of using either method to deal with singular fits?
When you obtain a singular fit, this often indicates that the model is overfitted – that is, the random effects structure is too complex to be supported by the data, which naturally leads to the advice to remove the most complex part of the random effects structure (usually random slopes). The benefit of this approach is that it leads to a more parsimonious model that is not over-fitted. However, before doing anything, do you have a good reason for wanting X, Condition and their interaction all to vary by subject in the first place? Does the theory of how the data are generated suggest this? If you desire to fit the model with the maximal random effects structure, and lme4 obtains a singular fit, then fitting the same model in a Bayesian framework might very well inform you why lme4 had problems, by inspecting trace plots and how well the various parameter estimates converge. The advantage of taking the Bayesian approach is that by doing so you may uncover a problem with the original model (i.e. the reason why the maximal random effects structure isn't supported by the data), or it might uncover why lme4 is unable to fit the model. I have encountered situations where a Bayesian model does not converge well unless informative priors are used – which may or may not be OK. In short, both approaches have merit. However, I would always start from a place where the initial model is parsimonious and informed by expert domain knowledge to determine the most appropriate random effects structure. Specifying grouping variables is relatively easy, but random slopes usually don't have to be included. Only include them if they make sound theoretical sense AND they are supported by the data. Edit: It is mentioned in the comments that there are sound theoretical reasons to fit the maximal random effects structure. So, a relatively easy way to proceed with an equivalent Bayesian model is to swap the call to glmer with stan_glmer from the rstanarm package – it is designed to be plug and play. It has default priors, so you can quickly get a model fitted. The package also has many tools for assessing convergence. If you find that all the parameters have converged to plausible values, then you are all good. However there can be a number of issues – for example a variance being estimated at or very near zero, or an estimate that continues to drift. The mc-stan.org site has a wealth of information and a user forum.
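A minimal sketch of the glmer-to-stan_glmer swap mentioned in the edit, using the model formula from the question (here dat is a placeholder for your data frame, and rstanarm's default weakly informative priors are assumed):

library(lme4)
library(rstanarm)

# Frequentist fit -- this is where the singular-fit message appears
m_freq <- glmer(Y ~ X * Condition + (X * Condition | subject),
                data = dat, family = binomial)

# Same model, fitted by MCMC with rstanarm's default priors
m_bayes <- stan_glmer(Y ~ X * Condition + (X * Condition | subject),
                      data = dat, family = binomial)

summary(m_bayes)   # check Rhat and effective sample sizes for signs of convergence trouble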
{ "source": [ "https://stats.stackexchange.com/questions/378939", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/137404/" ] }
379,799
In Judea Pearl's "Book of Why" he talks about what he calls the Ladder of Causation, which is essentially a hierarchy comprised of different levels of causal reasoning. The lowest is concerned with patterns of association in observed data (e.g., correlation, conditional probability, etc.), the next focuses on intervention (what happens if we deliberately change the data generating process in some prespecified way?), and the third is counterfactual (what would happen in another possible world if something had or had not happened)? What I'm not understanding is how rungs two and three differ. If we ask a counterfactual question, are we not simply asking a question about intervening so as to negate some aspect of the observed world?
There is no contradiction between the factual world and the action of interest at the interventional level. For example, smoking until today and being forced to quit smoking starting tomorrow are not in contradiction with each other, even though you could say one “negates” the other. But now imagine the following scenario. You know Joe, a lifetime smoker who has lung cancer, and you wonder: what if Joe had not smoked for thirty years, would he be healthy today? In this case we are dealing with the same person, at the same time, imagining a scenario where action and outcome are in direct contradiction with known facts. Thus, the main difference between interventions and counterfactuals is that, whereas in interventions you are asking what will happen on average if you perform an action, in counterfactuals you are asking what would have happened had you taken a different course of action in a specific situation, given that you have information about what actually happened. Note that, since you already know what happened in the actual world, you need to update your information about the past in light of the evidence you have observed. These two types of queries are mathematically distinct because they require different levels of information to be answered (counterfactuals need more information to be answered) and even more elaborate language to be articulated! With the information needed to answer Rung 3 questions you can answer Rung 2 questions, but not the other way around. More precisely, you cannot answer counterfactual questions with just interventional information. Examples where the clash of interventions and counterfactuals happens were already given here in CV, see this post and this post . However, for the sake of completeness, I will include an example here as well. The example below can be found in Causality, section 1.4.4. Consider that you have performed a randomized experiment where patients were randomly assigned (50% / 50%) to treatment ( $x =1$ ) and control conditions ( $x=0$ ), and in both the treatment and control groups 50% recovered ( $y=0$ ) and 50% died ( $y=1$ ). That is, $P(y|x) = 0.5~~~\forall x,y$ . The result of the experiment tells you that the average causal effect of the intervention is zero. This is a rung 2 question, $P(Y = 1|do(X = 1)) - P(Y=1|do(X=0)) = 0$ . But now let us ask the following question: what percentage of those patients who died under treatment would have recovered had they not taken the treatment? Mathematically, you want to compute $P(Y_{0} = 0|X =1, Y = 1)$ . This question cannot be answered just with the interventional data you have. The proof is simple: I can create two different causal models that will have the same interventional distributions, yet different counterfactual distributions. The two are provided below: Here, $U$ amounts to unobserved factors that explain how the patient reacts to the treatment. You can think of factors that explain treatment heterogeneity, for instance. Note that the marginal distributions $P(y, x)$ of both models agree. Note that, in the first model, no one is affected by the treatment, thus the percentage of those patients who died under treatment that would have recovered had they not taken the treatment is zero. However, in the second model, every patient is affected by the treatment, and we have a mixture of two populations in which the average causal effect turns out to be zero.
In this example, the counterfactual quantity now goes to 100% --- in Model 2, all patients who died under treatment would have recovered had they not taken the treatment. Thus, there's a clear distinction of rung 2 and rung 3. As the example shows, you can't answer counterfactual questions with just information and assumptions about interventions. This is made clear with the three steps for computing a counterfactual: Step 1 (abduction): update the probability of unobserved factors $P(u)$ in light of the observed evidence $P(u|e)$ Step 2 (action): perform the action in the model (for instance $do(x))$ . Step 3 (prediction): predict $Y$ in the modified model. This will not be possible to compute without some functional information about the causal model, or without some information about latent variables.
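To make the two-model example concrete, here is an R sketch of one pair of structural models consistent with the description above (Model 1: the treatment affects no one; Model 2: the treatment flips everyone's outcome). The particular structural equations are my own illustration, not taken from the book; both models give the same interventional answer but opposite counterfactual answers:

set.seed(1)
n <- 1e5
x <- rbinom(n, 1, 0.5)          # randomised treatment assignment
u <- rbinom(n, 1, 0.5)          # unobserved factor U

# Model 1: treatment has no effect on anyone
y1_m1 <- u; y0_m1 <- u
# Model 2: treatment flips everyone's outcome
y1_m2 <- u; y0_m2 <- 1 - u

# Observed outcome in each model
y_m1 <- ifelse(x == 1, y1_m1, y0_m1)
y_m2 <- ifelse(x == 1, y1_m2, y0_m2)

# Rung 2: the interventional quantities agree (about 0.5 in every arm of both models)
tapply(y_m1, x, mean)
tapply(y_m2, x, mean)

# Rung 3: P(Y_0 = 0 | X = 1, Y = 1) differs completely (0 in Model 1, 1 in Model 2)
mean(y0_m1[x == 1 & y_m1 == 1] == 0)
mean(y0_m2[x == 1 & y_m2 == 1] == 0)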
{ "source": [ "https://stats.stackexchange.com/questions/379799", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/78861/" ] }
379,808
Consider the most powerful test of the null hypothesis that $X$ is a standard normal random variable against the alternative that $X$ is a random variable having pdf $$f(x) =\frac{2}{\Gamma(1/4)}\text{exp}(−x^4)\text{ }I_{(-\infty,\infty)}(x)$$ and give the p-value if the observed value of $X$ is $0.6$ My try: I think I should be using a likelihood ratio test. I read that the Neyman–Pearson lemma states that the likelihood ratio test is the most powerful among all level $\alpha$ tests. We have that the likelihood ratio is $$\frac{f_0(x)}{f_1(x)}=\frac{\frac{1}{\sqrt{2\pi}}\text{exp}(-x^2/2)}{\frac{2}{\Gamma(1/4)}\text{exp}(-x^4)}=\frac{\Gamma(1/4)}{\sqrt{8\pi}}\text{exp}\left(\frac{-x^2}{2}+x^4\right)$$ Thus we accept $H_0$ if $$\frac{\Gamma(1/4)}{\sqrt{8\pi}}\text{exp}\left(\frac{-x^2}{2}+x^4\right)\geq c$$ or equivalently if $$\frac{-x^2}{2}+x^4 \geq \text{log}\left(\frac{\sqrt{8\pi}\cdot c}{\Gamma(1/4)}\right)$$ or equivalently if one of the following holds: $$x^2\geq \frac{1+\sqrt{1+16\cdot\text{log}(1.382746\cdot c)}}{4}$$ $$x^2\geq \frac{1-\sqrt{1+16\cdot\text{log}(1.382746\cdot c)}}{4}$$ or equivalently if one of the following holds: $$x\geq \sqrt{\frac{1+\sqrt{1+16\cdot\text{log}(1.382746\cdot c)}}{4}}$$ $$x\leq -\sqrt{\frac{1+\sqrt{1+16\cdot\text{log}(1.382746\cdot c)}}{4}}$$ $$x\geq \sqrt{\frac{1-\sqrt{1+16\cdot\text{log}(1.382746\cdot c)}}{4}}$$ $$x\leq -\sqrt{\frac{1-\sqrt{1+16\cdot\text{log}(1.382746\cdot c)}}{4}}$$ For a meaningful acceptance region we only consider the top two of the four constraints. Hence we reject if $$x\in\left(-\sqrt{\frac{1+\sqrt{1+16\cdot\text{log}(1.382746\cdot c)}}{4}},\sqrt{\frac{1+\sqrt{1+16\cdot\text{log}(1.382746\cdot c)}}{4}}\right)$$ We wish, under the null, that the probability that $X$ assumes a value in this range to be $0.05$ For this to occur, we need $$\sqrt{\frac{1+\sqrt{1+16\cdot\text{log}(1.382746\cdot c)}}{4}}=0.06270678$$ But software gives that there are no solutions for $c\in\mathbb{R}$ . Any suggestions or confirmation of my approach would be much appreciated.
{ "source": [ "https://stats.stackexchange.com/questions/379808", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/177990/" ] }
380,426
I realize this may be a potentially broad question, but I was wondering whether there are assumptions that indicate the use of a GAM (Generalized additive model) over a GLM (Generalized linear model)? Someone recently told me that GAMs should only be used when I assume the data structure to be "additive", i.e. I expect additions of x to predict y. Another person pointed out that a GAM does a different type of regression analysis than a GLM, and that a GLM is preferred when linearity can be assumed. In the past I have been using a GAM for ecological data, e.g.: continuous time-series when the data did not have a linear shape I had multiple x to predict my y that I thought to have some nonlinear interaction that I could visualize using "surface plots" together with a statistical test I obviously don't have a great understanding of what a GAM does different than a GLM. I believe it's a valid statistical test, (and I see an increase in the use GAMs, at least in ecological journals), but I need to know better when its use is indicated over other regression analyses.
The main difference imho is that while "classical" forms of linear, or generalized linear, models assume a fixed linear or some other parametric form of the relationship between the dependent variable and the covariates, GAMs do not assume a priori any specific form of this relationship, and can be used to reveal and estimate non-linear effects of the covariates on the dependent variable. In more detail, while in (generalized) linear models the linear predictor is a weighted sum of the $n$ covariates, $\sum_{i=1}^n \beta_i x_i$, in GAMs this term is replaced by a sum of smooth functions, e.g. $\sum_{i=1}^n \sum_{j=1}^q \beta_{ij} \, s_j \left( x_i \right)$, where the $s_1(\cdot),\dots,s_q(\cdot)$ are smooth basis functions (e.g. cubic splines) and $q$ is the basis dimension. By combining the basis functions, GAMs can represent a large number of functional relationships (to do so they rely on the assumption that the true relationship is likely to be smooth, rather than wiggly). They are essentially an extension of GLMs; however, they are designed in a way that makes them particularly useful for uncovering nonlinear effects of numerical covariates, and for doing so in an "automatic" fashion (from Hastie and Tibshirani's original article, they have 'the advantage of being completely automatic, i.e. no "detective" work is needed on the part of the statistician').
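As a small illustration (my own sketch, not part of the original answer, using simulated data), the mgcv package fits such smooths with essentially the same syntax as a GLM:

library(mgcv)
set.seed(1)
n <- 500
x <- runif(n, -2, 2)
y <- sin(2 * x) + rnorm(n, sd = 0.3)   # a clearly non-linear relationship

glm_fit <- glm(y ~ x)        # forces a straight-line effect of x
gam_fit <- gam(y ~ s(x))     # lets the data choose the shape of the effect

AIC(glm_fit, gam_fit)        # the GAM should fit far better on these data
plot(gam_fit)                # estimated smooth effect of x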
{ "source": [ "https://stats.stackexchange.com/questions/380426", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/123056/" ] }
380,451
I am looking for "high-dimensional" data for a course project. The requirements of an ideal dataset for me are: 1. $p>n$ (or at least $p> \sqrt{n}$ ), where $p$ is the number of variables and $n$ is the number of observations; 2. $p\times n$ is hundreds by hundreds. I find it's hard to find datasets that meet such conditions so any kinds of topics of the dataset is welcome. Many thanks.
{ "source": [ "https://stats.stackexchange.com/questions/380451", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/181366/" ] }
380,482
We want to apply extreme value theory to the maximum yearly temperatures. Before we choose the model we want to test whether or not we can assume independence. First we made a plot of the autocorrelation function: Thereafter, we applied the Ljung-Box and/or Box-Pierce test. However we find that if we only include one lag, there is no statistical evidence of dependence on the 5% level (our p-value is about 0.1) but if we include 2 or more lags, the p-values are significantly lower than our significance level. Can anyone help us understand what we can conclude from these results?
{ "source": [ "https://stats.stackexchange.com/questions/380482", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/229265/" ] }
380,789
If the joint probability is the intersection of 2 events, then shouldn't the joint probability of 2 independent events be zero since they don't intersect at all? I'm confused.
There is a difference between:
independent events: $\mathbb P(A \cap B) = \mathbb P(A)\,\mathbb P(B)$, i.e. $\mathbb P(A \mid B) = \mathbb P(A)$, so knowing one happened gives no information about whether the other happened;
mutually disjoint events: $\mathbb P(A \cap B) = 0$, i.e. $\mathbb P(A \mid B) = 0$, so knowing one happened means the other did not happen.

You asked for a picture. This might help:
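Since the picture itself is not reproduced here, a quick numerical check (my own addition, not from the answer) with a fair die makes the same point:

# A = "roll is even", B = "roll is at most 2": independent but not disjoint
# A = "roll is even", C = "roll is odd":       disjoint but not independent
set.seed(1)
roll <- sample(1:6, 1e5, replace = TRUE)
A <- roll %% 2 == 0
B <- roll <= 2
C <- roll %% 2 == 1

c(mean(A & B), mean(A) * mean(B))   # both close to 1/6: P(A and B) = P(A) P(B)
mean(A & C)                         # exactly 0: the events never co-occur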
{ "source": [ "https://stats.stackexchange.com/questions/380789", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/213041/" ] }
383,310
I've read this post , but I wanted more clarification for a broader question. In Keras, there are now three types of regularizers for a layer: kernel_regularizer , bias_regularizer , activity_regularizer . I have read posts that explain the difference between L1 and L2 norm, but in an intuitive sense, I'd like to know how each regularizer will affect the aforementioned three types of regularizers and when to use what. The motivation for my question is that my understanding is that regularizers are usually applied to the loss function. However, they're even being added to bias term. I'm not able to wrap my head around why one would think to do this, let alone be able to discern when to use L1 and L2 for the bias regularizer. Hence, I wanted to get an overall understanding of all three entities that regularizers are applied on and in general know how the 2 kinds of regularizers can affect each of those entities at a high level.
What is the difference between them? You have the regression equation $y = Wx+b$, where $x$ is the input, $W$ the weights matrix and $b$ the bias.

Kernel regularizer: tries to reduce the weights $W$ (excluding the bias). Bias regularizer: tries to reduce the bias $b$. Activity regularizer: tries to reduce the layer's output $y$, and will thus reduce the weights and adjust the bias so that $Wx+b$ is small.

When to use which? Usually, if you have no prior on the distribution that you wish to model, you would only use the kernel regularizer, since a large enough network can still model your function even if the regularization on the weights is big. If you want the output function to pass through (or have an intercept closer to) the origin, you can use the bias regularizer. If you want the output to be smaller (or closer to 0), you can use the activity regularizer.

$L_1$ versus $L_2$ regularization. Now, for the $L_1$ versus $L_2$ loss for weight decay (not to be confused with the output's loss function): the $L_2$ loss is defined as $w^2$ and the $L_1$ loss is defined as $|w|$, where $w$ is a component of the matrix $W$. The gradient of $L_2$ is $2w$, and the gradient of $L_1$ is $\operatorname{sign}(w)$. Thus, for each gradient update with a learning rate $a$, in $L_2$ loss the weights are reduced by $2aW$, while in $L_1$ loss they are reduced by $a \cdot \operatorname{sign}(W)$. The effect of the $L_2$ loss on the weights is a reduction of the large components of the matrix $W$, while the $L_1$ loss will make the weights matrix sparse, with many zero values. The same applies to the bias and the output, respectively, using the bias and activity regularizers.
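As a toy illustration of that last point (my own sketch, not from the answer; it ignores the data-fitting term and just applies the penalty gradients repeatedly):

w_l1 <- w_l2 <- c(2, 0.5, 0.05, -1)   # a small "weight vector"
a <- 0.01                             # learning rate
for (i in 1:80) {
  w_l2 <- w_l2 - a * 2 * w_l2         # gradient of w^2
  w_l1 <- w_l1 - a * sign(w_l1)       # (sub)gradient of |w|
  w_l1[abs(w_l1) < a] <- 0            # stop oscillating once a weight reaches 0
}
round(w_l2, 3)   # all components shrunk proportionally, none exactly 0
round(w_l1, 3)   # small components are exactly 0; large ones shrink by a fixed amount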
{ "source": [ "https://stats.stackexchange.com/questions/383310", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/133742/" ] }
383,605
Statistics.com published a problem of the week: The rate of residential insurance fraud is 10% (one out of ten claims is fraudulent). A consultant has proposed a machine learning system to review claims and classify them as fraud or no-fraud. The system is 90% effective in detecting the fraudulent claims, but only 80% effective in correctly classifying the non-fraud claims (it mistakenly labels one in five as “fraud”). If the system classifies a claim as fraudulent, what is the probability that it really is fraudulent? https://www.statistics.com/news/231/192/Conditional-Probability/?showtemplate=true My peer and I both came up with the same answer independently and it doesn't match the published solution. Our solution: (.9*.1)/((.9*.1)+(.2*.9))=1/3 Their solution: This is a problem in conditional probability. (It’s also a Bayesian problem, but applying the formula in Bayes Rule only helps to obscure what’s going on.) Consider 100 claims. 10 will be fraudulent, and the system will correctly label 9 of them as “fraud.” 90 claims will be OK, but the system will incorrectly classify 72 (80%) as “fraud.” So a total of 81 claims have been labeled as fraudulent, but only 9 of them, 11%, are actually fraudulent. Who was right
I believe that you and your colleague are correct. Statistics.com has the correct line of thinking, but makes a simple mistake. Out of the 90 "OK" claims, we expect 20% of them to be incorrectly classified as fraud, not 80%. 20% of 90 is 18, leading to 9 correctly flagged fraudulent claims and 18 incorrectly flagged ones, so the proportion of flagged claims that are actually fraudulent is 9/27 = 1/3, exactly what Bayes' rule yields.
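For completeness, a direct numerical check (my own addition) of both the Bayes' rule calculation and the corrected counting argument:

p_fraud <- 0.10   # prior probability a claim is fraudulent
sens    <- 0.90   # P(flagged | fraud)
fpr     <- 0.20   # P(flagged | not fraud)

(p_fraud * sens) / (p_fraud * sens + (1 - p_fraud) * fpr)   # 1/3

# The "100 claims" argument, done with the correct false-positive rate:
c(true_flags  = 100 * p_fraud * sens,          # 9
  false_flags = 100 * (1 - p_fraud) * fpr)     # 18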
{ "source": [ "https://stats.stackexchange.com/questions/383605", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/231634/" ] }
383,656
Consider the joint density function: $$f(x,y) = \begin{cases} 2 & & \text{for } 0 \leq x \leq1 \text{ and } 0 \leq y \leq 1-x, \\[6pt] 0 & & \text{otherwise}. \end{cases}$$ From this joint density I figured out the following marginal densities: $$f_X(x) = 2(1-x),\\ f_Y(y) = 2.$$ The marginal density $f_Y$ is supposedly wrong, as the solutions provided to me say to calculate $\int^{1-y}_0 2 \, dx$ . I don't see why I need to integrate over $[0, 1-y]$ and not over $[0,1]$ . I thought the range for $x$ does not depend on $y$ , or does it?
{ "source": [ "https://stats.stackexchange.com/questions/383656", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/225012/" ] }
383,703
Apologies in advance since I cannot provide a reproducible example due to the immense size of my model. I'll do my best to describe my situation fully, hopefully this will be sufficient. My model looks like this: model <- binary_outcome ~ s(height, side, bottom, top, by=level) + s(side_angle, height_angle, by=level) + fixed_eff + level I'm trying to use splines to model non-linear relationships in the data, splitting them by level. I am fairly confident that this model structure makes sense. Due to data set size issues, I estimate a different model for each year. I then fit the model: fit <- bam(model, data=d, family=binomial(link="probit"), gc.level=1, nthreads=14, control=gam.control(trace=TRUE)) The first thing I've noticed is that the size of fit does not vary in proportion by d . The largest data sets sometimes create the smallest model sizes. Inspecting this further it seems to be that fit$smooths has an object X0 that drives the size of fit . The largest objects have nrow(X0) == nrow(d) whereas the smaller objects have nrow(X0) << nrow(d) . This leads me to the biggest issue which is that the smaller fit objects tend to produce much worse predictions (measuring by RMSE) than the larger objects when I do: d$pred <- predict.bam(fit, newdata=d, type="response") Now I've found a solution which seems to produce predictions in line with what I would expect, but it seems strange that it should work, and I would love to know why it's working. It involves subsetting d by level, applying d $level <- factor(d$ level) and then running predict.bam() again on the subsetted data: d_l <- d[d $level == level,] d_l$ level <- factor(d $level) d_l$ pred <- predict.bam(fit, newdata=d_l, type="response") This seems odd since the data is essentially the same, the only difference really is that the level factor now only has one level since the data was subset in line 1. Happy to elaborate or test out any suggestions/ideas. Thanks!
{ "source": [ "https://stats.stackexchange.com/questions/383703", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/108959/" ] }
385,231
On page 304 of chapter 8 of An Introduction to Statistical Learning with Applications in R (James et al.), the authors say: We use the Hitters data set to predict a baseball player’s Salary based on Years (the number of years that he has played in the major leagues) and Hits (the number of hits that he made in the previous year). We first remove observations that are missing Salary values, and log-transform Salary so that its distribution has more of a typical bell-shape. (Recall that Salary is measured in thousands of dollars.) No additional motivation for the log-transform is given. Being that the data are being fed into decision tree algorithms, why was it important to force the data into a normal distribution? I thought most/all decision tree algorithms were invariant to scale changes.
In this case, the salary is the target (dependent variable/outcome) of the decision tree, not one of the features (independent variables/predictors). You are correct that decision trees are insensitive to the scale of the predictors , but since I suspect there are a small number of extremely large salaries, transforming the salaries might improve predictions because loss functions which minimize square error will not be so strongly influenced by these large values.
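A small simulated illustration of why this matters (my own sketch; the numbers are made up and are not the Hitters data): a handful of very large salaries can dominate a squared-error criterion on the raw scale, but not on the log scale.

set.seed(1)
salary <- exp(rnorm(200, mean = 6, sd = 1))    # right-skewed "salaries"

sq_err_raw <- (salary - mean(salary))^2
sq_err_log <- (log(salary) - mean(log(salary)))^2

top5 <- order(salary, decreasing = TRUE)[1:5]
sum(sq_err_raw[top5]) / sum(sq_err_raw)   # share of total squared error: large
sum(sq_err_log[top5]) / sum(sq_err_log)   # much smaller on the log scale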
{ "source": [ "https://stats.stackexchange.com/questions/385231", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/165782/" ] }
388,566
Statement One (S1): "One in 80 deaths is caused by a car accident." Statement Two (S2): "One in 80 people dies as a result of a car accident." Now, I personally don't see very much difference at all between these two statements. When writing, I would consider them interchangeable to a lay audience. However, I've been challenged on this by two people now, and am looking for some additional perspective. My default interpretation of S2 is, "Of 80 people drawn uniformly at random from the population of humans, we would expect one of them to die as a result of a car accident"- and I do consider this qualified statement equivalent to S1. My questions are as follows: Q1) Is my default interpretation indeed equivalent to Statement One? Q2) Is unusual or reckless for this to be my default interpretation? Q3) If you do think S1 and S2 different, such that to state the second when one means the first is misleading/incorrect, could you please provide a fully-qualified revision of S2 that is equivalent? Let's put aside the obvious quibble that S1 does not specifically refer to human deaths and assume that that is understood in context. Let us also put aside any discussion of the veracity of the claim itself: it is meant to be illustrative. As best I can tell, the disagreements I've heard so far seem to center around defaulting to different interpretations of the first and second statement. For the first, my challengers seem to interpret it as as 1/80 * num_deaths = number of deaths caused by car accidents, but for some reason, default to a different interpretation of the second along the lines of, "if you have any set of 80 people, one of them will die in a car accident" (which is obviously not an equivalent claim). I would think that given their interpretation of S1, their default for S2 would be to read it as (1/80 * num_dead_people = number of people who died in a car accident == number of deaths caused by car accident). I'm not sure why the discrepancy in interpretation (their default for S2 is a much stronger assumption), or if they have some innate statistical sense that I'm in fact lacking.
First of all, my first intuitive thought was: "S2 can only be the same as S1 if the traffic death rate stays constant, possibly over decades" - which certainly wouldn't have been a good assumption in the last so many decades. This already hints that one difficulty lies with implicit/unspoken temporal assumptions. I'd say your statements have the form 1 in $x$ $population$ experience $event$ . In S1, the population are deaths, and the implied temporal specification is at present or "in a suitably large [to have sufficent case numbers] but not too wide time frame [to have approximately constant car accident characteristics] around the present" In S2, the population are people. And others seem to read this not as "dying people" but as "living people" (which after all, is what people more frequently/longer do). If you read the population as living people, clearly, not one of every 80 people living now dies "now" of a car accident. So that is read as "when they are dying [possibly decades from now], the cause of death is car accident". Take home message: always be careful to spell out who your population are and the denominator of fractions in general. (Gerd Gigerenzer has papers about not spelling out the denominator being a major cause of confusion, particularly in statistics and risk communication).
{ "source": [ "https://stats.stackexchange.com/questions/388566", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/228214/" ] }
388,937
Is there a rational for the number of observations per cluster in a random effect model? I have a sample size of 1,500 with 700 clusters modeled as exchangeable random effect. I have the option to merge clusters in order to build fewer, but larger clusters. I wonder how can I choose the minimum sample size per cluster as to have meaningful results in predicting the random effect for each cluster? Is there a good paper that explains this?
TL;DR : The minimum sample size per cluster in a mixed-effecs model is 1, provided that the number of clusters is adequate, and the proportion of singleton cluster is not "too high" Longer version: In general, the number of clusters is more important than the number of observations per cluster. With 700, clearly you have no problem there. Small cluster sizes are quite common, especially in social science surveys that follow stratified sampling designs, and there is a body of research that has investigated cluster-level sample size. While increasing the cluster size increases statistical power to estimate the random effects (Austin & Leckie, 2018), small cluster sizes do not lead to serious bias (Bell et al, 2008; Clarke, 2008; Clarke & Wheaton, 2007; Maas & Hox, 2005). Thus, the minimum sample size per cluster is 1. In particular, Bell, et al (2008) performed a Monte Carlo simulation study with proportions of singleton clusters (clusters containing only a single observation) ranging from 0% to 70%, and found that, provided the number of clusters was large (~500) the small cluster sizes had almost no impact on bias and Type 1 error control. They also reported very few problems with model convergence under any of their modelling scenarios. For the particular scenario in the OP, I would suggest running the model with 700 clusters in the first instance. Unless there was a clear problem with this, I would be disinclined to merge clusters. I ran a simple simulation in R: Here we create a clustered dataset with with a residual variance of 1, a single fixed effect also of 1, 700 clusters, of which 690 are singletons and 10 have just 2 observations. We run the simulation 1000 times and observe the histograms of the estimated fixed and residual random effects. > set.seed(15) > dtB <- expand.grid(Subject = 1:700, measure = c(1)) > dtB <- rbind(dtB, dtB[691:700, ]) > fixef.v <- numeric(1000) > ranef.v <- numeric(1000) > for (i in 1:1000) { dtB $x <- rnorm(nrow(dtB), 0, 1) dtB$ y <- dtB $Subject/100 + rnorm(nrow(dtB), 0, 1) + dtB$ x * 1 fm0B <- lmer(y ~ x + (1|Subject), data = dtB) fixef.v[i] <- fixef(fm0B)[[2]] ranef.v[i] <- attr(VarCorr(fm0B), "sc") } > hist(fixef.v, breaks = 15) > hist(ranef.v, breaks = 15) As you can see, the fixed effects are very well estimated, while the residual random effects appear to be a little downward-biased, but not drastically so: > summary(fixef.v) Min. 1st Qu. Median Mean 3rd Qu. Max. 0.6479 0.9439 0.9992 1.0005 1.0578 1.2544 > summary(ranef.v) Min. 1st Qu. Median Mean 3rd Qu. Max. 0.2796 0.7745 0.9004 0.8993 1.0212 1.4837 The OP specifically mentions the estimation of cluster-level random effects. In the simulation above, the random effects were created simply as the value of each Subject 's ID (scaled down by a factor of 100). Obviously these are not normally distributed, which is the assumption of linear mixed effects models, however, we can extract the (conditional modes of) the cluster level effects and plot them against the actual Subject IDs: > re <- ranef(fm0B)[[1]][, 1] > dtB $re <- append(re, re[691:700]) > hist(dtB$ re) > plot(dtB $re, dtB$ Subject) The histogram departs from normality somewhat, but this is due to the way we simulated the data. There is still a reasonable relationship between the estimated and actual random effects. References: Peter C. 
Austin & George Leckie (2018) The effect of number of clusters and cluster size on statistical power and Type I error rates when testing random effects variance components in multilevel linear and logistic regression models, Journal of Statistical Computation and Simulation, 88:16, 3151-3163, DOI: 10.1080/00949655.2018.1504945 Bell, B. A., Ferron, J. M., & Kromrey, J. D. (2008). Cluster size in multilevel models: the impact of sparse data structures on point and interval estimates in two-level models . JSM Proceedings, Section on Survey Research Methods, 1122-1129. Clarke, P. (2008). When can group level clustering be ignored? Multilevel models versus single-level models with sparse data . Journal of Epidemiology and Community Health, 62(8), 752-758. Clarke, P., & Wheaton, B. (2007). Addressing data sparseness in contextual population research using cluster analysis to create synthetic neighborhoods . Sociological Methods & Research, 35(3), 311-351. Maas, C. J., & Hox, J. J. (2005). Sufficient sample sizes for multilevel modeling . Methodology, 1(3), 86-92.
{ "source": [ "https://stats.stackexchange.com/questions/388937", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/179411/" ] }
389,395
Why is it most of the time (in many websites, articles or demonstration) the feature variable (columns) is denoted by a upper-case 'X' whereas the target variable is a lower-case 'y'? Looks more like a coding standard to me. Ex. X = df.iloc[:, :-1] y = df.iloc[:, -1] Just curious because I hardly ever use just a single letter to represent a variable storing meaningful data.
The question about why $X$ and $y$ are popular choices in mathematical notation has been answered on the History of Science and Mathematics SE website: Why are X and Y commonly used as mathematical placeholders? (In short: because Descartes said so!) In terms of Linear Algebra, it is extremely common to use capital Latin letters for matrices (e.g. design matrix $X$) and lowercase Latin letters for vectors (response vector $y$). Standard textbooks on the use of matrices in Statistics (e.g. Matrix Algebra Useful for Statistics by Searle, Matrix Algebra From a Statistician's Perspective by Harville and Matrix Algebra: Theory, Computations, and Applications in Statistics by Gentle) utilise this convention too, so it has become a standard way to denote things.
{ "source": [ "https://stats.stackexchange.com/questions/389395", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/233405/" ] }
389,412
I'm taking a course on ML and have just began. Given a loss function, $$L = \frac{1}{N}\sum^N_{n=1}(t_n - w_0 + w_1x_n)^2$$ I am confused between the difference of using gradient descent (and maybe even its purpose) as opposed to minimizing the error by finding the best values of our parameters $w_0, w_1$ by differentiating them to find a minima for each. $$w_0 = \bar{t} - w_1\bar{x}, \;\;\;\;\;\;w_1 = \frac{\bar{x}\bar{t} - \bar{x}t}{\bar{x}\bar{x} - \bar{x}^2}$$ Or, am I greatly confused, and this is gradient descent?
{ "source": [ "https://stats.stackexchange.com/questions/389412", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/235520/" ] }
389,545
Let's say I am studying how daffodils respond to various soil conditions. I have collected data on the pH of the soil versus the mature height of the daffodil. I'm expecting a linear relationship, so I go about running a linear regression. However, I didn't realize when I started my study that the population actually contains two varieties of daffodil, each of which responds very differently to soil pH. So the graph contains two distinct linear relationships: I can eyeball it and separate it manually, of course. But I wonder if there is a more rigorous approach. Questions: Is there a statistical test to determine whether a data set would be better fit by a single line or by N lines? How would I run a linear regression to fit the N lines? In other words, how do I disentangle the co-mingled data? I can think of some combinatorial approaches, but they seem computationally expensive. Clarifications: The existence of two varieties was unknown at the time of data collection. The variety of each daffodil was not observed, not noted, and not recorded. It is impossible to recover this information. The daffodils have died since the time of data collection. I have the impression that this problem is something similar to applying clustering algorithms, in that you almost need to know the number of clusters before you start. I believe that with ANY data set, increasing the number of lines will decrease the total r.m.s. error. In the extreme, you can divide your data set into arbitrary pairs and simply draw a line through each pair. (E.g., if you had 1000 data points, you could divide them into 500 arbitrary pairs and draw a line through each pair.) The fit would be exact and the r.m.s. error would be exactly zero. But that's not what we want. We want the "right" number of lines.
I think Demetri's answer is a great one if we assume that you have the labels for the different varieties. When I read your question that didn't seem to be the case to me. We can use an approach based on the EM algorithm to basically fit the model that Demetri suggests but without knowing the labels for the variety. Luckily the mixtools package in R provides this functionality for us. Since your data is quite separated and you seem to have quite a bit it should be fairly successful. library(mixtools) # Generate some fake data that looks kind of like yours n1 <- 150 ph1 = runif(n1, 5.1, 7.8) y1 <- 41.55 + 5.185*ph1 + rnorm(n1, 0, .25) n2 <- 150 ph2 <- runif(n2, 5.3, 8) y2 <- 65.14 + 1.48148*ph2 + rnorm(n2, 0, 0.25) # There are definitely better ways to do all of this but oh well dat <- data.frame(ph = c(ph1, ph2), y = c(y1, y2), group = rep(c(1,2), times = c(n1, n2))) # Looks about right plot(dat $ph, dat$ y) # Fit the regression. One line for each component. This defaults # to assuming there are two underlying groups/components in the data out <- regmixEM(y = dat $y, x = dat$ ph, addintercept = T) We can examine the results > summary(out) summary of regmixEM object: comp 1 comp 2 lambda 0.497393 0.502607 sigma 0.248649 0.231388 beta1 64.655578 41.514342 beta2 1.557906 5.190076 loglik at estimate: -182.4186 So it fit two regressions and it estimated that 49.7% of the observations fell into the regression for component 1 and 50.2% fell into the regression for component 2. The way I simulated the data it was a 50-50 split so this is good. The 'true' values I used for the simulation should give the lines: y = 41.55 + 5.185*ph and y = 65.14 + 1.48148*ph (which I estimated 'by hand' from your plot so that the data I create looks similar to yours) and the lines that the EM algorithm gave in this case were: y = 41.514 + 5.19*ph and y = 64.655 + 1.55*ph Pretty darn close to the actual values. We can plot the fitted lines along with the data plot(dat $ph, dat$ y, xlab = "Soil Ph", ylab = "Flower Height (cm)") abline(out $beta[,1], col = "blue") # plot the first fitted line abline(out$ beta[,2], col = "red") # plot the second fitted line
{ "source": [ "https://stats.stackexchange.com/questions/389545", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/235632/" ] }
389,557
In the deep learning process, especially SAR-ATR (e.g., generic object detection), is there any relation between the training time (speed) and the training batch size? Is there a paper related to this question?
{ "source": [ "https://stats.stackexchange.com/questions/389557", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/235633/" ] }
389,590
I'm reading the Statistics (Freeman, Pisani, Purves) book and I'm trying to reproduce an example where a coin is tossed say 50 times, the number of heads counted and this is repeated say 1,000 times. First, I kept the number of tosses (sample size) at 1000 and increased the repetitions. The more repetitions, the better the data fit the normal curve. So next, I tried keeping the number of repetitions fixed at 1,000 and increased the sample size. The larger the sample size, the worst the normal curved seemed to fit the data. This seems to be contradictory to the book example which better approximates the normal curve as the sample size increases. I wanted to see what would happen if I increased the sample size, but with a larger number of repetitions which were fixed at 10,000. This seems to also contradict the book. Any ideas what I'm doing wrong? Code and graphs below. %matplotlib inline def plot_hist(num_repetitions, num_tosses): tosses = np.random.randint(0, 2, size=[num_repetitions, num_tosses]) sums = np.apply_along_axis(lambda a: np.sum(a == 1), 1, tosses) xmin, xmax = min(sums), max(sums) lnspc = np.linspace(xmin, xmax, len(sums)) m, s = stats.norm.fit(sums) # get mean and standard deviation pdf_g = stats.norm.pdf(lnspc, m, s) # now get theoretical values in our interval bins = np.arange(xmin, xmax) - 0.5 step = int((xmax - xmin)/5) fig, ax = plt.subplots() _ = ax.hist(sums, bins, edgecolor='black', linewidth=1.2, density=True) _ = ax.plot(lnspc, pdf_g, label="Norm", color='red') _ = ax.set_xticks(bins[::step] + 0.5) _ = ax.set_title('{:,} tosses - {:,} repetitions'.format(num_tosses, num_repetitions)) 1. Experiment with increasing number of repetitions (fixed sample size of 1000) plot_hist(1000, 1000) plot_hist(10000, 1000) plot_hist(100000, 1000) 2. Experiment with increasing sample size (fixed at 1000 repetitions) plot_hist(1000, 100) plot_hist(1000, 1000) plot_hist(1000, 10000) 3. Experiment with increasing sample size (fixed at 10,000 repetitions) plot_hist(10000, 100) plot_hist(10000, 1000) plot_hist(10000, 10000) plot_hist(10000, 100000)
In the second case, by increasing the number of tosses, you increase the number of bins a single trial can fall into. While the first case of experiment 2 only has a maximum of 100 bins that can be filled, the last example has 10000 bins. You increased the "resolution" of your experiment by a factor of 100 (i.e., one bin in your first experiment is now represented by roughly 100 in your second). Of course this means that you would expect to require a factor of 100 more data to fill your bins.
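A rough numerical check of the thinning effect (my own addition, not part of the answer): with the number of repetitions fixed, the sums spread over many more distinct values as the number of tosses grows, so each bin of the histogram receives far fewer observations.

set.seed(1)
reps <- 1000
for (n in c(100, 1000, 10000)) {
  sums <- rbinom(reps, n, 0.5)   # number of heads in each of the 1000 repetitions
  cat(n, "tosses:", n + 1, "possible bins,", length(unique(sums)), "occupied,",
      round(reps / length(unique(sums)), 1), "repetitions per occupied bin\n")
}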
{ "source": [ "https://stats.stackexchange.com/questions/389590", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/48448/" ] }
389,613
I'm reading Agresti - Categorical Data Analysis and it says Consider two models, $M_0$ with fitted values $\hat{\mu}_0$ and $M_1$ with fitted values $\hat{\mu}_1$ with $M_0$ a special case of $M_1$ . A smaller set of parameter values satisfies $M_0$ than satisfies $M_1$ . Maximizing the log likelihood over a smaller space cannot yield a larger maximum. Thus $L(\hat{\mu}_0;y) \leq L(\hat{\mu}_1;y)$ But this is not like say $L(\theta)\leq L(\hat{\theta}), \forall \theta$ if $\hat{\theta}$ is the MLE. Because in that quote, the dimensions are different. Maybe the intuition is correct: the fit is more "likely" if I use more parameters to adjust the data. But I'd like a mathematical explanation of that quote. Thanks
{ "source": [ "https://stats.stackexchange.com/questions/389613", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/235672/" ] }
390,712
I'm trying to understand what Markov chain Monte Carlo (MCMC) are from the French Wikipedia page. They say "that the Markov chain Monte Carlo methods consist of generating a vector $x_ {i}$ only from the vector data $x_ {i-1}$ it is therefore a process "without memory"" Les méthodes de Monte-Carlo par chaînes de Markov consistent à générer un vecteur $x_{i}$ uniquement à partir de la donnée du vecteur $x_{{i-1}}$ ; c'est donc un processus « sans mémoire », I don't understand why they say MCMC are "without memory" as far as we use information from the vector data $x_ {i-1}$ to generate $x_i$ .
The defining characteristic of a Markov chain is that the conditional distribution of its present value, given all the past values, depends only on the previous value. So every Markov chain is "without memory" to the extent that only the previous value affects the present conditional distribution, and all earlier states are "forgotten". (You are right that it is not completely without memory - after all, the conditional distribution of the present value depends on the previous value.) That is true for MCMC and also for any other Markov chain.
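A minimal sketch (my own, not from the answer) of a random-walk Metropolis chain makes the point concrete: each new value is generated from the current value alone, never from anything earlier in the chain.

set.seed(1)
target <- function(x) dnorm(x)   # standard normal target density
n <- 5000
x <- numeric(n)                  # chain starts at x[1] = 0
for (i in 2:n) {
  prop <- x[i - 1] + rnorm(1)                               # uses only x[i - 1]
  accept <- runif(1) < min(1, target(prop) / target(x[i - 1]))
  x[i] <- if (accept) prop else x[i - 1]
}
c(mean(x), sd(x))   # roughly 0 and 1 once the chain has mixed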
{ "source": [ "https://stats.stackexchange.com/questions/390712", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/92423/" ] }
391,054
I am trying to write a program in R that simulates pseudo random numbers from a distribution with the cumulative distribution function: $$F(x)= 1-\exp \left(-ax-\frac{b}{p+1}x^{p+1}\right), \quad x \geq 0$$ where $a,b>0, p \in (0,1)$ I tried inverse transform sampling but the inverse does not seem to be analytically solvable. I would be glad if you could suggest a solution to this problem
There is a straightforward (and if I may add, elegant) solution to this exercise: since $1-F(x)$ factors as a product of two survival functions: $$(1-F(x))=\exp\left\{-ax-\frac{b}{p+1}x^{p+1}\right\}=\underbrace{\exp\left\{-ax\right\}}_{1-F_1(x)}\underbrace{\exp\left\{-\frac{b}{p+1}x^{p+1}\right\}}_{1-F_2(x)}$$ the distribution $F$ is the distribution of $$X=\min\{X_1,X_2\}\qquad X_1\sim F_1\,,X_2\sim F_2$$ In this case $F_1$ is the Exponential $\mathcal{E}(a)$ distribution and $F_2$ is the $1/(p+1)$-th power of an Exponential $\mathcal{E}(b/(p+1))$ distribution. The associated R code is as simple as it gets x=pmin(rexp(n,a),rexp(n,b/(p+1))^(1/(p+1))) #simulating an n-sample and it is definitely much faster than the inverse cdf and accept-reject approaches: > n=1e6 > system.time(results <- Vectorize(simulate,"prob")(runif(n))) user system elapsed 89.060 0.072 89.124 > system.time(x <- simuF(n,1,2,3)) user system elapsed 1.080 0.020 1.103 > system.time(x <- pmin(rexp(n,a),rexp(n,b/(p+1))^(1/(p+1)))) user system elapsed 0.160 0.000 0.163 with an unsurprisingly perfect fit:
{ "source": [ "https://stats.stackexchange.com/questions/391054", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/119920/" ] }
391,838
Assuming I have two non-independent random variables and I want to reduce covariance between them as much as possible without loosing too much "signal", does mean centering help? I read somewhere that mean centering reduces correlation by a significant factor, so I'm thinking it should do the same for covariance.
If $X$ and $Y$ are random variables and $a$ and $b$ are constants, then $$ \begin{aligned} \operatorname{Cov}(X + a, Y + b) &= E[(X + a - E[X + a])(Y + b - E[Y + b])] \\ &= E[(X + a - E[X] - E[a])(Y + b - E[Y] - E[b])] \\ &= E[(X + a - E[X] - a)(Y + b - E[Y] - b)] \\ &= E[(X - E[X])(Y - E[Y])] \\ &= \operatorname{Cov}(X, Y). \end{aligned} $$ Centering is the special case $a = -E[X]$ and $b = -E[Y]$ , so centering does not affect covariance. Also, since correlation is defined as $$ \operatorname{Corr}(X, Y) = \frac{\operatorname{Cov}(X, Y)}{\sqrt{\operatorname{Var}(X) \operatorname{Var}(Y)}}, $$ we can see that $$ \begin{aligned} \operatorname{Corr}(X + a, Y + b) &= \frac{\operatorname{Cov}(X + a, Y + b)}{\sqrt{\operatorname{Var}(X + a) \operatorname{Var}(Y + b)}} \\ &= \frac{\operatorname{Cov}(X, Y)}{\sqrt{\operatorname{Var}(X) \operatorname{Var}(Y)}}, \end{aligned} $$ so in particular, correlation isn't affected by centering either. That was the population version of the story. The sample version is the same: If we use $$ \widehat{\operatorname{Cov}}(X, Y) = \frac{1}{n} \sum_{i=1}^n \left(X_i - \frac{1}{n}\sum_{j=1}^n X_j\right)\left(Y_i - \frac{1}{n}\sum_{j=1}^n Y_j\right) $$ as our estimate of covariance between $X$ and $Y$ from a paired sample $(X_1,Y_1), \ldots, (X_n,Y_n)$ , then $$ \begin{aligned} \widehat{\operatorname{Cov}}(X + a, Y + b) &= \frac{1}{n} \sum_{i=1}^n \left(X_i + a - \frac{1}{n}\sum_{j=1}^n (X_j + a)\right)\left(Y_i + b - \frac{1}{n}\sum_{j=1}^n (Y_j + b)\right) \\ &= \frac{1}{n} \sum_{i=1}^n \left(X_i + a - \frac{1}{n}\sum_{j=1}^n X_j - \frac{n}{n} a\right)\left(Y_i + b - \frac{1}{n}\sum_{j=1}^n Y_j - \frac{n}{n} b\right) \\ &= \frac{1}{n} \sum_{i=1}^n \left(X_i - \frac{1}{n}\sum_{j=1}^n X_j\right)\left(Y_i - \frac{1}{n}\sum_{j=1}^n Y_j\right) \\ &= \widehat{\operatorname{Cov}}(X, Y) \end{aligned} $$ for any $a$ and $b$ .
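A quick numerical confirmation (my addition, not in the original answer):

set.seed(1)
x <- rnorm(100)
y <- 0.5 * x + rnorm(100)

all.equal(cov(x, y), cov(x - mean(x), y - mean(y)))   # TRUE: centering changes nothing
all.equal(cor(x, y), cor(x + 3, y - 7))               # TRUE for any constant shifts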
{ "source": [ "https://stats.stackexchange.com/questions/391838", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/40888/" ] }
393,256
Several methodological papers (e.g. Egger et al 1997a, 1997b) discuss publication bias as revealed by meta-analyses, using funnel plots such as the one below. The 1997b paper goes on to say that "if publication bias is present, it is expected that, of published studies, the largest ones will report the smallest effects." But why is that? It seems to me that all this would prove is what we already know: small effects are only detectable with large sample sizes ; while saying nothing about the studies that remained unpublished. Also, the cited work claims that asymmetry that is visually assessed in a funnel plot "indicates that there was selective non-publication of smaller trials with less sizeable benefit." But, again, I don't understand how any features of studies that were published can possibly tell us anything (allow us to make inferences) about works that were not published! References Egger, M., Smith, G. D., & Phillips, A. N. (1997). Meta-analysis: principles and procedures . BMJ, 315(7121), 1533-1537. Egger, M., Smith, G. D., Schneider, M., & Minder, C. (1997). Bias in meta-analysis detected by a simple, graphical test . BMJ , 315(7109), 629-634.
The answers here are good, +1 to all. I just wanted to show how this effect might look in funnel plot terms in an extreme case. Below I simulate a small effect as $N(.01, .1)$ and draw samples between 2 and 2000 observations in size. The grey points in the plot would not be published under a strict $p < .05$ regime. The grey line is a regression of effect size on sample size including the "bad p-value" studies, while the red one excludes these. The black line shows the true effect. As you can see, under publication bias there is a strong tendency for small studies to overestimate effect sizes and for the larger ones to report effect sizes closer to the truth. set.seed(20-02-19) n_studies <- 1000 sample_size <- sample(2:2000, n_studies, replace=T) studies <- plyr::aaply(sample_size, 1, function(size) { dat <- rnorm(size, mean = .01, sd = .1) c(effect_size=mean(dat), p_value=t.test(dat)$p.value) }) studies <- cbind(studies, sample_size=log(sample_size)) include <- studies[, "p_value"] < .05 plot(studies[, "sample_size"], studies[, "effect_size"], xlab = "log(sample size)", ylab="effect size", col=ifelse(include, "black", "grey"), pch=20) lines(lowess(x = studies[, "sample_size"], studies[, "effect_size"]), col="grey", lwd=2) lines(lowess(x = studies[include, "sample_size"], studies[include, "effect_size"]), col="red", lwd=2) abline(h=.01) Created on 2019-02-20 by the reprex package (v0.2.1)
{ "source": [ "https://stats.stackexchange.com/questions/393256", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/41307/" ] }
393,269
I've estimated 2 models on simulated data and basically replicate the estimations 100 times. What I want to see is if the models are actually different in terms of their MSE. What I've done is to keep the MSE1 and MSE2 (for model 1 and 2) for each one of the 100 simulations, and I was thinking of comparing the mean MSE (I am not sure if this is actually the correct way of doing it). Anyway I don't think I can apply a t-test due to the fact that the variables MSE1 and MSE2 are not part of a normal population (actually they look like a sort of $\chi^2$ ) because it's truncated ,no negative MSE are allowed. Should I apply a $t$ test (isn't normality of observations an assumption of this test?) or is there any other method I can use to compare the both models? Thanks in advance.
The answers here are good, +1 to all. I just wanted to show how this effect might look in funnel plot terms in an extreme case. Below I simulate a small effect as $N(.01, .1)$ and draw samples between 2 and 2000 observations in size. The grey points in the plot would not be published under a strict $p < .05$ regime. The grey line is a regression of effect size on sample size including the "bad p-value" studies, while the red one excludes these. The black line shows the true effect. As you can see, under publication bias there is a strong tendency for small studies to overestimate effect sizes and for the larger ones to report effect sizes closer to the truth. set.seed(20-02-19) n_studies <- 1000 sample_size <- sample(2:2000, n_studies, replace=T) studies <- plyr::aaply(sample_size, 1, function(size) { dat <- rnorm(size, mean = .01, sd = .1) c(effect_size=mean(dat), p_value=t.test(dat)$p.value) }) studies <- cbind(studies, sample_size=log(sample_size)) include <- studies[, "p_value"] < .05 plot(studies[, "sample_size"], studies[, "effect_size"], xlab = "log(sample size)", ylab="effect size", col=ifelse(include, "black", "grey"), pch=20) lines(lowess(x = studies[, "sample_size"], studies[, "effect_size"]), col="grey", lwd=2) lines(lowess(x = studies[include, "sample_size"], studies[include, "effect_size"]), col="red", lwd=2) abline(h=.01) Created on 2019-02-20 by the reprex package (v0.2.1)
{ "source": [ "https://stats.stackexchange.com/questions/393269", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/224433/" ] }
394,083
It seems that, especially for deep learning, there are dominating very simple methods for optimizing SGD convergence like ADAM - nice overview: http://ruder.io/optimizing-gradient-descent/ They trace only single direction - discarding information about the remaining ones, they do not try to estimate distance from near extremum - which is suggested by gradient evolution ( $\rightarrow 0$ in extremum), and could help with the crucial choice of step size. Both these missed opportunities could be exploited by second order methods - trying to locally model parabola in simultaneously multiple directions (not all, just a few), e.g. near saddle attracting in some directions, repulsing in the others. Here are some: L-BFGS: http://aria42.com/blog/2014/12/understanding-lbfgs TONGA: https://papers.nips.cc/paper/3234-topmoumoute-online-natural-gradient-algorithm K-FAC: https://arxiv.org/pdf/1503.05671.pdf saddle-free Newton: https://arxiv.org/pdf/1406.2572 my second order local parametrization: https://arxiv.org/pdf/1901.11457 But still first order methods dominate (?), I have heard opinions that second order just don't work for deep learning (?) There are mainly 3 challenges (any more?): inverting Hessian , stochasticity of gradients, and handling saddles . All of them should be resolved if locally modelling parametrization as parabolas in a few promising directions (I would like to use): update this parametrization based on calculated gradients, and perform proper step based on this parametrization. This way extrema can be in updated parameters - no Hessian inversion, slow evolution of parametrization allows to accumulate statistical trends from gradients, we can model both curvatures near saddles: correspondingly attract or repulse, with strength depending on modeled distance. Should we go toward second order methods for deep learning? Why is it so difficult to make them more successful than simple first order methods - could we identify these challenges ... resolve them? As there are many ways to realize second order methods, which seems the most promising? Update: Overview of SGD convergence methods including 2nd order: https://www.dropbox.com/s/54v8cwqyp7uvddk/SGD.pdf Update: There are criticized huge 2nd order methods, but we can work on the opposite end of cost spectrum: make tiny steps from successful 1st order methods, like just cheap online parabola model in single direction e.g. of momentum method for smarter choice of step size - are there interesting approaches for such 2nd order enhancement of 1st order methods? Update: 2D example of SGD augmented with 2nd order information from sequence of gradients ( github ):
Should we go toward second order methods for deep learning? TL;DR: No, especially now when the pace of innovation is slowing down, and we're seeing less new architectural innovations, and more ways to train what are basically just copies of existing architectures, on larger datasets (see OpenAI's GPT-2). First , without even getting to second order, it's worth mentioning that in Deep Learning you don't even use (mini-batch) gradient descent to the fullest of its potential (i.e., you don't perform line search), because the optimization step in line search would be very costly. Second , second order methods are: way more complex, i.e., harder to implement without bugs. DL systems are increasingly becoming a small part of huge data processing pipelines. Introducing further complexity and brittleness in a complex system is only wise if the gains largely offset the risks. I'll argue below that they don't. harder to optimize for distributed computing on heterogeneous hardware, which is becoming more and more common. See how much work was required in order to make K-FAC work on distributed ( non heterogeneous) systems, and performances are still no better than the best first-order methods: https://arxiv.org/pdf/1811.12019.pdf . Instead, if just switching to distributed computing makes my first-order method as fast as, or faster, than second-order methods, I don't see the reason to use a more complicated optimization algorithm. way more expensive in terms of iteration cost (not number) and memory occupation, thus they introduce a considerable overhead. Current architectures (GPUs) are more memory-bound that computation-bound. As explained very well here , the increase in iteration cost and memory occupation is steeper, the more high-dimensional the problem is. Optimization in Deep Learning is arguably one of the most high-dimensional optimization problems, so it's not clear that second order methods would have a clear advantage in terms of computational time (not iteration count, which is not what we really care about) wrt first-order methods. another issue with Deep Learning optimization are saddle points. It's becoming abundantly clear that "bad" local minima are not an issue in Deep Learning, but saddle points are. Newton's method does have a tendency to be attracted to saddle points . If I remember correctly, Hessian approximating methods such as K-FAC don't have this issue, but I think the proof depends on the type of architecture, making the use of such methods brittle. they don't fix the problems which make practitioners waste most of their time. Dead or saturated units are not solved by K-FAC, but by better initialization schemes, so that's what we should focus on, e.g., Fixup: https://arxiv.org/abs/1901.09321 another issue with second order methods is that for most common loss functions, it's easy to use mini-batches to get an estimator which converges to the actual gradient. It is much more complicated to build a sampling-based estimator for the approximation to the inverse of the Hessian. In other words, second order methods introduce a lot of complexity and extra memory occupation, but stochastic second order methods introduce even more complexity. Contrast that with stochastic first order methods, where the algorithm is just slightly more complicated than that of deterministic first order methods. finally, they have a lot of moving parts, which are difficult to tune up in the best way. Your same paper leaves a lot of details to be specified. 
Do we need even more extra hyperparameters, or do we need robust optimization methods? Keep in mind that in Deep Learning, as explained very well by Shai Shalev-Shwartz, when something goes wrong, it's very difficult to understand how to fix it https://www.youtube.com/watch?v=1nvf_DBnsxo and more hyperparameters don't help in that respect.
{ "source": [ "https://stats.stackexchange.com/questions/394083", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/207118/" ] }
394,118
A human child at age 2 needs around 5 instances of a car to be able to identify it with reasonable accuracy regardless of color, make, etc. When my son was 2, he was able to identify trams and trains, even though he had seen just a few. Since he was usually confusing one with each other, apparently his neural network was not trained enough, but still. What is it that artificial neural networks are missing that prevent them from being able to learn way quicker? Is transfer learning an answer?
I caution against expecting strong resemblance between biological and artificial neural networks. I think the name "neural networks" is a bit dangerous, because it tricks people into expecting that neurological processes and machine learning should be the same. The differences between biological and artificial neural networks outweigh the similarities. As an example of how this can go awry, you can also turn the reasoning in the original post on its head. You can train a neural network to learn to recognize cars in an afternoon, provided you have a reasonably fast computer and some amount of training data. You can make this a binary task (car/not car) or a multi-class task (car/tram/bike/airplane/boat) and still be confident in a high level of success. By contrast, I wouldn't expect a child to be able to pick out a car the day - or even the week - after it's born, even after it has seen "so many training examples." Something is obviously different between a two-year-old and an infant that accounts for the difference in learning ability, whereas a vanilla image classification neural network is perfectly capable of picking up object classification immediately after "birth." I think that there are two important differences: (1) the relative volumes of training data available and (2) a self-teaching mechanism that develops over time because of abundant training data. The original post exposes two questions. The title and body of the question ask why neural networks need "so many examples." Relative to a child's experience, neural networks trained using common image benchmarks have comparatively little data. I will re-phrases the question in the title to "How does training a neural network for a common image benchmark compare & contrast to the learning experience of a child?" For the sake of comparison I'll consider the CIFAR-10 data because it is a common image benchmark. The labeled portion is composed of 10 classes of images with 6000 images per class. Each image is 32x32 pixels. If you somehow stacked the labeled images from CIFAR-10 and made a standard 48 fps video, you'd have about 20 minutes of footage. A child of 2 years who observes the world for 12 hours daily has roughly 263000 minutes (more than 4000 hours) of direct observations of the world, including feedback from adults (labels). (These are just ballpark figures -- I don't know how many minutes a typical two-year-old has spent observing the world.) Moreover, the child will have exposure to many, many objects beyond the 10 classes that comprise CIFAR-10. So there are a few things at play. One is that the child has exposure to more data overall and a more diverse source of data than the CIFAR-10 model has. Data diversity and data volume are well-recognized as pre-requisites for robust models in general. In this light, it doesn't seem surprising that a neural network is worse at this task than the child, because a neural network trained on CIFAR-10 is positively starved for training data compared to the two-year-old. The image resolution available to a child is better than the 32x32 CIFAR-10 images, so the child is able to learn information about the fine details of objects. The CIFAR-10 to two-year-old comparison is not perfect because the CIFAR-10 model will likely be trained with multiple passes over the same static images, while the child will see, using binocular vision, how objects are arranged in a three-dimensional world while moving about and with different lighting conditions and perspectives on the same objects. 
The anecdote about OP's child implies a second question, "How can neural networks become self-teaching?" A child is endowed with some talent for self-teaching, so that new categories of objects can be added over time without having to start over from scratch. OP's remark about transfer-learning names one kind of model adaptation in the machine learning context. In comments, other users have pointed out that one- and few-shot learning* is another machine learning research area. Additionally, reinforcement-learning addresses self-teaching models from a different perspective, essentially allowing robots to undertake trial-and-error experimentation to find optimal strategies for solving specific problems (e.g. playing chess). It's probably true that all three of these machine learning paradigms are germane to improving how machines adapt to new computer vision tasks. Quickly adapting machine learning models to new tasks is an active area of research. However, because the practical goals of these projects (identify new instances of malware, recognize imposters in passport photos, index the internet) and criteria for success differ from the goals of a child learning about the world, and the fact that one is done in a computer using math and the other is done in organic material using chemistry, direct comparisons between the two will remain fraught. As an aside, it would be interesting to study how to flip the CIFAR-10 problem around and train a neural network to recognize 6000 objects from 10 examples of each. But even this wouldn't be a fair comparison to a 2-year-old, because there would still be a large discrepancy in the total volume, diversity and resolution of the training data. *We don't presently have tags for one-shot learning or few-shot learning.
{ "source": [ "https://stats.stackexchange.com/questions/394118", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/107213/" ] }
394,206
I have a data set for a variable, for which I have run some unit-root tests:
ADF (constant, without trend): t-stat = -1.0816, p-val = 0.7218 - DNR
ADF (constant & trend): t-stat = -4.5203, p-val = 0.0021 - REJECT @ 5% level
PP (constant, without trend): t-stat = -1.3507, p-val = 0.6044 - DNR
PP (constant & trend): t-stat = -3.6030, p-val = 0.0334 - REJECT @ 5% level
(Note I have also run all of the above tests with the first-differenced values, and they ALL reject the null of the presence of a unit root.) As I understand it, I fail to reject the null on both the ADF & PP tests with ONLY a constant (no trend), but I reject the null for both tests when a trend is included. What should I conclude about the data series - is this indicative of any statistical property I may be ignoring or oblivious to? What does this indicate about the presence of a unit root in the series, given the contradictory results of the tests? EDIT: I have just run into the opposite problem on another data series: ADF & PP (no trend) reject the null, but ADF & PP WITH trend do not reject the null. What would this case mean? I feel like I'm missing something - the alternative hypotheses? Thanks
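(For concreteness, here is a minimal sketch - not the OP's code, and purely illustrative - of how the ADF variants above might be specified in Python with statsmodels; y stands in for the series, and the PP test would require a separate package.)

from statsmodels.tsa.stattools import adfuller

adf_const = adfuller(y, regression='c')    # constant only, no trend
adf_trend = adfuller(y, regression='ct')   # constant plus linear trend
print('ADF (const): stat=%.4f, p=%.4f' % adf_const[:2])
print('ADF (trend): stat=%.4f, p=%.4f' % adf_trend[:2])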
{ "source": [ "https://stats.stackexchange.com/questions/394206", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/229369/" ] }
394,648
While studying Bayesian statistics, I am having trouble understanding the difference between the prior distribution and the prior predictive distribution. The prior distribution is reasonably clear to me, but I find it hard to see what the prior predictive distribution is used for and how it differs from the prior distribution.
Predictive here means predictive for observations. The prior distribution is a distribution for the parameters whereas the prior predictive distribution is a distribution for the observations. If $X$ denotes the observations and we use the model (or likelihood) $p(x \mid \theta)$ for $\theta \in \Theta$ , then a prior distribution is a distribution for $\theta$ , for example $p_\beta(\theta)$ where $\beta$ is a set of hyperparameters. Note that there's no conditioning on $\beta$ , and therefore the hyperparameters are considered fixed; this is not the case in hierarchical models, but that is not the point here. The prior predictive distribution is the distribution of $X$ "averaged" over all possible values of $\theta$ : \begin{align*} p_\beta(x) &= \int_\Theta p(x , \theta) d\theta \\ &= \int_\Theta p(x \mid \theta) p_\beta(\theta) d\theta \end{align*} This distribution is prior as it does not rely on any observation. We can also define in the same way the posterior predictive distribution , that is, if we have a sample $X = (X_1, \dots, X_n)$ , the posterior predictive distribution is: \begin{align*} p_\beta(x \mid X) &= \int_\Theta p(x ,\theta \mid X) d\theta \\ &= \int_\Theta p(x \mid \theta,X) p_\beta(\theta \mid X)d\theta \\ &= \int_\Theta p(x \mid \theta) p_\beta(\theta \mid X)d\theta. \end{align*} The last line is based on the assumption that the upcoming observation is independent of $X$ given $\theta$ . Thus the posterior predictive distribution is constructed the same way as the prior predictive distribution, but while in the latter we weight with $p_\beta(\theta)$ , in the former we weight with $p_\beta(\theta \mid X)$ , that is, with our "updated" knowledge about $\theta$ . Example : Beta-Binomial. Suppose our model is $X \mid \theta \sim {\rm Bin}(n,\theta)$ , i.e. $P(X = x \mid \theta) = \binom{n}{x}\theta^x(1-\theta)^{n-x}$ . Here $\Theta = [0,1]$ . We also assume a beta prior distribution for $\theta$ , $\beta(a,b)$ , where $(a,b)$ is the set of hyperparameters. The prior predictive distribution , $p_{a,b}(x)$ , is the beta-binomial distribution with parameters $(n,a,b)$ . This discrete distribution gives the probability of getting $k$ successes out of $n$ trials given the hyperparameters $(a,b)$ on the probability of success. Now suppose we observe $n_1$ Bernoulli draws $(x_1, \dots, x_{n_1})$ with $m$ successes in total. Since the binomial and beta distributions are conjugate distributions we have: \begin{align*} p(\theta \mid X=m) &\propto \theta^m (1 - \theta)^{n_1-m} \times \theta^{a-1}(1-\theta)^{b-1}\\ &\propto \theta^{a+m-1}(1-\theta)^{n_1+b-m-1}, \end{align*} which is the kernel of a $\beta(a+m,\,n_1+b-m)$ density. Thus $\theta \mid X$ follows a beta distribution with parameters $(a+m,n_1+b-m)$ . Then, $p_{a,b}(x \mid X = m)$ is also a beta-binomial distribution, but this time with parameters $(n_2,a+m,b+n_1-m)$ rather than $(n_2,a,b)$ . Under a $\beta(a,b)$ prior distribution and a binomial likelihood, if we observe $m$ successes out of $n_1$ trials, the posterior predictive distribution is a beta-binomial with parameters $(n_2,a+m,b+n_1-m)$ . Note that $n_2$ and $n_1$ play different roles here, since the posterior predictive distribution is about: Given my current knowledge of $\theta$ after observing $m$ successes out of $n_1$ trials, i.e. $\beta(a+m,\,n_1+b-m)$ , what probability do I have of observing $k$ successes out of $n_2$ additional trials? I hope this is useful and clear.
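To make the Beta-Binomial example concrete, here is a minimal sketch in Python (it assumes SciPy >= 1.4 for betabinom; the hyperparameters and data are made-up values, not anything from the question):

from scipy.stats import betabinom

a, b = 2, 2           # hypothetical prior hyperparameters
n1, m = 10, 7         # observed: m = 7 successes out of n1 = 10 trials
n2 = 5                # number of additional trials we want to predict

prior_pred = betabinom(n2, a, b)                 # Beta-Binomial(n2, a, b)
post_pred  = betabinom(n2, a + m, b + n1 - m)    # Beta-Binomial(n2, a+m, b+n1-m)

print([round(prior_pred.pmf(k), 3) for k in range(n2 + 1)])   # prior predictive pmf
print([round(post_pred.pmf(k), 3) for k in range(n2 + 1)])    # posterior predictive pmf, shifted toward more successes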
{ "source": [ "https://stats.stackexchange.com/questions/394648", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/239221/" ] }
395,697
I'm learning XGBoost. The following is the code I used, and below that are trees #0 and #1 in the XGBoost model I built. I'm having a hard time understanding the meanings of the leaf values. Some answer I found indicates that the values are "Conditional Probabilities" for a data sample to be on that leaf. But I also found negative values on some leaves. How can a probability be negative? Can someone provide an intuitive explanation for the leaf values?

# prepare dataset
import numpy as np
import pandas as pd

train_set = pd.read_csv('http://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data', header = None)
test_set = pd.read_csv('http://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.test', skiprows = 1, header = None)  # Make sure to skip a row for the test set

# since the downloaded data has no header, I need to add the headers manually
col_labels = ['age', 'workclass', 'fnlwgt', 'education', 'education_num', 'marital_status', 'occupation', 'relationship', 'race', 'sex', 'capital_gain', 'capital_loss', 'hours_per_week', 'native_country', 'wage_class']
train_set.columns = col_labels
test_set.columns = col_labels

# 1. replace ' ?' with nan
# 2. drop all nan
train_noNan = train_set.replace(' ?', np.nan).dropna()
test_noNan = test_set.replace(' ?', np.nan).dropna()

# replace ' <=50K.' with ' <=50K', and ' >50K.' with ' >50K' in wage_class
test_noNan['wage_class'] = test_noNan.wage_class.replace({' <=50K.' : ' <=50K', ' >50K.' : ' >50K'})

# encode training and test dataset together
combined_set = pd.concat([train_noNan, test_noNan], axis=0)
for feature in combined_set.columns:
    # categorical feature columns will have dtype = object
    if combined_set[feature].dtype == 'object':
        # replace string with integer; this simply counts the # of unique values in a column and maps it to an integer
        combined_set[feature] = pd.Categorical(combined_set[feature]).codes
combined_set.head()

# separate train and test
final_train = combined_set[:train_noNan.shape[0]]
final_test = combined_set[train_noNan.shape[0]:]

# separate feature and label
y_train = final_train.pop('wage_class')
y_test = final_test.pop('wage_class')

import xgboost as xgb
from xgboost import plot_tree
from sklearn.model_selection import GridSearchCV

# XGBoost has built-in CV, which can use early-stopping to prevent overfitting and therefore improve accuracy
# if not using sklearn, I can convert the data into DMatrix, an XGBoost-specific data structure for training and testing.
# It is said DMatrix can improve the efficiency of the algorithm
xgdmat = xgb.DMatrix(final_train, y_train)

our_params = {'eta' : 0.1,                # aka. learning_rate
              'seed' : 0,
              'subsample' : 0.8,
              'colsample_bytree': 0.8,
              'objective' : 'binary:logistic',
              'max_depth' : 3,            # how many features to use before reaching a leaf
              'min_child_weight': 1}      # Grid Search CV optimized settings

# create XGBoost object using the parameters
final_gb = xgb.train(our_params, xgdmat, num_boost_round = 432)

import seaborn as sns
sns.set(font_scale = 1.5)
xgb.plot_importance(final_gb)
# after printing the importance of the features, we need to apply human insight and try to explain why each feature is important/not important

# visualize the tree
# import matplotlib.pyplot as plt
# xgb.plot_tree(final_gb, num_trees = 0)
# plt.rcParams['figure.figsize'] = [600, 300]  # define the figure size...
# plt.show()

graph_to_save = xgb.to_graphviz(final_gb, num_trees = 0)
graph_to_save.format = 'png'
graph_to_save.render('tree_0_saved')  # a tree_0_saved.png will be saved in the root directory

graph_to_save = xgb.to_graphviz(final_gb, num_trees = 1)
graph_to_save.format = 'png'
graph_to_save.render('tree_1_saved')

Below are the dumped trees #0 and #1.
A gradient boosting machine (GBM), like XGBoost, is an ensemble learning technique where the results of each base-learner are combined to generate the final estimate. That said, when performing a binary classification task, by default XGBoost treats it as a logistic regression problem. As such, the raw leaf estimates seen here are log-odds and can be negative. Refresher : Within the context of logistic regression, the mean of the binary response is of the form $\mu(X) = Pr(Y = 1|X)$ and relates to the predictors $X_1, ..., X_p$ through the logit function: $\log \left( \frac{\mu(X)}{1-\mu(X)} \right) = \beta_0 + \beta_1 X_1 + ... + \beta_p X_p$ . As a consequence, to get probability estimates we need to use the inverse logit (i.e. the logistic) link $\frac{1}{1 +e^{-(\beta_0 + \beta_1 X_1 + ... + \beta_p X_p)}}$ . In addition to that, we need to remember that boosting can be presented as a generalised additive model (GAM). In the case of a simple GAM our final estimates are of the form: $g[\mu(X)] = \alpha + f_1(X_1) + ... + f_p(X_p)$ , where $g$ is our link function and $f$ is a set of elementary basis functions (usually cubic splines). When boosting, though, we change $f$ and instead of some particular basis function family, we use the individual base-learners we mentioned originally! (See Hastie et al. 2009, Elements of Statistical Learning Chapt. 4.4 " Logistic Regression " and Chapt. 10.2 " Boosting Fits an Additive Model " for more details.) In the case of a GBM, therefore, the results from each individual tree are indeed combined together, but they are not probabilities (yet); rather, they are estimates of the score before performing the logistic transformation done when performing logistic regression. For that reason the individual as well as the combined estimates shown can naturally be negative; the negative sign simply implies "less" chance. OK, talk is cheap, show me the code . Let's assume we have only two base-learners, that are simple stumps:

our_params = {'eta' : 0.1,                # aka. learning_rate
              'seed' : 0,
              'subsample' : 0.8,
              'colsample_bytree': 0.8,
              'objective' : 'binary:logistic',
              'max_depth' : 1,            # Stumps
              'min_child_weight': 1}

# create XGBoost object using the parameters
final_gb = xgb.train(our_params, xgdmat, num_boost_round = 2)

And that we aim to predict the first four entries of our test-set.

xgdmat4 = xgb.DMatrix(final_test.iloc[0:4,:], y_test[0:4])
mypreds4 = final_gb.predict(data = xgdmat4)
# array([0.43447325, 0.46945405, 0.46945405, 0.5424156 ], dtype=float32)

Plotting the two (sole) trees used:

graph_to_save = xgb.to_graphviz(final_gb, num_trees = 0)
graph_to_save.format = 'png'
graph_to_save.render('tree_0_saved')

graph_to_save = xgb.to_graphviz(final_gb, num_trees = 1)
graph_to_save.format = 'png'
graph_to_save.render('tree_1_saved')

gives us the following two tree diagrams. Based on these diagrams, we can check against our initial sample:

final_test.iloc[0:4,:][['capital_gain','relationship']]
#    capital_gain  relationship
# 0             0             3
# 1             0             0
# 2             0             0
# 3          7688             0

We can directly calculate our own estimates manually based on the logistic function:

1/(1+ np.exp(-(-0.115036212 + -0.148587108)))   # First entry
# 0.4344732254087043
1/(1+ np.exp(-(-0.115036212 + -0.007299904)))   # Second entry
# 0.4694540577007751
1/(1+ np.exp(-(-0.115036212 + -0.007299904)))   # Third entry
# 0.4694540577007751
1/(1+ np.exp(-(+0.177371055 + -0.007299904)))   # Fourth entry
# 0.5424156005710725

It can be easily seen that our manual estimates match (up to 7 digits) the ones we got directly from predict . So to recap, the leaves contain the estimates from their respective base-learner on the domain of the function where the gradient boosting procedure takes place. For the presented binary classification task, the link used is the logit, so these estimates represent log-odds; in terms of log-odds, negative values are perfectly normal. To get probability estimates we simply use the logistic function, which is the inverse of the logit function. Finally, please note that we need to first compute our final estimate in the gradient-boosting domain and then transform it back. Transforming the output of each base-learner individually and then combining these outputs is wrong because the linearity relation shown does not (necessarily) hold in the domain of the response variable. For more information about the logit I would suggest reading the excellent CV.SE thread on Interpretation of simple predictions to odds ratios in logistic regression .
{ "source": [ "https://stats.stackexchange.com/questions/395697", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/239473/" ] }
396,815
This is the definition of a statistic on Wikipedia: More formally, statistical theory defines a statistic as a function of a sample where the function itself is independent of the sample's distribution; that is, the function can be stated before realization of the data. The term statistic is used both for the function and for the value of the function on a given sample. I think I understand most of this definition; however, I haven't been able to sort out the part where the function is independent of the sample's distribution. My understanding of a statistic so far: A sample is a set of realizations of some number of independent, identically distributed (iid) random variables with distribution F (10 realizations of a roll of a 20-sided fair die, 100 realizations of 5 rolls of a 6-sided fair die, randomly drawing 100 people from a population). A function, whose domain is that set, and whose range is the real numbers (or maybe it can produce other things, like a vector or other mathematical object...) would be considered a statistic . When I think of examples, mean, median, variance all make sense in this context. They are functions on a set of realizations (blood pressure measurements from a random sample). I can also see how a linear regression model could be considered a statistic $y_{i} = \alpha + \beta \cdot x_{i}$ - is this not just a function on a set of realizations? Where I'm confused: Assuming that my understanding from above is correct, I haven't been able to understand where a function might not be independent of the sample's distribution. I've been trying to think of an example to make sense of it, but no luck. Any insight would be much appreciated!
That definition is a somewhat awkward way to state it. A "statistic" is any function of the observable values. All that definition means is that a statistic is a function only of the observable values, not a function of the distribution or any of its parameters. For example, if $X_1, X_2, ..., X_n \sim \text{N}(\mu, 1)$ then a statistic would be any function $T(X_1,...,X_n)$ whereas a function $H(X_1,....,X_n, \mu)$ would not be a statistic, since it depends on $\mu$ . Here are some further examples: $$\begin{equation} \begin{aligned} \text{Statistic} & & & & & \bar{X}_n = \frac{1}{n} \sum_{i=1}^n X_i, \\[12pt] \text{Statistic} & & & & & S_n^2 = \frac{1}{n} \sum_{i=1}^n (X_i - \bar{X}_n)^2, \\[12pt] \text{Not a statistic} & & & & & D_n = \bar{X}_n - \mu, \\[12pt] \text{Not a statistic} & & & & & p_i = \text{N}(x_i | \mu, 1), \\[12pt] \text{Not a statistic} & & & & & Q = 10 \mu. \\[12pt] \end{aligned} \end{equation}$$ Every statistic is a function only of the observable values, and not of their distribution or its parameters. So there are no examples of a statistic that is a function of the distribution or its parameters (any such function would not be a statistic). However, it is important to note that the distribution of a statistic (as opposed to the statistic itself) will generally depend on the underlying distribution of the values. (This is true for all statistics other than ancillary statistics .) What about a function where the parameters are known? In the comments below, Alecos asks an excellent follow-up question. What about a function that uses a fixed hypothesised value of the parameter? For example, what about the statistic $\sqrt{n} (\bar{x} - \mu)$ where $\mu = \mu_0$ is taken to be equal to a known hypothesised value $\mu_0 \in \mathbb{R}$ . Here the function is indeed a statistic, so long as it is defined on the appropriately restricted domain. So the function $H_0: \mathbb{R}^n \rightarrow \mathbb{R}$ with $H_0(x_1,...,x_n) = \sqrt{n} (\bar{x} - \mu_0)$ would be a statistic, but the function $H: \mathbb{R}^{n+1} \rightarrow \mathbb{R}$ with $H(x_1,...,x_n, \mu) = \sqrt{n} (\bar{x} - \mu)$ would not be a statistic.
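As a small code illustration of that distinction (with made-up data): a statistic may only use the observed values, never the unknown parameter.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=1.0, size=20)   # a sample from N(mu = 5, 1)

T = x.mean()                         # a statistic: a function of the data only
S2 = ((x - x.mean()) ** 2).mean()    # also a statistic

mu = 5.0                             # the (normally unknown) parameter
D = x.mean() - mu                    # NOT a statistic: it needs mu as an input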
{ "source": [ "https://stats.stackexchange.com/questions/396815", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/176104/" ] }
397,258
When I read about how to set up your data, one thing I have often come across is that transforming some continuous data into categorical data is not a good idea, since you may very well draw the wrong conclusion if the thresholds are poorly determined. However, I currently have some data (PSA values for prostate cancer patients) where I think the common consensus is that if you are below 4 you probably don't have it, if you are above you are at risk, and then with something like above 10 or 20, you probably have it. Something like that. In that case, would it still be incorrect to categorize my continuous PSA values into groups of, let's say, 0-4, 4-10, and >10? Or is it actually okay since the thresholds are "well determined", so to speak?
Is there a sharp discontinuity at your thresholds? For instance, suppose you have two patients A and B with values 3.9 and 4.1, and another two patients C and D with values 6.7 and 6.9. Is the difference in the likelihood for cancer between A and B much larger than the corresponding difference between C and D? If yes, then discretizing makes sense. If not, then your thresholds may make sense in understanding your data, but they are not "well determined" in a statistically meaningful sense. Don't discretize. Instead, use your test scores "as-is", and if you suspect some kind of nonlinearity, use splines . This is very much recommended.
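(In case it helps, here is a minimal sketch of what "use splines" could look like; the data frame `dat` and its columns `cancer` and `psa` are hypothetical names, not from the question.)

import statsmodels.formula.api as smf

# model the PSA value as a smooth nonlinear term instead of cutting it into groups
model = smf.logit("cancer ~ bs(psa, df=4)", data=dat).fit()
print(model.summary())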
{ "source": [ "https://stats.stackexchange.com/questions/397258", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/117518/" ] }
398,273
I have a numeric variable which turns out not to be significant in a multivariate logistic regression model. However, when I categorize it into groups, it suddenly becomes significant. This is very counter-intuitive to me: when categorizing a variable, we give some information up. How can this be?
One possible explanation would be nonlinearities in the relationship between your outcome and the predictor. Here is a little example. We use a predictor that is uniform on $[-1,1]$ . The outcome, however, does not linearly depend on the predictor, but on the square of the predictor: TRUE is more likely for both $x\approx-1$ and $x\approx 1$ , but less likely for $x\approx 0$ . In this case, a linear model will come up insignificant, but cutting the predictor into intervals makes it significant.

> set.seed(1)
> nn <- 1e3
> xx <- runif(nn,-1,1)
> yy <- runif(nn)<1/(1+exp(-xx^2))
> 
> library(lmtest)
> 
> model_0 <- glm(yy~1,family="binomial")
> model_1 <- glm(yy~xx,family="binomial")
> lrtest(model_1,model_0)
Likelihood ratio test

Model 1: yy ~ xx
Model 2: yy ~ 1
  #Df  LogLik Df  Chisq Pr(>Chisq)
1   2 -676.72                     
2   1 -677.22 -1 0.9914     0.3194
> 
> xx_cut <- cut(xx,c(-1,-0.3,0.3,1))
> model_2 <- glm(yy~xx_cut,family="binomial")
> lrtest(model_2,model_0)
Likelihood ratio test

Model 1: yy ~ xx_cut
Model 2: yy ~ 1
  #Df  LogLik Df  Chisq Pr(>Chisq)  
1   3 -673.65                       
2   1 -677.22 -2 7.1362    0.02821 *
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

However, this does not mean that discretizing the predictor is the best approach. (It almost never is.) Much better to model the nonlinearity using splines or similar.
{ "source": [ "https://stats.stackexchange.com/questions/398273", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/204952/" ] }
398,286
I want to do a (multinomial) logistic regression to predict 5 different physical activity classes based on different variables extracted for each subject. However, I have one variable (i.e., time after disease onset) in my dataset with missing values for some of the subjects, i.e. for the healthy control subjects. These missing values are thus not missing at random, but simply don't make sense for healthy controls. How would you deal with this variable? I could remove this variable completely from the model, but it does have strong predictive power. Or would you simply set it to an arbitrary value, e.g. -100?
{ "source": [ "https://stats.stackexchange.com/questions/398286", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/90215/" ] }
398,646
The title of the Comment in Nature Scientists rise up against statistical significance begins with: Valentin Amrhein, Sander Greenland, Blake McShane and more than 800 signatories call for an end to hyped claims and the dismissal of possibly crucial effects. and later contains statements like: Again, we are not advocating a ban on P values, confidence intervals or other statistical measures — only that we should not treat them categorically. This includes dichotomization as statistically significant or not, as well as categorization based on other statistical measures such as Bayes factors. I think I can grasp that the image below does not say that the two studies disagree because one "rules out" no effect while the other does not. But the article seems to go into much more depth than I can understand. Towards the end there seems to be a summary in four points. Is it possible to summarize these in even simpler terms for those of us who read statistics rather than write it? When talking about compatibility intervals, bear in mind four things. First, just because the interval gives the values most compatible with the data, given the assumptions, it doesn’t mean values outside it are incompatible; they are just less compatible... Second, not all values inside are equally compatible with the data, given the assumptions... Third, like the 0.05 threshold from which it came, the default 95% used to compute intervals is itself an arbitrary convention... Last, and most important of all, be humble: compatibility assessments hinge on the correctness of the statistical assumptions used to compute the interval...
The first three points, as far as I can tell, are a variation on a single argument. Scientists often treat uncertainty measurements ( $12 \pm 1 $ , for instance) as probability distributions that look like this: When actually, they are much more likely to look like this: As a former chemist, I can confirm that many scientists with non-mathematical backgrounds (primarily non-physical chemists and biologists) don't really understand how uncertainty (or error, as they call it) is supposed to work. They recall a time in undergrad physics where they maybe had to use them, possibly even having to calculate a compound error through several different measurements, but they never really understood them. I too was guilty of this, and assumed all measurements had to come within the $\pm$ interval. Only recently (and outside academia) did I find out that error measurements usually refer to a certain standard deviation, not an absolute limit. So to break down the numbered points in the article:
1. Measurements outside the CI still have a chance of happening, because the real (likely gaussian) probability is non-zero there (or anywhere for that matter, although they become vanishingly small when you get far out). If the values after the $\pm$ do indeed represent one s.d., then there is still a 32% chance of a data point falling outside of them.
2. The distribution is not uniform (flat topped, as in the first graph), it is peaked. You are more likely to get a value in the middle than you are at the edges. It's like rolling a bunch of dice, rather than a single die.
3. 95% is an arbitrary cutoff, and coincides almost exactly with two standard deviations.
4. This point is more of a comment on academic honesty in general. A realisation I had during my PhD is that science isn't some abstract force, it is the cumulative efforts of people attempting to do science. These are people who are trying to discover new things about the universe, but at the same time are also trying to keep their kids fed and keep their jobs, which unfortunately in modern times means some form of publish or perish is at play. In reality, scientists depend on discoveries that are both true and interesting , because uninteresting results don't result in publications. Arbitrary thresholds such as $p < 0.05$ can often be self-perpetuating, especially among those who don't fully understand statistics and just need a pass/fail stamp on their results. As such, people do sometimes half-jokingly talk about 'running the test again until you get $p < 0.05$ '. It can be very tempting, especially if a Ph.D/grant/employment is riding on the outcome, for these marginal results to be jiggled around until the desired $p = 0.0498$ shows up in the analysis. Such practices can be detrimental to the science as a whole, especially if it is done widely, all in the pursuit of a number which is, in the eyes of nature, meaningless. This part in effect is exhorting scientists to be honest about their data and work, even when that honesty is to their detriment.
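(A quick numerical check of the "32% chance" figure above, assuming the uncertainty really is one standard deviation of a normal distribution.)

from scipy.stats import norm

p_outside = 2 * norm.cdf(-1)   # P(|Z| > 1) for a standard normal
print(round(p_outside, 3))     # ~0.317, i.e. roughly a 32% chance of falling outside +/- one s.d.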
{ "source": [ "https://stats.stackexchange.com/questions/398646", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/163067/" ] }
398,649
I am looking for a statistical method to calculate the conditional hazard before the diagnosis or study has started; in other words, time is not a variable. For example, here is a hypothetical graph, where:
- x is a continuous predictor/covariate, e.g. humidity
- damage is analogous to death, e.g. a banana going completely brown
- h(damage|x) is the instantaneous hazard rate given that $x$ is at a specific value, e.g. the hazard rate that a banana goes fully brown given that humidity = 6 g/m^3
Here is a motivating image. So, the goal is to statistically quantify the force that humidity exerts on the banana to make it turn brown. A specific level might make the banana go immediately brown (instant death) and some other level has no effect or even a preserving effect. Once again, this is to be determined before we begin the survival study (in time) for one or many bananas. This is very important. All my online searches take me to the Cox proportional hazards model, which is not what I am looking for. One promising formula is the hazard ratio from probability as shown in this wiki $$HR=\frac{p}{1-p}$$ However, $p$ is left undefined. Is it some kind of conditional probability? Does it apply to models other than Cox's..? I don't know, but I like the formula since it gives me hope that I can use basic probability methods like maximum likelihood, some parametric model, etc. Edit: I found a formula in a paper which they also called an odds ratio, and this makes more sense $$O(t)=\frac{F(t)}{S(t)}$$ where $F(t)$ and $S(t)$ are the cumulative and survival functions respectively. Also, I believe at time zero there is no conditioning; hence, the hazard reduces to the probability of dying in the first $\delta t$ . Another thought is to let $HR$ itself be a random variable, take enough measurements (not sure if it is possible to measure an HR observation) to fit a distribution, then derive the continuous conditional hazard at each level. Any help would be greatly appreciated.
{ "source": [ "https://stats.stackexchange.com/questions/398649", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/87106/" ] }
399,094
Is there a conceptual or procedural difference between fitting a model to data and fitting data to model? An example of the first wording can be seen in https://courses.washington.edu/matlab1/ModelFitting.html , and of the second in https://reference.wolfram.com/applications/eda/FittingDataToLinearModelsByLeast-SquaresTechniques.html .
Pretty much every source or person I've ever interacted with except the Wolfram source you linked refers to the process as fitting a model to data . This makes sense, since the model is the dynamic object and the data is static (a.k.a. fixed and constant). To put a point on it, I like Larry Wasserman's approach to this. In his telling, a statistical model is a collection of distributions. For example, the collection of all normal distributions: $$ \{ \text{Normal}(\mu, \sigma) : \mu, \sigma \in R, \sigma > 0 \} $$ or the set of all Poisson distributions: $$ \{ \text{Poisson}(\lambda) : \lambda \in R, \lambda > 0 \} $$ Fitting a distribution to data is any algorithm that combines a statistical model with a set of data (the data is fixed), and chooses exactly one of the distributions from the model as the one that "best" reflects the data. The model is the thing that changes (sort of): we are collapsing it from an entire collection of possibilities into a single best choice. The data is just the data; nothing happens to it at all.
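(A minimal sketch of that "collapsing" step with simulated data: the data stay fixed while we pick exactly one member of the normal model, here by maximum likelihood.)

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
data = rng.normal(loc=2.0, scale=3.0, size=500)   # the data: fixed and constant

mu_hat, sigma_hat = norm.fit(data)   # choose a single Normal(mu, sigma) from the collection
print(round(mu_hat, 2), round(sigma_hat, 2))      # close to the generating values 2 and 3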
{ "source": [ "https://stats.stackexchange.com/questions/399094", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/100939/" ] }
401,480
I thought of this problem in the shower; it was inspired by investment strategies. Let's say there was a magic money tree. Every day, you can offer an amount of money to the money tree and it will either triple it, or destroy it, with 50/50 probability. You immediately notice that on average you will gain money by doing this and are eager to take advantage of the money tree. However, if you offered all your money at once, you would have a 50% chance of losing all your money. Unacceptable! You are a pretty risk-averse person, so you decide to come up with a strategy. You want to minimize the odds of losing everything, but you also want to make as much money as you can! You come up with the following: every day, you offer 20% of your current capital to the money tree. Assuming the lowest you can offer is 1 cent, it would take a 31-loss streak to lose all your money if you started with 10 dollars. What's more, the more cash you earn, the longer the losing streak needs to be for you to lose everything - amazing! You quickly start earning loads of cash. But then an idea pops into your head: you can just offer 30% each day and make way more money! But wait, why not offer 35%? 50%? One day, with big dollar signs in your eyes, you run up to the money tree with all your millions and offer up 100% of your cash, which the money tree promptly burns. The next day you get a job at McDonalds. Is there an optimal percentage of your cash you can offer without losing it all? (Sub)questions:
- If there is an optimal percentage you should offer, is this static (i.e. 20% every day) or should the percentage grow as your capital increases?
- By offering 20% every day, do the odds of losing all your money decrease or increase over time?
- Is there a percentage of money from which the odds of losing all your money increase over time?
This is a well-known problem. It is called a Kelly bet. The answer, by the way, is 1/3rd. It is equivalent to maximizing the log utility of wealth. Kelly began with taking time to infinity and then solving backward. Since you can always express returns in terms of continuous compounding, you can also reverse the process and express it in logs. I am going to use the log utility explanation, but the log utility is a convenience. If you are maximizing wealth as $n\to\infty$ then you will end up with a function that works out to be the same as log utility. If $b$ is the payout odds, $p$ is the probability of winning, and $X$ is the percentage of wealth invested, then the following derivation will work. For a binary bet over a single period, with unit initial wealth, the expected log of end-of-period wealth $W$ is $E[\log(W)]=p\log(1+bX)+(1-p)\log(1-X)$ . $$\frac{d}{dX}{E[\log(W)]}=\frac{d}{dX}[p\log(1+bX)+(1-p)\log(1-X)]$$ $$=\frac{pb}{1+bX}-\frac{1-p}{1-X}$$ Setting the derivative to zero to find the extrema, $$\frac{pb}{1+bX}-\frac{1-p}{1-X}=0$$ Cross multiplying, you end up with $$pb(1-X)-(1-p)(1+bX)=0$$ $$pb-pbX-1-bX+p+pbX=0$$ $$bX=pb-1+p$$ $$X=\frac{bp-(1-p)}{b}$$ In your case, $$X=\frac{3\times\frac{1}{2}-(1-\frac{1}{2})}{3}=\frac{1}{3}.$$ You can readily expand this to multiple or continuous outcomes by solving the expected utility of wealth over a joint probability distribution, choosing the allocations and subject to any constraints. Interestingly, if you perform it in this manner, by including constraints such as the ability to meet mortgage payments and so forth, then you have accounted for your total set of risks and so you have a risk-adjusted or at least risk-controlled solution. Desiderata: The actual purpose of the original research had to do with how much to gamble based on a noisy signal. In the specific case, how much to gamble on a noisy electronic signal where it indicated the launch of nuclear weapons by the Soviet Union. There have been several near launches by both the United States and Russia, obviously in error. How much do you gamble on a signal?
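(A quick numerical check of the closed form above, using the same b = 3 and p = 1/2: a grid search over the expected log-growth lands on the same fraction.)

import numpy as np

b, p = 3.0, 0.5
X = np.linspace(0.01, 0.99, 99)
growth = p * np.log(1 + b * X) + (1 - p) * np.log(1 - X)   # expected log-growth per period

print((b * p - (1 - p)) / b)    # closed form: 0.333...
print(X[np.argmax(growth)])     # 0.33 from the grid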
{ "source": [ "https://stats.stackexchange.com/questions/401480", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/243765/" ] }
402,333
There seem to be two different worlds in statistics. On one hand, there are the practitioners who run the same tests again and again. On the other hand, there is this overwhelming and seemingly endless world of statistics and machine learning where one gets lost easily in specific questions - just like here on Cross Validated. So my question is: What do you consider a statistician / ML professional must know about statistics and machine learning? I know there will be comments that it depends on the area where you work. But still, there are things all statisticians (should) know, like multicollinearity, power analysis or linear regression. I really would love to have a profound foundation in statistics, but for me it is hard to tell where to go next. So if statistics and machine learning were a craft occupation, what knowledge and what tests / methods would you put in your toolbox? The answers to my question can give people who are at the beginning of their career a feeling for what needs to be known to be a statistician.
The two worlds that you describe aren't really two different kinds of statistician , but rather:
- "statistics on rails," to coin a phrase: an attempt to teach non-technical people enough to be able to use statistics in a few narrow contexts.
- statistics proper, as understood by mathematicians, statisticians, data scientists, etc.
The deal is this. To understand statistics in even moderate depth, you need to know a considerable amount of mathematics. You need to be comfortable with set theory, outer product spaces, functions between high dimensional spaces, a bit of linear algebra, a bit of calculus, and a smidgen of measure theory. It's not as bad as it sounds: all this is usually covered adequately in the first 2-3 years of undergraduate for hard science majors. But for other majors... I can't even formally define a random variable or the normal distribution for someone who doesn't have those prerequisites. Yet, most people only need to know how to conduct a simple A/B test or the like. And the fact is, we can give someone without those prerequisites a set of formulas and look-up tables and tell them to plug-and-chug. Or today, more commonly a user-friendly GUI program like SPSS. As long as they follow some reasonable rules of experiment design and follow a step-by-step procedure, they will be able to accomplish what they need to. The problem is that without a fairly in-depth understanding, they:
- are very likely to misuse statistics
- can't stray from the garden path
Issue one is so common it even gets its own Wikipedia article , and issue two can only really be addressed by going back to fundamentals and explaining where those tests came from in the first place. Or by continually exhorting people to stay within the lines, follow the checklist, and consult with a statistician if anything seems weird. The following poem comes to mind:
A little learning is a dangerous thing;
Drink deep, or taste not the Pierian spring:
There shallow draughts intoxicate the brain,
And drinking largely sobers us again.
- Alexander Pope, A Little Learning
The difference between the "on rails" version of statistics that you see in AP stats or early undergraduate classes for non-majors and statistics proper is like the difference between WebMD articles and going to med school. The information in the WebMD article is the most essential conclusion and summary of current medical recommendations. But it's not intended as a replacement for medical school, and I wouldn't call someone who had read a WebMD article "Doctor." What must one know in statistics and machine learning? The Kolmogorov axioms , the definition of a random variable (including random vectors, matrices, etc.), the algebra of random variables , the concept of a distribution and the various theorems that tie these together. You should know about moments . You should know the law of large numbers , the various inequality theorems such as Chebyshev's inequality and the central limit theorems , although if you want to know how to prove them (optional) you will also need to learn about characteristic functions , which can occasionally be useful in their own right if you ever need to calculate exact closed form distributions for, say, a ratio distribution . This stuff would usually be covered in the first (or maybe second?) semester of a class on mathematical statistics. There is also a reasonably good and completely free online textbook which I mainly use for reference but which does develop the topic starting from first principles.
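(As a tiny simulation sketch of the central limit theorem just mentioned: means of skewed exponential samples lose their skewness as the sample size grows.)

import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)
for n in (2, 10, 100):
    means = rng.exponential(scale=1.0, size=(10_000, n)).mean(axis=1)
    print(n, round(skew(means), 2))   # sample skewness shrinks toward 0 (the normal value) as n grows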
There are a few crucial distributions everyone must know: Normal, Binomial, Beta, Chi-Squared, F, Student's t, Multivariate Normal. Possibly also Poisson and Exponential for Poisson processes, Multinomial/Dirichlet if you work with multi-class data a lot, and others as needed. Oh, and Uniform - can't forget Uniform! At this point, you're ready to learn the basic structure of a hypothesis test ; which is to say, what a " sample " is, and about null hypotheses and critical values, etc. You will be able to use the algebra of random variables and integrals involving distributions to derive pretty much all of the statistical hypothesis tests you've seen in AP stats. But you're not really done, in fact we're just getting to the good part: fitting models to data. There are various procedures, but the first one to learn is MLE . For me personally, this is the only reason why we developed all the above machinery. The key thing to understand about fitting models is that we pose each one as an optimization problem where we (or rather, very powerful computers) find the "best" possible set of "parameters" for the model that "fit" a sample. The resulting model can be validated, examined and interpreted in various ways. The first two models to learn are linear regression and logistic regression , although if you've come this far the hard way you might as well study the GLM (generalized linear model), which includes them both and more besides. A very good book on using logistic regression in practice is Hosmer et al. Understanding these models in detail is very demanding, and encompasses ANOVA , regularization and many other useful techniques. If you're going to go around calling yourself a statistician, you will definitely want to complement all that theoretical knowledge with a solid, thorough understanding of the design of experiments and power analysis. This is one of the most common things statisticians are asked to provide input on. Depending on how much model building you're doing, you may also need to know about cross validation , feature selection, model selection, etc. Although maybe I'm biased towards model building and you could get away without this stuff? In any case, a reasonably good book, especially if you're using R, is Applied Predictive Modeling by Max Kuhn. At this point you'll have the "must know" knowledge you asked about. But you'll also have learned that inventing a new model is as easy as adding a new term to a loss function, and consequently a huge number of models and approaches exist. No one can learn them all. Sometimes it seems as if which ones are in fashion in a given field is completely arbitrary, or an accident of history. Instead of trying to learn them all, rest assured that you can use the foundation you've built to understand any particular model you need with a few hours of study, and focus on those that are commonly used in your field or which seem promising to you. What tests / methods would you put in your toolbox? All right, laundry list time! A lot of these come from The Elements of Statistical Learning, by Hastie, Tibshirani, and Friedman, which is a very good book by three highly respected authors. Another good resource is scikit-learn , which covers most of the most mature and popular models. Ditto for R's caret package , although it's really focused on predictive modeling. Others are just models I've seen mentioned and/or used frequently.
In roughly descending order of popularity:
- Ridge, Lasso, and ElasticNet Regression
- Local Regression (LOESS)
- Kernel Density Estimates
- PCA
- Factor Analysis
- K-means
- GMM (and other mixture models)
- Decision Trees, Random Forest, and XGBoost
- Time Series Analysis: ARIMA, possibly exponential smoothing
- SVM (Support Vector Machines)
- Hidden Markov Models
- GAM (Generalized Additive Models)
- Bayes Networks and Structural Equation Modeling
- Robust Regression
- Imputation
- Neural Nets, CNNs (for images), RNNs (for sequences). See the Deep Learning Book by Goodfellow, Bengio, and Courville.
- Bayesian Inference with MCMC a la Stan
- Survival Analysis (Cox PH, Kaplan-Meier estimator, etc.)
- Extreme value theory
- Vapnik–Chervonenkis theory
- Causality
- Pairwise/Preference modeling, e.g. Bradley-Terry
- IRT (item response theory, used for surveys and tests)
- Martingales
- Copulas
This is a pretty idiosyncratic list. Certainly I don't know everything on it, and even where I do, my knowledge level varies from superficial to long experience. That's going to be true for everyone. Everyone is going to have their own additions to this list, and above all their own priorities. Some people will tell you to dive right into neural nets and ignore the rest. Some people (actuaries) spend their entire career focusing on survival analysis and extreme value theory. I can't give you any real guidance except to study techniques that are used in your field and apply to your problems.
{ "source": [ "https://stats.stackexchange.com/questions/402333", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/-1/" ] }
402,618
$\sin(x)$ seems to be zero-centered, which is a desirable property for activation functions. Even the gradient won't vanish at any point. I am not sure if the oscillating nature of the function or its gradient can cause any issue during backpropagation.
Here's a paper dedicated to this very question: Parascandolo and Virtanen (2016) . Taming the waves: sine as activation function in deep neural networks. Some key points from the paper: Sinusoidal activation functions have been largely ignored, and are considered difficult to train. They review past work that has used sinusoidal activation functions. Most of this is earlier work (before the modern 'deep learning' boom). But, there are a couple more recent papers. The periodic nature of sinusoidal activation functions can give rise to a 'rippling' cost function with bad local minima, which may make training difficult. The problem may not be so bad when the data is dominated by low-frequency components (which is expected for many real-world datasets). Learning is easier in this regime, but is sensitive to how network parameters are initialized. They show that networks with sinusoidal activation functions can perform reasonably well on a couple real-world datasets. But, after training, the networks don't really use the periodic nature of the activation function. Rather, they only use the central region near zero, where the sinusoid is nearly equivalent to the more traditional $\tanh$ activation function. They trained recurrent networks on a synthetic task where periodic structure is expected to be helpful. Networks learn faster and are more accurate using $\sin$ compared to $\tanh$ activation functions. But, the difference is bigger for vanilla RNNs than LSTMs. Here's another relevant paper: Ramachandran, Zoph, Le (2017) . Searching for Activation Functions. They performed a large-scale, automatic search over activation functions to find new variants that perform well. Some of the activation functions they discovered use sinusoidal components (but they're not pure sinusoids--they also tend to have a monotonic component). The paper doesn't discuss these variants much, except to say that they're an interesting future research direction.
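For reference, here is a minimal sketch (with hypothetical layer sizes, not taken from either paper) of what "sine as the activation function" looks like in a small PyTorch network:

import torch
import torch.nn as nn

class SinNet(nn.Module):
    def __init__(self, d_in=10, d_hidden=32, d_out=1):
        super().__init__()
        self.fc1 = nn.Linear(d_in, d_hidden)
        self.fc2 = nn.Linear(d_hidden, d_out)

    def forward(self, x):
        return self.fc2(torch.sin(self.fc1(x)))   # sin in place of tanh/ReLU

net = SinNet()
print(net(torch.randn(4, 10)).shape)   # torch.Size([4, 1])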
{ "source": [ "https://stats.stackexchange.com/questions/402618", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/243881/" ] }
402,633
Let's say I have a word embedding model, a set of documents and N categories. Let's say the categories are "cars" and "planes". I want to categorize the documents as either being about cars or planes. So, I pick a bag of words from the document from which I remove stop words and the like. I take the resulting list of words and average out the distance between the word vector for "car" and the words in my list. I do the same for the word "plane". Finally, I categorize the document as belonging to the category which gives me the smallest average distance between the category and the words in the document. The reason I am asking this is because I see that "cosine similarity" is a standard feature in gensim. Is that a better way to do classification of documents? I always thought that similar words in a well-trained word embedding model would have a shorter Euclidean distance between them than words that differ in meaning. When is it better to use cosine similarity? Also, are there other methods I should consider?
{ "source": [ "https://stats.stackexchange.com/questions/402633", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/135866/" ] }
408,408
Marginal generally refers to something that's a small effect, something that's on the outside of a bigger system. It tends to diminish the importance of whatever is described as "marginal". So how does that apply to the probability of a subset of random variables? Assuming that words get used because of their meaning can be a risky proposition in mathematics, so I know there isn't necessarily an answer here, but sometimes the answer to this sort of question can help you to gain genuine insight, hence why I'm asking.
Consider the table below (copied from this website ) representing joint probabilities of outcomes from rolling two dice: In this common and natural way of showing the distribution, the marginal probabilities of the outcomes from the individual dice are written literally in the margins of the table (the highlighted row/column). Of course we can't really construct such tables for continuous random variables, but anyway I'd guess that this is the origin of the term.
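As a quick illustration of the idea (my own sketch, not part of the answer above), the joint distribution of two fair dice can be tabulated with NumPy and the marginals read off by summing across rows or columns, exactly as the margins of the table do:

import numpy as np

joint = np.full((6, 6), 1 / 36)        # P(die 1 = i, die 2 = j) for a fair, independent pair
marginal_die1 = joint.sum(axis=1)      # sum over die 2: the row margin
marginal_die2 = joint.sum(axis=0)      # sum over die 1: the column margin
print(marginal_die1)                   # each entry is 1/6
print(marginal_die2)                   # each entry is 1/6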
{ "source": [ "https://stats.stackexchange.com/questions/408408", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/247953/" ] }
409,843
I'm confused with this term: covariate. What is it? Is it just the observed outcomes of some random variables that contain information that could help us enhance our prediction of another random variable that we haven't observed yet? Why is it named so? Also there seems to be another term: independent variable. Independent of what? Why is it named so?
From Wikipedia : Depending on the context, an independent variable is sometimes called a "predictor variable", regressor, covariate , "controlled variable", "manipulated variable", "explanatory variable", exposure variable (see reliability theory), "risk factor" (see medical statistics), "feature" (in machine learning and pattern recognition) or "input variable." In econometrics, the term "control variable" is usually used instead of " covariate ". Answering (some of) your questions: Assume that you are solving linear regression, where you are trying to find a relation $\textbf{y} = f(\textbf{X})$ . In this case, $\textbf{X}$ are independent variables and $\textbf{y}$ is the dependent variable. Typically, $\textbf{X}$ consists of multiple variables which may have some relations between them, i.e. they "co-vary" -- hence the term "covariate". Let's take a concrete example. Suppose you wish to predict the price of a house in a neighborhood, $\textbf{y}$ using the following "co-variates", $\textbf{X}$ : Width, $x_1$ Breadth, $x_2$ Number of floors, $x_3$ Area of the house, $x_4$ Distance to downtown, $x_5$ Distance to hospital, $x_6$ For a linear regression problem, $\textbf{y} = f(\textbf{X})$ the price of the house is dependent on all co-variates, i.e. $\textbf{y}$ is dependent on $\textbf{X}$ . Do any of the co-variates depend on the price of the house? In other words, is $\textbf{X}$ dependent on $\textbf{y}$ ? The answer is NO. Hence, $\textbf{X}$ is the independent variable and $\textbf{y}$ is the dependent variable. This encapsulates a cause and effect relationship. If the independent variable changes, its effect is seen on the dependent variable. Now, are all the co-variates independent of each other ? The answer is NO! A better answer is, well it depends! The area of the house ( $x_4$ ) is dependent on the width ( $x_1$ ), breadth ( $x_2$ ) and the number of floors ( $x_3$ ), whereas, distances to downtown ( $x_5$ ) and hospital ( $x_6$ ) are independent of the area of the house ( $x_4$ ). Hope that helps!
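To make the "co-varying" point concrete, here is a small simulated sketch (the numbers and variable names are made up for illustration, not taken from the answer): area is built from width, breadth and floors, so those covariates are correlated with each other, while distance to downtown is not.

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 500
width = rng.uniform(5, 15, n)
breadth = rng.uniform(5, 15, n)
floors = rng.integers(1, 4, n)
area = width * breadth * floors              # depends on width, breadth and floors
dist_downtown = rng.uniform(1, 30, n)        # unrelated to the size of the house

X = pd.DataFrame({"width": width, "breadth": breadth, "floors": floors,
                  "area": area, "dist_downtown": dist_downtown})
print(X.corr().round(2))    # area correlates strongly with width/breadth/floors,
                            # dist_downtown correlates with none of them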
{ "source": [ "https://stats.stackexchange.com/questions/409843", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/100507/" ] }
411,699
To calculate the confidence interval (CI) for the mean with unknown population standard deviation (sd), we estimate the population standard deviation by employing the t-distribution. Notably, $CI=\bar{X} \pm Z_{95\% }\sigma_{\bar X}$ where $\sigma_{\bar X} = \frac{\sigma}{\sqrt n}$. But because we do not have a point estimate of the standard deviation of the population, we estimate through the approximation $CI=\bar{X} \pm t_{95\% }(se)$ where $se = \frac{s}{\sqrt n}$. In contrast, for a population proportion, to calculate the CI, we approximate as $CI = \hat{p} \pm Z_{95\% }(se)$ where $se = \sqrt\frac{\hat{p}(1-\hat{p})}{n}$, provided $n \hat{p} \ge 15$ and $n(1-\hat{p}) \ge 15$. My question is: why are we content to use the standard normal distribution for the population proportion, rather than the t-distribution?
Both the standard Normal and Student t distributions are rather poor approximations to the distribution of $$Z = \frac{\hat p - p}{\sqrt{\hat p(1-\hat p)/n}}$$ for small $n,$ so poor that the error dwarfs the differences between these two distributions. Here is a comparison of all three distributions (omitting the cases where $\hat p$ or $1-\hat p$ are zero, where the ratio is undefined) for $n=10, p=1/2:$ The "empirical" distribution is that of $Z,$ which must be discrete because the estimates $\hat p$ are limited to the finite set $\{0, 1/n, 2/n, \ldots, n/n\}.$ The $t$ distribution appears to do a better job of approximation. For $n=30$ and $p=1/2,$ you can see the difference between the standard Normal and Student t distributions is completely negligible: Because the Student t distribution is more complicated than the standard Normal (it's really an entire family of distributions indexed by the "degrees of freedom," formerly requiring entire chapters of tables rather than a single page), the standard Normal is used for almost all approximations.
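A rough simulation (my own, not part of the original answer) makes the same point numerically: for $n=10$ the tail quantiles of $Z$ are not matched well by either reference distribution, and re-running with $n=30$ shows the three sets of quantiles drawing much closer together.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, p, reps = 10, 0.5, 200_000
p_hat = rng.binomial(n, p, reps) / n
keep = (p_hat > 0) & (p_hat < 1)                    # drop the cases where Z is undefined
z = (p_hat[keep] - p) / np.sqrt(p_hat[keep] * (1 - p_hat[keep]) / n)

for q in (0.025, 0.975):
    print(q,
          np.quantile(z, q),          # empirical quantile of Z
          stats.norm.ppf(q),          # standard normal quantile
          stats.t.ppf(q, df=n - 1))   # Student t quantile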
{ "source": [ "https://stats.stackexchange.com/questions/411699", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/89571/" ] }
411,893
So here I am studying inference. I would like someone to enumerate the advantages of the exponential family. By exponential family, I mean the distributions which are given as \begin{align*} f(x|\theta) = h(x)\exp\left\{\eta(\theta)T(x) - B(\theta)\right\} \end{align*} whose support doesn't depend on the parameter $\theta$. Here are some advantages I found: (a) It incorporates a wide variety of distributions. (b) It offers a natural sufficient statistic $T(x)$ according to the Neyman-Fisher theorem. (c) It makes it possible to provide a nice formula for the moment generating function of $T(x)$. (d) It makes it easy to decouple the relationship between the response and predictor from the conditional distribution of the response (via link functions). Can anyone provide any other advantage?
...why should we study it and use it? I think your list of advantages effectively answers your own question, but let me offer some meta-mathematical commentary that might elucidate this topic. Generally speaking, mathematicians like to generalise concepts and results up to the maximal point that they can, to the limits of their usefulness. That is, when mathematicians develop a concept, and find that one or more useful theorems apply to that concept, they will generally seek to generalise the concept and results more and more, until they get to the point where further generalisation would render the results inapplicable or no longer useful. As can be seen from your list, the exponential family has a number of useful theorems attached to it, and it encompasses a wide class of distributions. This is sufficient to make it a worthy object of study, and a useful mathematical class in practice. Can anyone provide any other advantage? This class has various good properties in Bayesian analysis. In particular, the exponential family distributions always have conjugate priors, and the resulting posterior predictive distribution has a simple form. This makes it an extremely useful class of distributions in Bayesian statistics. Indeed, it allows you to undertake Bayesian analysis using conjugate priors at an extremely high level of generality, encompassing all the distributional families in the exponential family.
{ "source": [ "https://stats.stackexchange.com/questions/411893", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/-1/" ] }
413,326
First of all, I'm not asking this: Why does zero correlation not imply independence? This is addressed (rather nicely) here: https://math.stackexchange.com/questions/444408/why-does-zero-correlation-not-imply-independence What I'm asking is the opposite...say two variables are entirely independent of one another. Couldn't they have a tiny bit of correlation by accident? Shouldn't it be...independence implies VERY LITTLE correlation?
By the definition of the correlation coefficient, if two variables are independent their correlation is zero, so they cannot have any (population) correlation by accident. $$\rho_{X,Y}=\frac{\operatorname{E}[XY]-\operatorname{E}[X]\operatorname{E}[Y]}{\sqrt{\operatorname{E}[X^2]-[\operatorname{E}[X]]^2}~\sqrt{\operatorname{E}[Y^2]- [\operatorname{E}[Y]]^2}}$$ If $X$ and $Y$ are independent, then $\operatorname{E}[XY]= \operatorname{E}[X]\operatorname{E}[Y]$, so the numerator of $\rho_{X,Y}$ is zero in this case. So, if you don't change the meaning of correlation, as mentioned here, it is not possible. It is only possible if you mean something else by "correlation", for instance the sample correlation computed from a finite number of observations, which can indeed differ from zero by chance.
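A quick check of that last point (my own example, not part of the original answer): two independently generated samples usually show a small but nonzero sample correlation, even though the population correlation is exactly zero.

import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=100)
y = rng.normal(size=100)            # generated independently of x
print(np.corrcoef(x, y)[0, 1])      # small, but almost never exactly zero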
{ "source": [ "https://stats.stackexchange.com/questions/413326", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/217385/" ] }
413,606
I'm working on a logistic regression model. I checked the summary of the model, which is built on 5 independent variables, out of which one is not significant, with a p-value of 0.74. I wish to know: do we directly remove the variable, or is there any other way to check its significance? A senior of mine suggested doing a logarithmic transformation of the insignificant variable and then looking for correlation. Will that count as checking its significance?

model <- glm(Buy ~ a_score + b_score + c_score + lb + p, data = history, family = binomial)

All variables come out to be significant with 2 or 3 stars apart from a_score, which is shown as insignificant.
Let me first ask this: What is the goal of the model? If you are only interested in predicting if a customer will buy, then statistical hypothesis tests really aren't your main concern. Instead, you should be externally validating your model via a validation/test procedure on unseen data. If, instead, you are interested in examining which factors contribute to the probability of a customer buying, then there is no need to remove variables which fail to reject the null (especially in a stepwise sort of manner). Presumably, you included a variable in your model because you thought (from past experience or expert opinion) that it played an important part in a customer deciding if they will buy. That the variable failed to reject the null doesn't make your model a bad one, it just means that your sample didn't detect an effect of that variable. That's perfectly ok.
{ "source": [ "https://stats.stackexchange.com/questions/413606", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/-1/" ] }
414,349
I've fitted my model and am trying to understand whether it's any good. I've calculated the recommended metrics to assess it ( $R^2$ / AUC / accuracy / prediction error / etc) but do not know how to interpret them. In short, how do I tell if my model is any good based on the metric? Is an $R^2$ of 0.6 (for example) sufficient to let me proceed to draw inferences or base scientific/business decisions? This question is intentionally broad, to cover a wide variety of situations that members frequently encounter; such questions could be closed as duplicates of this one. Edits to broaden the scope beyond the metrics mentioned here are welcome, as are additional answers - particularly those that offer insight about other classes of metrics.
This answer will mostly focus on $R^2$, but most of this logic extends to other metrics such as AUC and so on. This question can almost certainly not be answered well for you by readers at CrossValidated. There is no context-free way to decide whether model metrics such as $R^2$ are good or not. At the extremes, it is usually possible to get a consensus from a wide variety of experts: an $R^2$ of almost 1 generally indicates a good model, and of close to 0 indicates a terrible one. In between lies a range where assessments are inherently subjective. In this range, it takes more than just statistical expertise to answer whether your model metric is any good. It takes additional expertise in your area, which CrossValidated readers probably do not have. Why is this? Let me illustrate with an example from my own experience (minor details changed). I used to do microbiology lab experiments. I would set up flasks of cells at different levels of nutrient concentration, and measure the growth in cell density (i.e. slope of cell density against time, though this detail is not important). When I then modelled this growth/nutrient relationship, it was common to achieve $R^2$ values of >0.90. I am now an environmental scientist. I work with datasets containing measurements from nature. If I try to fit the exact same model described above to these ‘field’ datasets, I’d be surprised if the $R^2$ was as high as 0.4. These two cases involve exactly the same parameters, with very similar measurement methods, models written and fitted using the same procedures - and even the same person doing the fitting! But in one case, an $R^2$ of 0.7 would be worryingly low, and in the other it would be suspiciously high. Furthermore, we would take some chemistry measurements alongside the biological measurements. Models for the chemistry standard curves would have $R^2$ around 0.99, and a value of 0.90 would be worryingly low. What leads to these big differences in expectations? Context. That vague term covers a vast area, so let me try to separate it into some more specific factors (this is likely incomplete): 1. What is the payoff / consequence / application? This is where the nature of your field is likely to be most important. However valuable I think my work is, bumping up my model $R^2$s by 0.1 or 0.2 is not going to revolutionize the world. But there are applications where that magnitude of change would be a huge deal! A much smaller improvement in a stock forecast model could mean tens of millions of dollars to the firm that develops it. This is even easier to illustrate for classifiers, so I’m going to switch my discussion of metrics from $R^2$ to accuracy for the following example (ignoring the weakness of the accuracy metric for the moment). Consider the strange and lucrative world of chicken sexing. After years of training, a human can rapidly tell the difference between a male and female chick when they are just 1 day old. Males and females are fed differently to optimize meat & egg production, so high accuracy saves huge amounts in misallocated investment in billions of birds. Till a few decades ago, accuracies of about 85% were considered high in the US. Nowadays, the value of achieving the very highest accuracy, of around 99%? A salary that can apparently range as high as 60,000 to possibly 180,000 dollars per year (based on some quick googling).
Since humans are still limited in the speed at which they work, machine learning algorithms that can achieve similar accuracy but allow sorting to take place faster could be worth millions. (I hope you enjoyed the example – the alternative was a depressing one about very questionable algorithmic identification of terrorists). 2. How strong is the influence of unmodelled factors in your system? In many experiments, you have the luxury of isolating the system from all other factors that may influence it (that’s partly the goal of experimentation, after all). Nature is messier. To continue with the earlier microbiology example: cells grow when nutrients are available but other things affect them too – how hot it is, how many predators there are to eat them, whether there are toxins in the water. All of those covary with nutrients and with each other in complex ways. Each of those other factors drives variation in the data that is not being captured by your model. Nutrients may be unimportant in driving variation relative to the other factors, and so if I exclude those other factors, my model of my field data will necessarily have a lower $R^2$ . 3. How precise and accurate are your measurements? Measuring the concentration of cells and chemicals can be extremely precise and accurate. Measuring (for example) the emotional state of a community based on trending twitter hashtags is likely to be…less so. If you cannot be precise in your measurements, it is unlikely that your model can ever achieve a high $R^2$ . How precise are measurements in your field? We probably do not know. 4. Model complexity and generalizability If you add more factors to your model, even random ones, you will on average increase the model $R^2$ (adjusted $R^2$ partly addresses this). This is overfitting . An overfit model will not generalize well to new data i.e. will have higher prediction error than expected based on the fit to the original (training) dataset. This is because it has fit the noise in the original dataset. This is partly why models are penalized for complexity in model selection procedures, or subjected to regularization. If overfitting is ignored or not successfully prevented, the estimated $R^2$ will be biased upward i.e. higher than it ought to be. In other words, your $R^2$ value can give you a misleading impression of your model’s performance if it is overfit. IMO, overfitting is surprisingly common in many fields. How best to avoid this is a complex topic, and I recommend reading about regularization procedures and model selection on this site if you are interested in this. 5. Data range and extrapolation Does your dataset extend across a substantial portion of the range of X values you are interested in? Adding new data points outside the existing data range can have a large effect on estimated $R^2$ , since it is a metric based on the variance in X and Y. Aside from this, if you fit a model to a dataset and need to predict a value outside the X range of that dataset (i.e. extrapolate ), you might find that its performance is lower than you expect. This is because the relationship you have estimated might well change outside the data range you fitted. In the figure below, if you took measurements only in the range indicated by the green box, you might imagine that a straight line (in red) described the data well. But if you attempted to predict a value outside that range with that red line, you would be quite incorrect. 
[The figure is an edited version of this one , found via a quick google search for 'Monod curve'.] 6. Metrics only give you a piece of the picture This is not really a criticism of the metrics – they are summaries , which means that they also throw away information by design. But it does mean that any single metric leaves out information that can be crucial to its interpretation. A good analysis takes into consideration more than a single metric. Suggestions, corrections and other feedback welcome. And other answers too, of course.
{ "source": [ "https://stats.stackexchange.com/questions/414349", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/121522/" ] }
414,367
A cigarette producer sent two samples, one to each laboratory, which he believes to be identical. The laboratories determined the concentration (in mg) of nicotine in each sample and they obtained the following results: (1) 24, 27, 26, 21, 24 and (2) 27, 28, 23, 31, 26. Is there any difference in the measurements? Assume normality and common variance and significance level $\alpha = 0.05$. MY ATTEMPT As far as I have understood, we are interested in testing the hypothesis $H_{0}: \mu_{1} = \mu_{2}$. My question is: how do we tackle this problem if we don't know the variances? Precisely speaking, I'd like to know which statistic we should use to solve the problem.
{ "source": [ "https://stats.stackexchange.com/questions/414367", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/-1/" ] }
416,129
Recently, two different co-workers have used a kind of argument about differences between conditions that seems incorrect to me. Both of these co-workers use statistics, but they are not statisticians. I am a novice in statistics. In both cases, I argued that, because there was no significant difference between two conditions in an experiment, it was incorrect to make a general claim about these groups with regard to the manipulation. Note that "making a general claim" means something like writing: "Group A used X more often than group B". My co-workers retorted with: "even though there is no significant difference, the trend is still there" and "even though there is no significant difference, there is still a difference". To me, both of these sound like an equivocation, i.e., they changed the meaning of "difference" from: "a difference that is likely to be the result of something other than chance" (i.e., statistical significance), to "any non-zero difference in measurement between groups". Was the response of my co-workers correct? I did not take it up with them because they outrank me.
This is a great question; the answer depends a lot on context. In general I would say you are right : making an unqualified general claim like "group A used X more often than group B" is misleading. It would be better to say something like in our experiment group A used X more often than group B, but we're very uncertain how this will play out in the general population or although group A used X 13% more often than group B in our experiment, our estimate of the difference in the general population is not clear : the plausible values range from A using X 5% less often than group B to A using X 21% more often than group B or group A used X 13% more often than group B, but the difference was not statistically significant (95% CI -5% to 21%; p=0.75) On the other hand: your co-workers are right that in this particular experiment , group A used X more often than group B. However, people rarely care about the participants in a particular experiment; they want to know how your results will generalize to a larger population, and in this case the general answer is that you can't say with confidence whether a randomly selected group A will use X more or less often than a randomly selected group B. If you needed to make a choice today about whether to use treatment A or treatment B to increase the usage of X, in the absence of any other information or differences in costs etc., then choosing A would be your best bet. But if you wanted be comfortable that you were probably making the right choice, you would need more information. Note that you should not say "there is no difference between group A and group B in their usage of X", or "group A and group B use X the same amount". This is true neither of the participants in your experiment (where A used X 13% more) or in the general population; in most real-world contexts, you know that there must really be some effect (no matter how slight) of A vs. B; you just don't know which direction it goes.
{ "source": [ "https://stats.stackexchange.com/questions/416129", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/252813/" ] }
416,204
Under mixture of two normal distributions: https://en.wikipedia.org/wiki/Multimodal_distribution#Mixture_of_two_normal_distributions "A mixture of two normal distributions has five parameters to estimate: the two means, the two variances and the mixing parameter. A mixture of two normal distributions with equal standard deviations is bimodal only if their means differ by at least twice the common standard deviation." I am looking for a derivation or intuitive explanation as to why this is true. I believe it may be able to be explained in the form of a two sample t test: $$\frac{\mu_1-\mu_2}{\sigma_p}$$ where $\sigma_p$ is the pooled standard deviation.
This figure from the paper linked in that wiki article provides a nice illustration: The proof they provide is based on the fact that normal distributions are concave within one SD of their mean (the SD being the inflection point of the normal pdf, where it goes from concave to convex). Thus, if you add two normal pdfs together (in equal proportions), then as long as their means differ by less than two SDs, the sum-pdf (i.e. the mixture) will be concave in the region between the two means, and therefore the global maximum must be at the point exactly between the two means. Reference: Schilling, M. F., Watkins, A. E., & Watkins, W. (2002). Is Human Height Bimodal? The American Statistician, 56(3), 223–229. doi:10.1198/00031300265
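A quick numerical check of the threshold (my own sketch, not from the paper): count the interior modes of an equal-weight mixture of $N(0,1)$ and $N(d,1)$ on a fine grid for separations just below and just above $2$.

import numpy as np
from scipy import stats

def n_modes(d, sd=1.0):
    x = np.linspace(-6 * sd, d + 6 * sd, 20001)
    pdf = 0.5 * stats.norm.pdf(x, 0, sd) + 0.5 * stats.norm.pdf(x, d, sd)
    is_peak = (pdf[1:-1] > pdf[:-2]) & (pdf[1:-1] > pdf[2:])   # strict local maxima on the grid
    return int(is_peak.sum())

for d in (1.0, 1.9, 2.1, 3.0):
    print(d, n_modes(d))    # 1, 1, 2, 2: bimodality appears once the separation exceeds 2 SDs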
{ "source": [ "https://stats.stackexchange.com/questions/416204", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/248258/" ] }
416,304
Could anyone give me some practical examples of the Cauchy Distribution? What makes it so popular?
In addition to its usefulness in physics, the Cauchy distribution is commonly used in models in finance to represent deviations in returns from the predictive model. The reason for this is that practitioners in finance are wary of using models that have light-tailed distributions (e.g., the normal distribution) on their returns, and they generally prefer to go the other way and use a distribution with very heavy tails (e.g., the Cauchy). The history of finance is littered with catastrophic predictions based on models that did not have heavy enough tails in their distributions. The Cauchy distribution has sufficiently heavy tails that its moments do not exist, and so it is an ideal candidate to give an error term with extremely heavy tails. Note that this issue of the fatness of tails in error terms in finance models was one of the main contents of the popular critique by Taleb (2007) . In that book, Taleb points out instances where financial models have used the normal distribution for error terms, and he notes that this underestimates the true probability of extreme events, which are particularly important in finance. (In my view this book gives an exaggerated critique, since models using heavy-tailed deviations are in fact quite common in finance. In any case, the popularity of this book shows the importance of the issue.)
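One way to see the "no moments" point numerically (my own illustration, not part of the answer): the running mean of Cauchy draws never settles down, while the running mean of normal draws converges.

import numpy as np

rng = np.random.default_rng(3)
n = 100_000
cauchy = rng.standard_cauchy(n)
normal = rng.standard_normal(n)

def running_mean(x):
    return np.cumsum(x) / np.arange(1, len(x) + 1)

for k in (100, 1_000, 10_000, 100_000):
    print(k, running_mean(cauchy)[k - 1], running_mean(normal)[k - 1])
# the normal column converges to 0; the Cauchy column keeps jumping around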
{ "source": [ "https://stats.stackexchange.com/questions/416304", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/246805/" ] }
418,417
I think I have already understood the mathematical definition of a consistent estimator. Correct me if I'm wrong: $W_n$ is a consistent estimator for $\theta$ if $\forall \epsilon>0$ $$\lim_{n\to\infty} P(|W_n - \theta|> \epsilon) = 0, \quad \forall\theta \in \Theta$$ where $\Theta$ is the parameter space. But I want to understand the need for an estimator to be consistent. Why is an estimator that is not consistent bad? Could you give me some examples? I accept simulations in R or Python.
If the estimator is not consistent, it won't converge to the true value in probability. In other words, there is always a probability that your estimator and the true value will differ, no matter how many data points you have. This is actually bad, because even if you collect an immense amount of data, your estimate will always have a positive probability of being some $\epsilon>0$ different from the true value. Practically, you can consider this situation as if you're using an estimator of a quantity such that even surveying the whole population, instead of a small sample of it, won't help you.
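Here is a small simulation along those lines (my own sketch, with arbitrary numbers): the sample mean is consistent for the true mean, whereas the estimator that just reports the first observation is not, so its probability of being more than $\epsilon$ away from the truth never shrinks.

import numpy as np

rng = np.random.default_rng(4)
true_mean, sd, eps, reps = 5.0, 2.0, 0.5, 5_000

for n in (10, 100, 1_000):
    samples = rng.normal(true_mean, sd, size=(reps, n))
    mean_est = samples.mean(axis=1)    # consistent estimator
    first_obs = samples[:, 0]          # inconsistent: it ignores the other n - 1 observations
    print(n,
          np.mean(np.abs(mean_est - true_mean) > eps),    # goes to 0 as n grows
          np.mean(np.abs(first_obs - true_mean) > eps))   # stays around 0.8 for every n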
{ "source": [ "https://stats.stackexchange.com/questions/418417", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/236318/" ] }
418,803
What is the difference between extrapolation and interpolation, and what is the most precise way of using these terms? For example, I have seen a statement in a paper using interpolation as: "The procedure interpolates the shape of the estimated function between the bin points" A sentence that uses both extrapolation and interpolation is, for example: The previous step where we extrapolated the interpolated function using the Kernel method to the left and right temperature tails. Can someone provide a clear and easy way to distinguish them and guide how to use these terms correctly with an example?
To add a visual explanation to this: let's consider a few points that you plan to model. They look like they could be described well with a straight line, so you fit a linear regression to them: This regression line lets you both interpolate (generate expected values in between your data points) and extrapolate (generate expected values outside the range of your data points). I've highlighted the extrapolation in red and the biggest region of interpolation in blue. To be clear, even the tiny regions between the points are interpolated, but I'm only highlighting the big one here. Why is extrapolation generally more of a concern? Because you're usually much less certain about the shape of the relationship outside the range of your data. Consider what might happen when you collect a few more data points (hollow circles): It turns out that the relationship was not captured well with your hypothesized relationship after all. The predictions in the extrapolated region are way off. Even if you had guessed the precise function that describes this nonlinear relationship correctly, your data did not extend over enough of a range for you to capture the nonlinearity well, so you may still have been pretty far off. Note that this is a problem not just for linear regression, but for any relationship at all - this is why extrapolation is considered dangerous. Predictions in the interpolated region are also incorrect because of the lack of nonlinearity in the fit, but their prediction error is much lower. There's no guarantee that you won't have an unexpected relationship in between your points (i.e. the region of interpolation), but it's generally less likely. I will add that extrapolation is not always a terrible idea - if you extrapolate a tiny bit outside the range of your data, you're probably not going to be very wrong (though it is possible!). Ancients who had no good scientific model of the world would not have been far wrong if they forecast that the sun would rise again the next day and the day after that (though one day far into the future, even this will fail). And sometimes, extrapolation can even be informative - for example, simple short-term extrapolations of the exponential increase in atmospheric CO $_2$ have been reasonably accurate over the past few decades. If you were a student who didn't have scientific expertise but wanted a rough, short-term forecast, this would have given you fairly reasonable results. But the farther away from your data you extrapolate, the more likely your prediction is likely to fail, and fail disastrously, as described very nicely in this great thread: What is wrong with extrapolation? (thanks to @J.M.isnotastatistician for reminding me of that). Edit based on comments: whether interpolating or extrapolating, it's always best to have some theory to ground expectations. If theory-free modelling must be done, the risk from interpolation is usually less than that from extrapolation. That said, as the gap between data points increases in magnitude, interpolation also becomes more and more fraught with risk.
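Here is a toy numerical version of the same story (my own example, using a saturating curve rather than the exact figure in the answer): a straight line fitted on a narrow range of x predicts well inside that range and badly outside it.

import numpy as np

rng = np.random.default_rng(5)

def f(x):                               # the true, saturating relationship
    return x / (1 + x)

x_train = rng.uniform(0.0, 0.5, 50)     # data only from a narrow range
y_train = f(x_train) + rng.normal(0, 0.01, 50)
slope, intercept = np.polyfit(x_train, y_train, 1)

for x_new in (0.25, 2.0, 5.0):          # interpolation, then extrapolation
    prediction = slope * x_new + intercept
    print(x_new, round(prediction, 3), round(f(x_new), 3))
# the prediction at 0.25 is close to the truth; at 2.0 and 5.0 it is far off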
{ "source": [ "https://stats.stackexchange.com/questions/418803", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/153107/" ] }
419,557
Usually a probability distribution over discrete variables is described using a probability mass function (PMF): When working with continuous random variables, we describe probability distributions using a probability density function (PDF) rather than a probability mass function. -- Deep Learning by Goodfellow, Bengio, and Courville However, Wolfram MathWorld uses PDF to describe the probability distribution over discrete variables: Is this a mistake, or does it not matter much?
It is not a mistake In the formal treatment of probability, via measure theory, a probability density function is a derivative of the probability measure of interest, taken with respect to a "dominating measure" (also called a "reference measure"). For discrete distributions over the integers, the probability mass function is a density function with respect to counting measure . Since a probability mass function is a particular type of probability density function, you will sometimes find references like this that refer to it as a density function, and they are not wrong to refer to it this way. In ordinary discourse on probability and statistics, one often avoids this terminology, and draws a distinction between "mass functions" (for discrete random variables) and "density functions" (for continuous random variables), in order to distinguish discrete and continuous distributions. In other contexts, where one is stating holistic aspects of probability, it is often better to ignore the distinction and refer to both as "density functions".
{ "source": [ "https://stats.stackexchange.com/questions/419557", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/247356/" ] }
419,751
Applying the softmax function to a vector will produce "probabilities" and values between $0$ and $1$. But we can also divide each value by the sum of the vector, and that will also produce probabilities and values between $0$ and $1$. I read the answer on here, but it says that the reason is that softmax is differentiable, although both functions are differentiable.
The function you propose has a singularity whenever the sum of the elements is zero. Suppose your vector is $[-1, \frac{1}{3}, \frac{2}{3}]$ . This vector has a sum of 0, so division is not defined. The function is not differentiable here. Additionally, if one or more of the elements of the vector is negative but the sum is nonzero, your result is not a probability. Suppose your vector is $[-1, 0, 2]$ . This has a sum of 1, so applying your function results in $[-1, 0, 2]$ , which is not a probability vector because it has negative elements, and elements exceeding 1. Taking a wider view, we can motivate the specific form of the softmax function from the perspective of extending binary logistic regression to the case of three or more categorical outcomes. Doing things like taking absolute values or squares, as suggested in comments, means that $-x$ and $x$ have the same predicted probability; this means the model is not identified . By contrast, $\exp(x)$ is monotonic and positive for all real $x$ , so the softmax result is (1) a probability vector and (2) the multinomial logistic model is identified.
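A quick numerical check of the two failure modes described above (my own snippet, not part of the answer):

import numpy as np

def naive_normalize(v):
    return v / v.sum()

def softmax(v):
    e = np.exp(v - v.max())            # subtract the max for numerical stability
    return e / e.sum()

print(naive_normalize(np.array([-1.0, 0.0, 2.0])))   # [-1, 0, 2]: not a probability vector
# naive_normalize(np.array([-1.0, 1/3, 2/3]))        # sum is 0, so the result is undefined
print(softmax(np.array([-1.0, 0.0, 2.0])))           # strictly positive entries summing to 1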
{ "source": [ "https://stats.stackexchange.com/questions/419751", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/116480/" ] }
420,525
I'm writing an example and have made up some data. I want it to be clear to the reader this is not real data, but I also don't want to give the impression of malice, since it just serves as an example. There is no (pseudo)random component to this particular data, so it seems to me that 'simulated' is not appropriate. If I call it fictitious or fabricated, does that give the impression of fraudulent data? Is 'made-up' a word that would fit in a scientific context? What is the terminology in statistical literature for non-simulated made-up data?
I would probably call this "synthetic" or "artificial" data, though I might also call it "simulated" (the simulation is just very simple).
{ "source": [ "https://stats.stackexchange.com/questions/420525", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/176202/" ] }
421,676
We roll a 6-sided die a large number of times. Calculating the difference (absolute value) between a roll and its preceding roll, are the differences expected to be uniformly distributed? To illustrate with 10 rolls:

roll num   result   diff
1          1        0
2          2        1
3          1        1
4          3        2
5          3        0
6          5        2
7          1        4
8          6        5
9          4        2
10         4        0

Would the diff values be uniformly distributed?
No, it is not uniform. You can count the $36$ equally likely possibilities for the absolute differences:

                second
          1  2  3  4  5  6
first  1  0  1  2  3  4  5
       2  1  0  1  2  3  4
       3  2  1  0  1  2  3
       4  3  2  1  0  1  2
       5  4  3  2  1  0  1
       6  5  4  3  2  1  0

which gives a probability distribution for the absolute differences of:

diff   probability
0      6/36  = 1/6
1      10/36 = 5/18
2      8/36  = 2/9
3      6/36  = 1/6
4      4/36  = 1/9
5      2/36  = 1/18
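The same distribution can be obtained by brute-force enumeration (a small check, not part of the original answer):

from collections import Counter
from fractions import Fraction

counts = Counter(abs(i - j) for i in range(1, 7) for j in range(1, 7))
for diff in sorted(counts):
    print(diff, Fraction(counts[diff], 36))
# 0: 1/6, 1: 5/18, 2: 2/9, 3: 1/6, 4: 1/9, 5: 1/18 -- clearly not uniform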
{ "source": [ "https://stats.stackexchange.com/questions/421676", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/220452/" ] }
421,691
I want to run an OLS regression for time series data in R, but my data is short: it is annual, from 2000-2009. There are only 9 annual observations (2000-2009), and I collected data on inflation and the exchange rate to see how these variables over 2000-2009 affect GDP growth over 2000-2018. Is it OK to run a regression on this?
{ "source": [ "https://stats.stackexchange.com/questions/421691", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/256128/" ] }
421,935
How should one understand the keys, queries, and values that are often mentioned in attention mechanisms? I've tried searching online, but all the resources I find only speak of them as if the reader already knows what they are. Judging by the paper written by Bahdanau ( Neural Machine Translation by Jointly Learning to Align and Translate ), it seems as though values are the annotation vector $h$ but it's not clear as to what is meant by "query" and "key." The paper that I mentioned states that attention is calculated by $$c_i = \sum^{T_x}_{j = 1} \alpha_{ij} h_j$$ with $$ \begin{align} \alpha_{ij} & = \frac{e^{e_{ij}}}{\sum^{T_x}_{k = 1} e^{e_{ik}}} \\\\ e_{ij} & = a(s_{i - 1}, h_j) \end{align} $$ Where are people getting the key, query, and value from these equations? Thank you.
The key/value/query formulation of attention is from the paper Attention Is All You Need . How should one understand the queries, keys, and values The key/value/query concept is analogous to retrieval systems. For example, when you search for videos on Youtube, the search engine will map your query (text in the search bar) against a set of keys (video title, description, etc.) associated with candidate videos in their database, then present you the best matched videos ( values ). The attention operation can be thought of as a retrieval process as well. As mentioned in the paper you referenced ( Neural Machine Translation by Jointly Learning to Align and Translate ), attention by definition is just a weighted average of values, $$c=\sum_{j}\alpha_jh_j$$ where $\sum \alpha_j=1$ . If we restrict $\alpha$ to be a one-hot vector, this operation becomes the same as retrieving from a set of elements $h$ with index $\alpha$ . With the restriction removed, the attention operation can be thought of as doing "proportional retrieval" according to the probability vector $\alpha$ . It should be clear that $h$ in this context is the value . The difference between the two papers lies in how the probability vector $\alpha$ is calculated. The first paper (Bahdanau et al. 2015) computes the score through a neural network $$e_{ij}=a(s_i,h_j), \qquad \alpha_{i,j}=\frac{\exp(e_{ij})}{\sum_k\exp(e_{ik})}$$ where $h_j$ is from the encoder sequence, and $s_i$ is from the decoder sequence. One problem of this approach is, say the encoder sequence is of length $m$ and the decoding sequence is of length $n$ , we have to go through the network $m*n$ times to acquire all the attention scores $e_{ij}$ . A more efficient model would be to first project $s$ and $h$ onto a common space, then choose a similarity measure (e.g. dot product) as the attention score, like $$e_{ij}=f(s_i)g(h_j)^T$$ so we only have to compute $g(h_j)$ $m$ times and $f(s_i)$ $n$ times to get the projection vectors and $e_{ij}$ can be computed efficiently by matrix multiplication. This is essentially the approach proposed by the second paper (Vaswani et al. 2017), where the two projection vectors are called query (for decoder) and key (for encoder), which is well aligned with the concepts in retrieval systems. (There are later techniques to further reduce the computational complexity, for example Reformer , Linformer .) How are the queries, keys, and values obtained The proposed multihead attention alone doesn't say much about how the queries, keys, and values are obtained, they can come from different sources depending on the application scenario. $$ \begin{align}\text{MultiHead($Q$, $K$, $V$)} & = \text{Concat}(\text{head}_1, \dots, \text{head}_h) W^{O} \\ \text{where head$_i$} & = \text{Attention($QW_i^Q$, $KW_i^K$, $VW_i^V$)} \end{align}$$ Where the projections are parameter matrices: $$ \begin{align} W_i^Q & \in \mathbb{R}^{d_\text{model} \times d_k}, \\ W_i^K & \in \mathbb{R}^{d_\text{model} \times d_k}, \\ W_i^V & \in \mathbb{R}^{d_\text{model} \times d_v}, \\ W_i^O & \in \mathbb{R}^{hd_v \times d_{\text{model}}}. \end{align}$$ For unsupervised language model training like GPT , $Q, K, V$ are usually from the same source, so such operation is also called self-attention. For the machine translation task in the second paper, it first applies self-attention separately to source and target sequences, then on top of that it applies another attention where $Q$ is from the target sequence and $K, V$ are from the source sequence. 
For recommendation systems, $Q$ can be from the target items, $K, V$ can be from the user profile and history.
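For concreteness, here is a minimal NumPy sketch of single-head scaled dot-product attention in the spirit of the equations above (my own simplification: random toy inputs, no learned projection matrices, no masking, and no multiple heads):

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Q: (n, d_k) queries, K: (m, d_k) keys, V: (m, d_v) values
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # similarity of each query with each key
    alpha = softmax(scores, axis=-1)          # each row is a probability vector
    return alpha @ V                          # c_i = sum_j alpha_ij v_j

rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))       # 2 queries
K = rng.normal(size=(5, 4))       # 5 keys
V = rng.normal(size=(5, 3))       # 5 values
print(attention(Q, K, V).shape)   # (2, 3): one weighted average of the values per query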
{ "source": [ "https://stats.stackexchange.com/questions/421935", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/211707/" ] }
423,735
I've been reading an EIA report and this plot captured my attention. I now want to be able to create the same type of plot. It shows the evolution of energy productivity between two years (1990 and 2015) and adds the change value between these two periods. What is the name of this type of plot? How can I create the same plot (with different countries) in Excel?
Some call it a (horizontal) lollipop plot with two groups. Here is how to make this plot in Python using matplotlib and seaborn (only used for the style), adapted from https://python-graph-gallery.com/184-lollipop-plot-with-2-groups/ and as requested by the OP in the comments.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import io

sns.set(style="whitegrid")  # set style

data = io.StringIO(""""Country" 1990 2015
"Russia" 71.5 101.4
"Canada" 74.4 102.9
"Other non-OECD Europe/Eurasia" 60.9 135.2
"South Korea" 127.0 136.2
"China" 58.5 137.1
"Middle East" 170.9 158.8
"United States" 106.8 169.0
"Australia/New Zealand" 123.6 170.9
"Brazil" 208.5 199.8
"Japan" 181.0 216.7
"Africa" 185.4 222.0
"Other non-OECD Asia" 202.7 236.0
"OECD Europe" 173.8 239.9
"Other non-OECD Americas" 193.1 242.3
"India" 173.8 260.6
"Mexico/Chile" 221.1 269.8""")

df = pd.read_csv(data, sep=r"\s+", quotechar='"')
df = df.set_index("Country").sort_values("2015")
df["change"] = df["2015"] / df["1990"] - 1

plt.figure(figsize=(12, 6))
y_range = np.arange(1, len(df.index) + 1)
colors = np.where(df['2015'] > df['1990'], '#d9d9d9', '#d57883')
plt.hlines(y=y_range, xmin=df['1990'], xmax=df['2015'], color=colors, lw=10)
plt.scatter(df['1990'], y_range, color='#0096d7', s=200, label='1990', zorder=3)
plt.scatter(df['2015'], y_range, color='#003953', s=200, label='2015', zorder=3)

# annotate each country with its percentage change between the two years
for (_, row), y in zip(df.iterrows(), y_range):
    plt.annotate(f"{row['change']:+.0%}", (max(row["1990"], row["2015"]) + 4, y - 0.25))

plt.legend(ncol=2, bbox_to_anchor=(1., 1.01), loc="lower right", frameon=False)
plt.yticks(y_range, df.index)
plt.title("Energy productivity in selected countries and regions, 1990 and 2015\nBillion dollars GDP per quadrillion BTU", loc='left')
plt.xlim(50, 300)
plt.gcf().subplots_adjust(left=0.35)
plt.tight_layout()
plt.show()
{ "source": [ "https://stats.stackexchange.com/questions/423735", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/91060/" ] }
427,418
Why is autocorrelation so important? I've understood the principle of it (I guess..) but as there are also examples where no autocorrelation occurs I wonder: Isn't everything in nature somehow autocorrelated? The last aspect is more aiming at a general understanding of the autocorrelation itself because, as I mentioned, isn't every state in the universe dependent on the previous one?
An attempt at an answer. Autocorrelation is no different than any other relationship between predictors. It's just that the predictor and the dependent variable happen to be the same time series, just lagged. isn't every state in the universe dependent on the previous one? Yes indeed. Just as every object's state in the universe depends on every other object's, via all kinds of physical forces. The question just is whether the relationship is strong enough to be detectable, or strong enough to help us in predicting states. And the very same thing applies to autocorrelation. It's always there. The question is whether we need to model it, or whether modeling it just introduces additional uncertainty (the bias-variance trade-off), making us worse off than not modeling it. An example from my personal work: I forecast supermarket sales. My household's consumption of milk is fairly regular. If I haven't bought any milk in three or four days, chances are high I'll come in today or tomorrow to buy milk. If the supermarket wants to forecast my household's demand for milk, they should by all means take this autocorrelation into account. However, I am not the only customer in my supermarket. There are maybe another 2,000 households that buy their groceries there. Each one's milk consumption is again autocorrelated. But since everyone's rate of consumption is different, the autocorrelation at the aggregate is so much attenuated that it may not make sense to model it any more. It has disappeared into the general daily demand, i.e., the intercept. And since the supermarket doesn't care who it sells milk to, it will model aggregate demand, and probably not include autocorrelation. (Yes, there is intra-weekly seasonality. Which is a kind of autocorrelation, but it really depends on the day of the week, not on the demand on the same weekday one week earlier, so it's more a weekday effect than seasonal autocorrelation.)
{ "source": [ "https://stats.stackexchange.com/questions/427418", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/163146/" ] }
428,169
I do not understand the intuition behind why the median is the best estimate if we are going to judge prediction accuracy using the Mean Absolute Error. Let's say you have a random variable $X$ and you want to predict what the next $X$ is. Let's denote your prediction as d. Under Mean Squared Error, which is: $\text{MSE} = (X - d)^2$ We know that expected MSE, or sum of MSEs, is minimized when $d$ is equal to the mean or $E[X]$ . This makes sense intuitively. The best predictor of a random variable is its mean. However, under Mean Absolute Error, which is: $\text{MAE} = |X - d|$ The expected MAE or sum of MAEs is minimized when $d$ is equal to the median of the random variable. While the book I am reading has a fancy proof to show why this is the case, intuitively I don't understand why the median would be the best predictor. I also don't understand why the mean (or median) wouldn't be the best choice for both.
Here is an intuitive argument with light math. Let's say we have a $d$ claiming to minimize the MAE of points $x_i$ , and that there are $n_l$ and $n_r$ points to its left and right. If we move $d$ slightly left, i.e. by an amount $\Delta$ , then all the absolute differences on the left will decrease by $\Delta$ , and all the absolute differences on the right will increase by $\Delta$ , leading to a net decrease of $(n_l-n_r)\Delta$ in MAE. If $n_l\neq n_r$ , $d$ always has an incentive to move either left or right, because one of the two moves decreases the MAE. For example, if $n_r<n_l$ , then we move left, because the net decrease in MAE is $(n_l-n_r)\Delta$ , and if $n_l<n_r$ we move right, because the net decrease will be $(n_r-n_l)\Delta$ . This continues until we reach $n_l=n_r$ , which is satisfied by the median .
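As a quick numerical check (a minimal R sketch; the skewed sample and the grid of candidate predictions are arbitrary choices for illustration), the value of $d$ minimizing the mean absolute error coincides with the sample median, while the mean minimizes the mean squared error:

set.seed(1)
x <- rexp(50)                                       # a deliberately skewed sample
d <- seq(min(x), max(x), length.out = 10000)        # grid of candidate predictions
mae <- sapply(d, function(dd) mean(abs(x - dd)))
mse <- sapply(d, function(dd) mean((x - dd)^2))
c(best_d_for_MAE = d[which.min(mae)], median = median(x))  # essentially equal
c(best_d_for_MSE = d[which.min(mse)], mean = mean(x))      # essentially equal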
{ "source": [ "https://stats.stackexchange.com/questions/428169", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/245759/" ] }
430,085
This post says A PDF is used to specify the probability of the random variable falling within a particular range of values, as opposed to taking on any one value. Is it true? this is the PDF of the standard normal distribution. $$\varphi(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2}$$ plug in x=0 into the formula above, I can get the probability of taking on one value. Does that post mean the PDF could be used both for point and interval?
The quoted statement is true. When you plug $x=0$ into the PDF, you do NOT get the probability of taking that particular value. The resulting number is a probability density , which is not a probability. The probability of taking exactly $x=0$ is zero (consider the infinite number of similarly likely values in the tiny interval $x\in[0,10^{-100}]$ ). To further convince yourself that this $\varphi(x)$ cannot be a probability, consider decreasing the standard deviation of your normal distribution from $\sigma = 1$ to $\sigma = \frac{1}{100}$ . Now $\varphi(0)=\frac{100}{\sqrt{2\pi}} \approx 39.9$ - much more than one. Not a probability.
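A minimal R sketch of the distinction (the particular values are chosen only for illustration): a density evaluated at a point can exceed 1, while the probability of an interval, obtained by integrating the density, cannot.

dnorm(0, mean = 0, sd = 1)       # 0.3989...: a density, not a probability
dnorm(0, mean = 0, sd = 1/100)   # 39.89...: densities can exceed 1
pnorm(0.1) - pnorm(-0.1)         # P(-0.1 < X < 0.1) for a standard normal, about 0.08
integrate(dnorm, -0.1, 0.1)      # the same probability, obtained by integrating the density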
{ "source": [ "https://stats.stackexchange.com/questions/430085", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/250190/" ] }
430,184
What are their benefits? I know sparse parameters are a different story than sparse representations, but I want to know how each of these can benefit us and which one is more important than the other one.
{ "source": [ "https://stats.stackexchange.com/questions/430184", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/14306/" ] }
430,259
In mathematics, there are branches such as algebra, analysis, topology, etc. In machine learning there is supervised, unsupervised, and reinforcement learning. Within each of these branches, there are finer branches that further divide the methods. I am having trouble drawing a parallel with statistics. What would be the main branches of statistics (and sub-branches)? A perfect partition is likely not possible, but anything is better than a big blank map. Visual examples:
I find these classification systems extremely unhelpful and contradictory. For example: neural networks are a form of supervised learning; calculus is used in differential geometry; probability theory can be formalized as a part of set theory; and so on. There are no unambiguous "branches" of mathematics, and nor should there be of statistics.
{ "source": [ "https://stats.stackexchange.com/questions/430259", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/228809/" ] }
430,391
Are there 99 percentiles, or 100 percentiles? And are they groups of numbers, or divider lines, or pointers to individual numbers? I suppose the same question would apply for quartiles or any quantile. I have read that the index of a number at a particular percentile(p), given n items, is i = (p / 100) * n That suggests to me that there are 100 percentiles.. because supposing you have 100 numbers(i=1 to i=100), then each would have an index(1 to 100). If you had 200 numbers, there'd be 100 percentiles, but would each refer to a group of two numbers. Or 100 dividers excluding either the far left or far right divider 'cos otherwise you'd get 101 dividers. Or pointers to individual numbers so the first percentile would refer to the second number, (1/100)*200=2 And the hundredth percentile would refer to the 200th number (100/100)*200=200 I have sometimes heard of there being 99 percentiles though.. Google shows the oxford dictionary that says of percentile- "each of the 100 equal groups into which a population can be divided according to the distribution of values of a particular variable." and "each of the 99 intermediate values of a random variable which divide a frequency distribution into 100 such groups." Wikipedia says "the 20th percentile is the value below which 20% of the observations may be found" But does it actually mean "the value below or equal to which, 20% of the observations may be found" i.e. "the value for which 20% of the values are <= to it". If it were just < and not <=, then By that reasoning, the 100th percentile would be the value below which 100% of the values may be found. I have heard that as an argument that there can be no 100th percentile, because you can't have a number where there are 100% of the numbers below it. But I think maybe that argument that you can't have a 100th percentile is incorrect and is based an error that the definition of a percentile involves <= not <. (or >= not >). So the hundredth percentile would be the final number and would be >= 100% of the numbers.
Both of these senses of percentile , quartile , and so on are in widespread use. It’s easiest to illustrate the difference with quartiles: the “divider” sense — there are 3 quartiles, which are the values dividing the distribution (or sample) into 4 equal parts: 1 2 3 ---|---|---|--- (Sometimes this is used with max and min values included, so there are 5 quartiles numbered 0–4; note this doesn’t conflict with the numbering above, it just extends it.) the “bin” sense: there are 4 quartiles, the subsets into which those 3 values divide the distribution (or sample) 1 2 3 4 ---|---|---|--- Neither usage can reasonably be called “wrong”: both are used by many experienced practitioners, and both appear in plenty of authoritative sources (textbooks, technical dictionaries, and the like). With quartiles, the sense being used is usually clear from context: speaking of a value in the third quartile can only be the “bin” sense, while speaking of all values below the third quartile most likely means the “divider” sense. With percentiles, the distinction is more often unclear, but it’s also not so significant for most purposes, since 1% of a distribution is so small — a narrow strip is approximately a line. Speaking of everyone above the 80th percentile might mean the top 20% or the top 19%, but in an informal context that’s not a major difference, and in rigorous work, the meaning needed should be presumably clarified by the rest of the context. (Parts of this answer are adapted from https://math.stackexchange.com/questions/1419609/are-there-3-or-4-quartiles-99-or-100-percentiles , which also gives quotations + references.)
{ "source": [ "https://stats.stackexchange.com/questions/430391", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/261785/" ] }
431,370
It seems very counter intuitive to many people that a given diagnostic test with very high accuracy (say 99%) can generate massively more false positives than true positives in some situations, namely where the population of true positives is very small compared to whole population. I see people making this mistake often e.g. when arguing for wider public health screenings, or wider anti-crime surveillance measures etc but I am at a loss for how to succinctly describe the mistake people are making. Does this phenomenon / statistical fallacy have a name? Failing that has anyone got a good, terse, jargon free intuition/example that would help me explain it to a lay person. Apologies if this is the wrong forum to ask this. If so please direct me to a more appropriate one.
Yes, there is. Generally it is termed the base rate fallacy or, more specifically, the false positive paradox . There is even a Wikipedia article about it: see here
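As a quick illustration of why the intuition fails (a minimal R sketch; the 99% sensitivity/specificity and the 0.1% prevalence are hypothetical numbers chosen only to mirror the scenario in the question), Bayes' rule shows that a rare condition makes most positives false positives:

prevalence  <- 0.001   # 1 in 1000 people truly have the condition
sensitivity <- 0.99    # P(positive test | condition)
specificity <- 0.99    # P(negative test | no condition)
p_positive <- sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
ppv <- sensitivity * prevalence / p_positive
ppv                    # about 0.09: roughly 9 out of 10 positives are false positives

Even a "99% accurate" test, applied to a population where only 1 in 1000 is affected, flags about ten healthy people for every affected person it finds.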
{ "source": [ "https://stats.stackexchange.com/questions/431370", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/262594/" ] }
431,851
According to this Wikipedia entry, "Mu was derived from the Egyptian hieroglyphic symbol for water , which had been simplified by the Phoenicians and named after their word for water". So, my question is, why the early statisticians decided to use the letter Mu to denote the population mean / expected value? Was it because that letter was not occupied for denoting other concepts at the time or there was a deeper idea behind the choice?
The letters that derive from $\mu$ include the Roman M and the Cyrillic М. Hence, considering that the word "mean" starts with an $m$ , the choice seems relatively straightforward, given an already existing tradition of using Greek letters as mathematical abbreviations. To satisfy certain individuals' craving for actual historical research, and assuming that the webpage here is credible, I can now confirm that the assumption that it comes from English turns out to be valid. Fisher wrote the normal density with $m$ for the mean (see section 12 of his Statistical Methods for Research Workers) until the mid-1930s, when he replaced $m$ with $\mu$ . The new symbol appears in The Fiducial Argument in Statistical Inference (1935) and it went into the 1936 (sixth) edition of the Statistical Methods for Research Workers .
{ "source": [ "https://stats.stackexchange.com/questions/431851", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/49251/" ] }
432,866
Are there any well-known statistical studies that were originally published and thought to be valid, but later had to be thrown out due to a confounding variable that wasn't taken into account? I'm looking for something easy to understand that could be explained to and appreciated by a quantitative literacy class that has zero pre-requisites.
Coffee Drinking & Lung Cancer My favorite example is that supposedly, "coffee drinkers have a greater risk of lung cancer", despite most coffee drinkers... well... drinking coffee, rather than inhaling it. There have been various studies about this, but the consensus remains that studies with this conclusion usually just have a larger proportion of smoking coffee drinkers, than non-smoking coffee drinkers. In other words, the effect of smoking confounds the effect of coffee consumption , if not included in the model. The most recent article on this I could find is a meta analysis by Vania Galarraga and Paolo Boffetta (2016). $^\dagger$ The Obesity Paradox Another example that plagues clinical research, is the claim that obesity can be beneficial for certain diseases. Specifically, many articles, still to this day (just do a quick search for obesity paradox on pubmed and be amazed), claim the following: While a higher BMI increases the risk of diabetes, cardiovascular disease and certain types of cancer, once a patient already has the disease, a higher BMI is associated with lower rates of major adversarial events or death. Why does this happen? Obesity is defined as excess fat negatively affecting health, yet we classify obesity based on BMI. BMI is just calculated as: $$\text{BMI} = \frac{\text{weight in kg}}{(\text{height in m})^2},$$ so the most direct way to combat obesity is through weight loss (or by growing taller somehow). Regimes that focus on loss of weight rather than fat , tend to result in a proportionally large loss of muscle. This is likely what causes lower BMI to be associated with a higher rate of major adversarial events. Because many studies do not include measures of body fat (percentage), but only BMI as a proxy, the amount of body fat confounds the effect of BMI on health. A nice review of this phenomenon was written by Steven G. Chrysant (2018). $^\ddagger$ He ends with: [B]ased on the recent evidence, the obesity paradox is a misnomer and could convey the wrong message to the general public that obesity is not bad. Followed by: Journals [should] no longer accept articles about the 'obesity paradox'. $\dagger$ : Vania Galarraga and Paolo Boffetta (2016): Coffee Drinking and Risk of Lung Cancer—A Meta-Analysis. Cancer Epidemiol Biomarkers Prev June 1 2016 (25) (6) 951-957; DOI: 10.1158/1055-9965.EPI-15-0727 $\ddagger$ : Steven G. Chrysant (2018): Obesity is bad regardless of the obesity paradox for hypertension and heart disease. J Clin Hypertens (Greenwich). 2018 May;20(5):842-846. doi: 10.1111/jch.13281. Epub 2018 Apr 17. Examples of (poor) studies claiming to have demonstrated the obesity paradox: McAuley et al. (2018): Exercise Capacity and the Obesity Paradox in Heart Failure: The FIT (Henry Ford Exercise Testing) Project Weatherald et al. (2018): The association between body mass index and obesity with survival in pulmonary arterial hypertension Patel et al. (2018): The obestiy paradox: the protective effect of obesity on right ventricular function using echocardiographic strain imaging in patients with pulmonary hypertension Articles refuting the obesity paradox as a mere confounding effect of body fat: Lin et al. (2017): Impact of Misclassification of Obesity by Body Mass Index on Mortality in Patients With CKD Leggio et al. (2018): High body mass index, healthy metabolic profile and low visceral adipose tissue: The paradox is to call it obesity again Medina-Inojosa et al. 
(2018): Association Between Adiposity and Lean Mass With Long-Term Cardiovascular Events in Patients With Coronary Artery Disease: No Paradox Flegal & Ioannidis (2018): The Obesity Paradox: A Misleading Term That Should Be Abandoned Articles about the obesity paradox in cancer: Cespedes et al. (2018): The Obesity Paradox in Cancer: How Important Is Muscle? Caan et al. (2018): The Importance of Body Composition in Explaining the Overweight Paradox in Cancer-Counterpoint
{ "source": [ "https://stats.stackexchange.com/questions/432866", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/263559/" ] }
432,868
I'm working in a scenario in which forecasters make probabilistic forecasts of events with binary outcomes. I know how to calculate a Brier score and can easily do that for any of my forecasters. However, I notice that the Wikipedia page doesn't mention anything about Confidence Intervals, and googling produced various hits, but as far as I could tell no clear answer. If it's not reasonable to put a confidence interval around an individual forecaster's Brier Score, I'd be interested to know why. There's a related and unanswered question on this site here . My question is different as I'm mostly interested in the formula and some intuition behind it. However a description of how to do it in a program such as R would also be appreciated.
{ "source": [ "https://stats.stackexchange.com/questions/432868", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/9162/" ] }
432,876
Why ought IQRs, and anything that relies on them like boxplots, be used when they ignore data? I replicated Wikipedia's graphs .
{ "source": [ "https://stats.stackexchange.com/questions/432876", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/-1/" ] }
433,314
For a uniformly distributed variable between 0 and 1 generated using rand(1,10000) this returns 10,000 random numbers between 0 and 1. If you take the mean, it is 0.5, while if you take the log of that sample, then take the mean of the result: mean(log(rand(1,10000))) I would expect that the result to be $\log 0.5=-.6931$ , but instead the answer is -1. Why is this so?
Consider two values symmetrically placed around $0.5$ - like $0.4$ and $0.6$ or $0.25$ and $0.75$ . Their logs are not symmetric around $\log(0.5)$ . $\log(0.5-\epsilon)$ is further from $\log(0.5)$ than $\log(0.5+\epsilon)$ is. So when you average them you get something less than $\log(0.5)$ . Similarly, if you take a teeny interval around a collection of such pairs of symmetrically placed values, you still get the average of the logs of each pair being below $\log(0.5)$ ... and it's a simple matter to move from that observation to the definition of the expectation of the log. Indeed, usually, $E(t(X))\neq t(E(X))$ unless $t$ is linear.
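A one-line check in R (the sample size is arbitrary): for $U \sim \text{Uniform}(0,1)$ the exact value is $E[\log U] = \int_0^1 \log u \, du = -1$ , which the simulation approaches, and it lies below $\log E[U] = \log 0.5 \approx -0.693$ , exactly as the asymmetry argument above predicts (this is Jensen's inequality for the concave function $\log$ ).

set.seed(1)
u <- runif(1e6)
mean(log(u))   # close to -1, the exact value of E[log U]
log(mean(u))   # close to log(0.5) = -0.693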
{ "source": [ "https://stats.stackexchange.com/questions/433314", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/261626/" ] }
433,964
I am wondering why we care so much about overfitting. We know that statistical models are tools to tell us some information, but they are not even fully accurate.
The quote by Box is along the lines of "All models are wrong, but some are useful." If we have bad overfitting, our model will not be useful in making predictions on new data.
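A minimal R sketch of that point (the data-generating process and the polynomial degrees are arbitrary illustrative choices, not from the original answer): the more flexible model fits the training data better but typically predicts new data from the same process much worse.

set.seed(1)
n <- 30
x <- runif(n); y <- sin(2 * pi * x) + rnorm(n, sd = 0.3)                    # training data
x_new <- runif(1000); y_new <- sin(2 * pi * x_new) + rnorm(1000, sd = 0.3)  # new data
rmse <- function(fit, x, y) sqrt(mean((y - predict(fit, data.frame(x = x)))^2))
fit_simple  <- lm(y ~ poly(x, 3))    # modest flexibility
fit_overfit <- lm(y ~ poly(x, 20))   # enough flexibility to chase the noise
c(train_simple = rmse(fit_simple, x, y), train_overfit = rmse(fit_overfit, x, y))                # the overfit model wins here
c(test_simple = rmse(fit_simple, x_new, y_new), test_overfit = rmse(fit_overfit, x_new, y_new))  # but loses badly here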
{ "source": [ "https://stats.stackexchange.com/questions/433964", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/207267/" ] }
437,174
Several places ( 1 , 2 , 3 ) quote someone named Gail Gasram as saying "Nothing is random, only uncertain" but a Google search turns up no info, just more places with this quote! Generally, it's in the context of random number generation, such as the Dieharder suite. Who is Gail Gasram?
It looks like "Gail Gasram" is "Marsaglia G" ( George Marsaglia 's surname and first initial) spelled backwards.
{ "source": [ "https://stats.stackexchange.com/questions/437174", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/117731/" ] }
438,060
Does this code demonstrate the central limit theorem? This is not a homework assignment! Au contraire, I'm a faculty teaching some methods to non-stats students. library(tidyverse) #Make fake data population<-rnorm(1000000, mean=100, sd=10) #Draw 100 samples of size 5 map(1:100, ~sample(population, size=5)) %>% #calculate their mean map(., mean) %>% #unlist unlist() %>% #draw histogram of sample means hist(, xlim=c(80,120)) #Repeat but with sample size 500 map(1:100, ~sample(population, size=500)) %>% map(., mean) %>% unlist() %>% hist(., xlim=c(80,120)) #Repeat but with sample size 1000 map(1:100, ~sample(population, size=1000)) %>% map(., mean) %>% unlist() %>% hist(., xlim=c(80,120))
Here's a complete study in a few lines. For a given set of sample sizes n and underlying distribution r , it generates n.sim independent samples of each size from that distribution, standardizes the empirical distribution of their means, plots the histogram, and overplots the standard Normal density in red. The CLT says that when the underlying distribution has finite variance, the red curve more and more closely approximates the histogram. The first three rows illustrate the process for sample sizes of $5, 20, 100, 500$ and underlying Normal, Gamma, and Bernoulli distributions. As sample size increases the approximation grows noticeably better. The bottom row uses a Cauchy distribution. Because a key assumption of the CLT (finite variance) does not hold in this case, its conclusion doesn't hold, which is pretty clear. Execution time is about one second.

f <- function(n, r=rnorm, n.sim=1e3, name="Normal", ...) {
  sapply(n, function(n) {
    x <- scale(colMeans(matrix(r(n*n.sim, ...), n)))  # Sample, take mean, standardize
    hist(x, sub=name, main=n, freq=FALSE, breaks=30)  # Plot distribution
    curve(dnorm(x), col="Red", lwd=2, add=TRUE)       # Compare to standard Normal
  })
}
n <- c(5,20,100,500)
mfrow.old <- par(mfrow=c(4,length(n)))
f(n)
f(n, rgamma, shape=1/2, name="Gamma(1/2)")
f(n, function(n) runif(n) < 0.9, name="Bernoulli(9/10)")
f(n, rt, df=1, name="Cauchy")
par(mfrow=mfrow.old)
{ "source": [ "https://stats.stackexchange.com/questions/438060", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/43288/" ] }
438,928
I am interested in an inferential test available in R which tests whether Pearson's $r = 1$ instead of whether $r = 0$ . It would be good if the test allowed the correlation matrix and number of participants as input variables.
I would argue that there is not any testing to do. If the sample correlation is not 1, then you reject $H_0: \rho=1$ with certainty. Having a correlation of 1 means that the points cannot deviate from a diagonal line the way that they can when $\vert \rho \vert < 1$ . EDIT

set.seed(2019)
x <- rexp(1000)
y <- 3*x
plot(x, y)
V <- rep(NA, 10000)
for (i in 1:length(V)) {
  print(i)
  idx <- sample(seq(1, length(x), 1), replace = T)  # bootstrap resample of the pairs
  V[i] <- cor(x[idx], y[idx])
}
summary(V)

With the points of the scatterplot locked to the diagonal line $y=3x$ , every single sample correlation is 1. You can try this out with other distributions and sample sizes. Where this gets interesting---and I'm not completely sure of the math at the population level---is when I set a Gaussian copula to have a parameter of 1.

library(copula)
set.seed(2019)
gc <- ellipCopula("normal", param = 1, dim = 2) #, dispstr = "un")
norm_exp <- mvdc(gc, c("norm", "exp"), list(list(mean = 0, sd = 1), list(rate = 1)))
V <- rep(NA, 10000)
for (i in 1:length(V)) {
  print(i)
  D_ne <- rMvdc(1000, norm_exp)
  x <- D_ne[, 1]
  y <- D_ne[, 2]
  V[i] <- cor(x, y)  # correlate the freshly drawn sample
}
plot(x, y)
summary(V)

I still don't think this relationship gives a population Pearson correlation of 1 (the relationship is perfectly monotonic but not linear), but this result surprised me. I expected another plot of a straight line. To defend my assertion that the population Pearson correlation is not 1, I refer to theorem 4.5.7 on pg. 172 of the second edition of Casella & Berger's Statistical Inference : " $\vert \rho_{XY}\vert=1$ if and only if there exist numbers $a\ne0$ and $b$ such that $P(Y = aX+b)=1$ ." Since the relationship between my $X$ (the normal variable) and $Y$ (exponential) is nonlinear, there can be no such $a$ and $b$ . Casella, George, and Roger L. Berger. Statistical Inference. 2nd ed., Cengage Learning & Wadsworth, 2002.
{ "source": [ "https://stats.stackexchange.com/questions/438928", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/137404/" ] }
440,004
I was hoping someone could provide clarity surrounding the following scenario. You are asked "What is the expected number of observed heads and tails if you flip a fair coin 1000 times". Knowing that coin flips are i.i.d. events, and relying on the law of large numbers you calculate it to be: $$N_{heads} = 500 \; N_{tails} = 500$$ Now, let us have observed/realized the first 500 flips to all be heads . We want to know the updated expected number of realizations of the remaining 500 flips. Because the first 500 events have been realized and they do not effect the underlying physical coin flipping process, we know that the expected number of heads and tails of the remaining 500 flips are: $$N_{heads} = 250 \; N_{tails} = 250$$ So, here is my question/confusion: I understand that each coin flip is independent and that any single individual coin flip has a probability of $\frac{1}{2}$ coming up heads. However, based on the law of large numbers we know that the (if we value tails as 0 and heads as 1) mean of the tosses will approach $0.5$ as the number of tosses approaches $\infty$ . So, based on that, if we have observed 500 heads in a row, why do we not statistically expect to realize more tails going forward? I fully realize the following thought is incorrect, but it feels like we are (statistically) due for a tails and that the probability of tails should be raised and heads lowered. Since this is not the case, it feels as though this is conflicting with the original expectation of $N_{heads} = 500$ and $N_{tails} = 500$ . Again, I realize that this thinking is incorrect, but I am hoping someone can help me understand why this past information (500 realizations of heads in a row) does not provide any new, updated information that updates the probability for the remaining flips? Clearly the coin does not know that it just came up heads $500$ times, so is the correct way to think about this that the law of large numbers doesn't imply that in the following 500 flips tails is more likely, but rather that as $N \rightarrow \infty$ we expect 50% of realizations to be heads and 50% to be tails. In which case my error in reasoning is based on applying a limit theorem that applies in the asymptote to a preasymptotic situation? I also feel like this has to deal with a bit of confusion between single events (a single coin toss coming up heads), and the collective action of a set of events (1000 coin tosses) that exhibit nonrandom properties. After searching I came across a wonderful quote by Kolmogorov $^1$ : "In reality, however, the epistemological value of the theory of probability is revealed only by limit theorems. ... In fact, all epistemological value of the theory of probability is based on this: that large-scale random phenomena in their collective action create strict, nonrandom regularity. The very concept of mathematical probability would be fruitless if it did not find its realization in the frequency of occurrence of events under large-scale repetition and uniform conditions." I believe this quote clears up some of my confusion, but if anyone could elaborate on why realizations (based on a known statistical process) cannot be used to update subsequent probabilities, I would greatly appreciate it! B. V. Gnedenko and A. N. Kolmogorov: Limit distributions for sums of independent random variables. Addison-Wesley Mathematics Series
If you "know" that the coin is fair then we still expect the long run proportion of heads to tend to $0.5$ . This is not to say that we should expect more (than 50%) of the next flips to be tails, but rather that the initial $500$ flips become irrelevant as $n\rightarrow\infty$ . A streak of $500$ heads may seem like a lot (and practically speaking it is), but if $250$ of the next $500$ flips are heads then the sample proportion becomes $$\hat p = \frac{500 + 250}{1000} = 0.75.$$ if $250$ of the next $500$ flips are heads then... $$\hat p = \frac{500+250+250}{1500} \approx 0.67$$ if $100000$ of the next $200000$ flips are heads then... $$\hat p = \cdots \approx 0.501.$$ This is the Law of Large Numbers. On the other hand... if I were to flip a coin in real life and see $500$ heads in a row, I would start to seriously doubt that the coin is actually fair. (Interesting side note, it is hard (impossible?) to actually bias a coin in real life. The only realistic values of $p$ are $0$ , $0.5$ and $1$ , but we will ignore this for the sake of an answer). To account for this possibility, we could use a Bayesian procedure from the outset. Rather than assume $p=1/2$ , suppose we specify the prior distribution $$p \sim \text{Beta}(\alpha, \alpha).$$ This is a symmetric distribution, which encodes my a priori belief that the coin is fair, i.e. $E(p) = \frac{1}{2}$ . How strongly I believe in this notion is specified through the choice of $\alpha$ , since $Var(p) = \frac{1}{8(\alpha+0.5)}$ . $\alpha = 1$ corresponds to a uniform prior over $(0,1)$ . $\alpha = 0.5$ is Jeffrey's prior - another popular non-informative choice. Choosing a large value of $\alpha$ gives more credence to the belief that $p=1/2$ . In fact, setting $\alpha = \infty$ implies that $Pr(p=1/2) = 1$ . Applying Bayes rule directly, the posterior distribution for $p$ is $$p|y \sim \text{Beta}(\alpha+y, \alpha+n-y)$$ where $y = \text{number of heads}$ and $n = \text{number of flips}$ . For instance, if you choose $\alpha = 1$ and observe $n=y=500$ , the posterior distribution becomes $\text{Beta}(501, 1)$ and $$E(p|y) = \frac{\alpha + y}{2\alpha + n} = \frac{501}{502} \approx 0.998$$ indicating that I should bet on heads for the next flip (since it is highly improbable that the coin is fair). This updating process can be applied after each flip, using the posterior distribution after $n$ flips as the prior for flip $n+1$ . If it turns out that the $500$ heads was just a (astronomically) improbable event and the coin really is fair, the posterior distribution will eventually capture this (using a similar argument to the previous section). Intuition for choosing $\alpha$ : To help understand the role of $\alpha$ in the Bayesian procedure, we can use the following argument. The mean of the posterior distribution is equivalent to the maximum likelihood estimate of $p$ , if we were to augment the data with a series of $2\alpha$ "hypothetical flips", where $\alpha$ of these flips are heads and $\alpha$ of these flips are tails. Choosing $\alpha=1$ (as we did above) suggests that the augmented data is $501$ heads and $1$ tails. Choosing a larger value of $\alpha$ suggests that more evidence is required to change our beliefs. Still, for any finite choice of $\alpha$ , these "hypothetical flips" will eventually become irrelevant as $n\rightarrow\infty$ .
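A minimal R sketch of the updating step described above, using the uniform prior ( $\alpha = 1$ ) and simply reproducing the worked example of 500 heads in 500 flips:

alpha <- 1                   # Beta(1, 1) prior, i.e. uniform on (0, 1)
y <- 500; n <- 500           # observed: 500 heads in 500 flips
post_a <- alpha + y          # posterior is Beta(alpha + y, alpha + n - y)
post_b <- alpha + n - y
post_a / (post_a + post_b)   # posterior mean 501/502, about 0.998
pbeta(0.5, post_a, post_b)   # posterior P(p <= 1/2) is essentially zero

So after such a streak, this prior/posterior combination puts essentially no weight on the coin being fair, while any finite choice of $\alpha$ still lets later data dominate eventually.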
{ "source": [ "https://stats.stackexchange.com/questions/440004", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/191079/" ] }
442,337
I just rigorously learned that OLS is a special case of MLE. It surprises me because the popular and "reliable" sources such as researchgate and this do not mention this most important connection between MLE and OLS! I am not sure if there are any simple regression or estimation method that does not belong to MLE.
Least squares is indeed maximum likelihood if the errors are iid normal, but if they aren't iid normal, least squares is not maximum likelihood. For example if my errors were logistic, least squares wouldn't be a terrible idea but it wouldn't be maximum likelihood. Lots of estimators are not maximum likelihood estimators; while maximum likelihood estimators typically have a number of useful and attractive properties they're not the only game in town (and indeed not even always a great idea). A few examples of other estimation methods would include method of moments (this involves equating enough sample and population moments to solve for parameter estimates; sometimes this turns out to be maximum likelihood but usually it doesn't) For example, equating first and second moments to estimate the parameters of a gamma distribution or a uniform distribution; not maximum likelihood in either case. method of quantiles (equating sufficient sample and population quantiles to solve for parameter estimates; occasionally this is maximum likelihood but usually it isn't), minimizing some other measure of lack of fit than $-\log\mathcal{L}$ (e.g. minimum chi-square, minimum K-S distance). With fitting linear regression type models, you could for example look at robust regression (some of which do correspond to ML methods for some particular error distribution but many of which do not). In the case of simple linear regression, I show an example of two methods of fitting lines that are not maximum likelihood here - there estimating slope by setting to 0 some other measure of correlation (i.e. other than the usual Pearson) between residuals and the predictor. Another example would be the Tukey's resistant line/Tukey's three group line (e.g. see ?line in R). There are many other possibilities, though many of them don't generalize readily to the multiple regression situation.
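As a small illustration of line fits that are not maximum likelihood under a normal model (a minimal R sketch; the heavy-tailed data-generating setup is an arbitrary example, not from the answer above), compare ordinary least squares with Tukey's resistant (three-group) line, which base R provides as line() :

set.seed(42)
x <- 1:50
y <- 2 + 0.5 * x + rt(50, df = 2)  # heavy-tailed errors, so "least squares = ML" no longer holds
coef(lm(y ~ x))                    # least squares: ML only under iid normal errors
coef(line(x, y))                   # Tukey's resistant line: based on medians of three groups, not ML

MASS::rlm() is another example of a robust fit; as noted above, some robust methods correspond to ML under a particular error distribution and many do not.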
{ "source": [ "https://stats.stackexchange.com/questions/442337", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/164775/" ] }
442,352
In the context of machine learning, I often hear the term latent space, sometimes qualified with the word "high dimensional" or "low dimensional" latent space. I am a bit puzzled by this term (as it is almost never defined rigorously). Can someone please provide a definition or motivation to the concept of a latent space ?
Latent space refers to an abstract multi-dimensional space containing feature values that we cannot interpret directly, but which encodes a meaningful internal representation of externally observed events. Just as we, humans, have an understanding of a broad range of topics and the events belonging to those topics, latent space aims to provide a similar understanding to a computer through a quantitative spatial representation/modeling. The motivation to learn a latent space (set of hidden topics/ internal representations) over the observed data (set of events) is that large differences in observed space/events could be due to small variations in latent space (for the same topic). Hence, learning a latent space would help the model make better sense of observed data than from observed data itself, which is a very large space to learn from. Some examples of latent space are: 1) Word Embedding Space - consisting of word vectors where words similar in meaning have vectors that lie close to each other in space (as measured by cosine-similarity or euclidean-distance) and words that are unrelated lie far apart (Tensorflow's Embedding Projector provides a good visualization of word embedding spaces). 2) Image Feature Space - CNNs in the final layers encode higher-level features in the input image that allows it to effectively detect, for example, the presence of a cat in the input image under varying lighting conditions, which is a difficult task in the raw pixel space. 3) Topic Modeling methods such as LDA , PLSA use statistical approaches to obtain a latent set of topics from an observed set of documents and word distribution. ( PyLDAvis provides a good visualization of topic models) 4) VAEs & GANs aim to obtain a latent space/distribution that closely approximates the real latent space/distribution of the observed data. In all the above examples, we quantitatively represent the complex observation space with a (relatively simple) multi-dimensional latent space that approximates the real latent space of the observed data. The terms "high dimensional" and "low dimensional" help us define how specific or how general the kinds of features we want our latent space to learn and represent. High dimensional latent space is sensitive to more specific features of the input data and can sometimes lead to overfitting when there isn't sufficient training data. Low dimensional latent space aims to capture the most important features/aspects required to learn and represent the input data (a good example is a low-dimensional bottleneck layer in VAEs). If this answer helped, please don't forget to up-vote it :)
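As a rough, hedged illustration of a low-dimensional internal representation (this uses PCA - a simple linear technique not mentioned in the answer above - purely as a stand-in for the richer latent spaces learned by embeddings, VAEs and GANs), four observed measurements per iris flower can be compressed into two latent coordinates that retain most of the structure:

data(iris)
obs <- scale(iris[, 1:4])        # 150 observations in a 4-dimensional observed space
pca <- prcomp(obs)
latent <- pca$x[, 1:2]           # a 2-dimensional "latent" representation
summary(pca)$importance[3, 2]    # about 0.96: two dimensions keep roughly 96% of the variance
plot(latent, col = iris$Species, main = "2-D latent coordinates of 4-D observations")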
{ "source": [ "https://stats.stackexchange.com/questions/442352", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/58450/" ] }
443,092
I remember reading somewhere in another post about the different viewpoints between people from statistics and from machine learning or neural networks, where one user was mentioning this idea as an example of bad practice. Even then, I cannot find anyone asking this question, so I guess there is something evident I am missing. I can only think of two hypothetical scenarios where regularization would not be preferred: The researcher is interested in unbiasedness of the estimates. Due to a large volume of real-time data, one looks to minimize computation time. In the former case, I am not convinced there is any practical reason for a researcher to look for unbiasedness over a lower error, specially considering a single study. In the latter, I am not even convinced there is a relevant gain in computation time. What am I missing?
In short, regularization changes the distribution of the test statistic, rendering tests of hypothesis moot. In instances where we want to use regression to make inferences about interventions, we want unbiasedness. Not everything to do with data is a prediction problem.
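A minimal R sketch of the bias that regularization introduces (the true coefficient, sample size and penalty $\lambda$ are arbitrary illustrative choices), using the closed-form ridge estimator $(X^\top X + \lambda I)^{-1}X^\top y$ : across repeated samples, OLS centres on the true value while ridge is shrunk toward zero, which is one concrete way the sampling distribution - and hence any test statistic built on it - is changed.

set.seed(1)
lambda <- 10; beta_true <- 2
ols <- ridge <- numeric(2000)
for (i in 1:2000) {
  x <- rnorm(50)
  y <- beta_true * x + rnorm(50)
  X <- cbind(x)                                    # single predictor, no intercept, for simplicity
  ols[i]   <- solve(t(X) %*% X) %*% t(X) %*% y
  ridge[i] <- solve(t(X) %*% X + lambda) %*% t(X) %*% y
}
c(mean_ols = mean(ols), mean_ridge = mean(ridge))  # OLS is close to 2; ridge is noticeably below 2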
{ "source": [ "https://stats.stackexchange.com/questions/443092", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/180158/" ] }
443,445
I need to generate 100 random integers in R , where each integer is between 1 and 4 (hence 1,2,3,4) and the mean is equal to a specific value. If I draw random uniform numbers between 1 and 5 and get floor , I have a mean of 2.5. x = floor(runif(100,min=1, max=5)) I need to fix the mean to 1.9 or 2.93 for example. I guess I can generate random integers that add to 100 * mean but I don't know how to restrict to random integers between 1 and 4.
I agree with X'ian that the problem is under-specified. However, there is an elegant, scalable, efficient, effective, and versatile solution worth considering. Because the product of the sample mean and sample size equals the sample sum, the problem concerns generating a random sample of $n$ values in the set $\{1,2,\ldots, k\}$ that sum to $s$ (assuming $n \le s \le kn,$ of course). To explain the proposed solution and, I hope, justify the claim of elegance, I offer a graphical interpretation of this sampling scheme. Lay out a grid of $k$ rows and $n$ columns. Select every cell in the first row. Randomly (and uniformly) select $s-n$ of the remaining cells in rows $2$ through $k.$ The value of observation $i$ in the sample is the number of cells selected in column $i:$ This $4\times 100$ grid is represented by black dots at the unselected cells and colored patches at the selected cells. It was generated to produce a mean value of $2,$ so $s=200.$ Thus, $200-100=100$ cells were randomly selected among the top $k-1=3$ rows. The colors represent the numbers of selected cells in each column. There are $28$ ones, $47$ twos, $22$ threes, and $3$ fours. The ordered sample corresponds to the sequence of colors from column $1$ through column $n=100.$ To demonstrate scalability and efficiency, here is an R command to generate a sample according to this scheme. The question concerns the case $k=4, n=100$ and $s$ is $n$ times the desired average of the sample: tabulate(sample.int((k-1)*n, s-n) %% n + 1, n) + 1 Because sample.int requires $O(s-n)$ time and $O((k-1)n)$ space, and tabulate requires $O(n)$ time and space, this algorithm requires $O(\max(s-n,n))$ time and $O(kn)$ space: that's scalable . With $k=4$ and $n=100$ my workstation takes only 12 microseconds to perform this calculation: that's efficient . (Here's a brief explanation of the code. Note that integers $x$ in $\{1,2,\ldots, (k-1)n\}$ can be expressed uniquely as $x = nj + i$ where $j \in \{0,1,\ldots, k-2\}$ and $i\in\{1,2,\ldots, n\}.$ The code takes a sample of such $x,$ converts them to their $(i,j)$ grid coordinates, counts how many times each $i$ appears (which will range from $0$ through $k-1$ ) and adds $1$ to each count.) Why can this be considered effective ? One reason is that the distributional properties of this sampling scheme are straightforward to work out: It is exchangeable: all permutations of any sample are equally likely. The chance that the value $x \in\{1,2,\ldots, k\}$ appears at position $i,$ which I will write as $\pi_i(x),$ is obtained through a basic hypergeometric counting argument as $$\pi_i(x) = \frac{\binom{k-1}{x-1}\binom{(n-1)(k-1)}{s-n-x+1}}{\binom{n(k-1)}{ s-n}}.$$ For example, with $k=4,$ $n=100,$ and a mean of $2.0$ (so that $s=200$ ) the chances are $\pi = (0.2948, 0.4467, 0.2222, 0.03630),$ closely agreeing with the frequencies in the foregoing sample. Here are graphs of $\pi_1(1), \pi_1(2), \pi_1(3),$ and $\pi_1(4)$ as a function of the sum: The chance that the value $x$ appears at position $i$ while the value $y$ appears at position $j$ is similarly found as $$\pi_{ij}(x,y) = \frac{\binom{k-1}{x-1}\binom{k-1}{y-1}\binom{(n-1)(k-1)}{s-n-x-y+2}}{\binom{n(k-1)}{ s-n}}.$$ These probabilities $\pi_i$ and $\pi_{ij}$ enable one to apply the Horvitz-Thompson estimator to this probability sampling design as well as to compute the first two moments of the distributions of various statistics. 
Finally, this solution is versatile insofar as it permits simple, readily-analyzable variations to control the sampling distribution. For instance, you could select cells on the grid with specified but unequal probabilities in each row, or with an urn-like model to modify the probabilities as sampling proceeds, thereby controlling the frequencies of the column counts.
{ "source": [ "https://stats.stackexchange.com/questions/443445", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/142039/" ] }
445,578
I have read in several places that the use of DAGs can help to reduce bias due to Confounding Differential Selection Mediation Conditioning on a collider I also see the term “backdoor path” a lot. How do we use DAGs to reduce these biases, and how does it relate to backdoor paths ? Extra points (I will award a bounty) for real world examples of the above
A DAG is a D irected A cyclic G raph. A “ G raph” is a structure with nodes (which are usually variables in statistics) and arcs (lines) connecting nodes to other nodes. “ D irected” means that all the arcs have a direction, where one end of the arc has an arrow head, and the other does not, which usually refers to causation. “ A cyclic” means that the graph is not cyclic – that means there can be no path from any node that leads back to the same node. In statistics a DAG is a very powerful tool to aid in causal inference – to estimate the causal effect of one variable (often called the main exposure) on another (often called the outcome) in the presence of other variables which may be competing exposures, confounders or mediators. The DAG can be used to identify a minimal sufficient set of variables to be used in a multivariable regression model for the estimation of said causal effect. For example it is usually a very bad idea to condition on a mediator (a variable that lies on the causal path between the main exposure and the outcome), while it is usually a very good idea to condition on a confounder (a variable that is a cause, or a proxy for a cause, of both the main exposure and the outcome). It is also a bad idea to condition on a collider (to be defined below). But first, what is the problem we want to overcome? This is what a multivariable regression model looks like to your statistical software: The software does not “know” which variables are our main exposure, competing exposures, confounders or mediators. It treats them all the same. In the real world it is far more common for the variables to be inter-related. For example, knowledge of the particular area of research may indicate a structure such as: Note that it is the researchers job to specify the causal paths, using expert knowledge about the subject at hand. DAGs represent a set of (often abstracted) causal beliefs pertinent to specific causal relationships. One researcher's DAG may be different to another researcher's DAG, for the same relationship(s), and that is completely OK. In the same way, a researcher may have more than one DAG for the same causal relationships, and using DAGs in a principled way as described below is one way to gather knowledge about, or support for a particular hypothesis. Let’s suppose that our interest is in the causal effect of $X7$ on $Y$ . What are we to do? A very naive approach is simply to put all the variables into a regression model, and take the estimated coefficient for $X7$ as our “answer”. This would be a big mistake. It turns out that the only variable that should be adjusted for in this DAG is $X3$ , because it is a confounder. But what if our interest was in the effect of $X3$ , not $X7$ ? Do we simply use the same model (also containing $X7$ ) and just take the estimate of $X3$ as our “answer”? No! In this case, we do not adjust for $X7$ because it is a mediator. No adjustment is needed at all. In both cases, we may also adjust for $X1$ because this is a competing exposure and will improve the precision of our casual inferences in both models. In both models we should not adjust for $X2$ , $X4$ , $X5$ and $X6$ because all of them are mediators for the effect of $X7$ on $Y$ . So, getting back to the question, how do DAGs actually enable us to do this? First we need to establish a few ground truths. A collider is a variable which has more than 1 cause – that is, at least 2 arrows are pointing at it (hence the incoming arrows “collide”). 
$X5$ in the above DAG is a collider If there are no variables being conditioned on, a path is blocked if and only if it contains a collider. The path $X4 \rightarrow X5 \leftarrow X6$ is blocked by the collider $X5$ . Note: when we talk about "conditioning" on a variable this could refer to a few things, for example stratifying, but perhaps more commonly including the variable as a covariate in a multivariable regression model. Other synonymous terms are "controlling for" and "adjusting for". Any path that contains a non-collider that has been conditioned on is blocked. The path $Y \leftarrow X3 \rightarrow X7$ will be blocked if we condition on $X3$ . A collider (or a descendant of a collider) that has been conditioned on does not block a path. If we condition on $X5$ we will open the path $X4 \rightarrow X5 \leftarrow X6$ A backdoor path is a non-causal path between an outcome and a cause. It is non-causal because it contains an arrow pointing at both the cause and the outcome. For example the path $Y \leftarrow X3 \rightarrow X7$ is a backdoor path from $Y$ to $X3$ . Confounding of a causal path occurs where a common cause for both variables is present. In other words confounding occurs where an unblocked backdoor path is present. Again, $Y \leftarrow X3 \rightarrow X7$ is such a path. So, armed with this knowledge, let’s see how DAGs help us with removing bias: Confounding The definition of confounding is 6 above. If we apply 4 and condition on the confounder we will block the backdoor path from the outcome to the cause, thereby removing confounding bias. The example is the association of carrying a lighter and lung cancer: Carrying a lighter has no causal effect on lung cancer, however, they share a common cause - smoking - so applying rule 5 above, a backdoor path from Lung cancer to carrying a lighter is present which induces an association between carrying a lighter and Lung cancer. Conditioning on Smoking will remove this association, which can be demonstrate with a simple simulation where I use continuous variables for simplicity: > set.seed(15) > N <- 100 > Smoking <- rnorm(N, 10, 2) > Cancer <- Smoking + rnorm(N) > Lighter <- Smoking + rnorm(N) > summary(lm(Cancer ~ Lighter)) Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 0.66263 0.76079 0.871 0.386 Lighter 0.91076 0.07217 12.620 <2e-16 *** which shows the spurious association between Lighter and Cancer, but now when we condition on Smoking: > summary(lm(Cancer ~ Lighter + Smoking)) Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) -0.42978 0.60363 -0.712 0.478 Lighter 0.07781 0.11627 0.669 0.505 Smoking 0.95215 0.11658 8.168 1.18e-12 *** ...the bias is removed. Mediation A mediator is a variable that lies on the causal path between the cause and the outcome. This means that the outcome is a collider. Therefore, applying rule 3 means that we should not condition on the mediator otherwise the indirect effect of the cause on the outcome (i.e., that mediated by the mediator) will be blocked. A good example example is the grades of a student and their happiness. A mediating variable is self-esteem: Here, Grades has a direct effect on Happiness, but it also has an indirect effect mediated by self-esteem. We want to estimate the total causal effect of Grades on Happiness. Rule 3 says that a path that contains a non-collider that has been conditioned on is blocked. 
Since we want the total effect (i.e., including the indirect effect) we should not condition on self-esteem, otherwise the mediated path will be blocked, as we can see in the following simulation: > set.seed(15) > N <- 100 > Grades <- rnorm(N, 10, 2) > SelfEsteem <- Grades + rnorm(N) > Happiness <- Grades + SelfEsteem + rnorm(N) So the total effect should be 2: > summary(m0 <- lm(Happiness ~ Grades)) # happy times Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 1.05650 0.79509 1.329 0.187 Grades 1.90003 0.07649 24.840 <2e-16 *** which is what we do find. But if we now condition on self-esteem: > summary(m0 <- lm(Happiness ~ Grades + SelfEsteem)) Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 1.39804 0.50783 2.753 0.00705 ** Grades 0.81917 0.10244 7.997 2.73e-12 *** SelfEsteem 1.05907 0.08826 11.999 < 2e-16 *** only the direct effect for Grades is estimated, due to blocking the indirect effect by conditioning on the mediator SelfEsteem. Collider bias This is probably the most difficult one to understand, but with the aid of a very simple DAG we can easily see the problem: Here, there is no causal path between X and Y. However, both cause C, the collider. If we condition on C, then applying rule 4 above we will invoke collider bias by opening up the (non-causal) path between X and Y. This may be a little hard to grasp at first, but it should become apparent by thinking in terms of equations. We have X + Y = C. Let X and Y be binary variables taking the values 1 or zero. Hence, C can only take the values of 0, 1 or 2. Now, when we condition on C we fix its value. Say we fix it at 1. This immediately means that if X is zero then Y must be 1, and if Y is zero then X must be one. That is, X = 1 - Y, so they are perfectly (negatively) correlated, conditional on C = 1. We can also see this in action with the following simulation: > set.seed(16) > N <- 100 > X <- rnorm(N, 10, 2) > Y <- rnorm(N, 15, 3) > C <- X + Y + rnorm(N) So, X and Y are independent and we should find no association: > summary(m0 <- lm(Y ~ X)) Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 14.18496 1.54838 9.161 8.01e-15 *** X 0.08604 0.15009 0.573 0.568 and indeed no association is found. But now we condition on C: > summary(m1 <- lm(Y ~ X + C)) Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 1.10461 0.61206 1.805 0.0742 . X -0.92633 0.05435 -17.043 <2e-16 *** C 0.92454 0.02881 32.092 <2e-16 *** and now we have a spurious association between X and Y. Now let’s consider a slightly more complex situation: Here we are interested in the causal effect of Activity on Cervical Cancer. Hypochondria is an unmeasured variable which is a psychological condition that is characterized by fears of minor and sometimes non-existent medical symptoms being an indication of major illness. Lesion is also an unobserved variable that indicates the presence of a pre-cancerous lesion. Test is a diagnostic test for early stage cervical cancer. Here we hypothesise that both the unmeasured variables affect Test, obviously in the case of Lesion, and by making frequent visits to the doctor in the case of Hypochondria. Lesion also (obviously) causes Cancer, and Hypochondria causes more physical activity (because persons with hypochondria are worried about a sedentary lifestyle leading to disease in later life).
First notice that if the collider, Test, was removed and replaced with an arc either from Lesion to Hypochondria or vice versa, then our causal path of interest, Activity to Cancer, would be confounded, but due to rule 2 above, the collider blocks the backdoor path $\text{Cancer}\leftarrow \text{Lesion} \rightarrow \text{Test} \leftarrow \text{Hypochondria} \rightarrow \text{Activity}$ , as we can see with a simple simulation: > set.seed(16) > N <- 100 > Lesion <- rnorm(N, 10, 2) > Hypochondria <- rnorm(N, 10, 2) > Test <- Lesion + Hypochondria + rnorm(N) > Activity <- Hypochondria + rnorm(N) > Cancer <- Lesion + 0.25 * Activity + rnorm(N) where we hypothesize a much smaller effect of Activity on Cancer than of Lesion on Cancer: > summary(lm(Cancer ~ Activity)) Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 10.47570 1.01150 10.357 <2e-16 *** Activity 0.21103 0.09667 2.183 0.0314 * And indeed we obtain a reasonable estimate. Now, also observe the association of Activity and Cancer with Test (due to their common, but unmeasured, causes): > cor(Test, Activity); cor(Test, Cancer) [1] 0.6245565 [1] 0.7200811 The traditional definition of confounding is that a confounder is a variable that is associated with both the exposure and the outcome. So, we might mistakenly think that Test is a confounder and condition on it. However, we then open up the backdoor path $\text{Cancer}\leftarrow \text{Lesion} \rightarrow \text{Test} \leftarrow \text{Hypochondria} \rightarrow \text{Activity}$ , and introduce confounding which would otherwise not be present, as we can see from: > summary(lm(Cancer ~ Activity + Test)) Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 1.77204 0.98383 1.801 0.0748 . Activity -0.37663 0.07971 -4.725 7.78e-06 *** Test 0.72716 0.06160 11.804 < 2e-16 *** Now not only is the estimate for Activity biased, but it is of larger magnitude and of the opposite sign! Selection bias The preceding example can also be used to demonstrate selection bias. A researcher may identify Test as a potential confounder, and then only conduct the analysis on those that have tested negative (or positive). > dtPos <- data.frame(Lesion, Hypochondria, Test, Activity, Cancer) > dtNeg <- dtPos[dtPos$Test < 22, ] > dtPos <- dtPos[dtPos$Test >= 22, ] > summary(lm(Cancer ~ Activity, data = dtPos)) Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 13.15915 3.07604 4.278 0.000242 *** Activity 0.08662 0.25074 0.345 0.732637 So for those that test positive we obtain a very small positive effect that is not statistically significant at the 5% level: > summary(lm(Cancer ~ Activity, data = dtNeg)) Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 12.18865 1.12071 10.876 <2e-16 *** Activity -0.01553 0.11541 -0.135 0.893 And for those that test negative we obtain a very small negative association which is also not significant.
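As a quick cross-check, the adjustment sets discussed above can also be derived mechanically. The sketch below is a minimal illustration assuming the dagitty R package (not used elsewhere in this answer); it encodes the two small, fully specified DAGs from the examples:

library(dagitty)

# Confounding example: Smoking causes both Lighter and Cancer
g1 <- dagitty("dag {
  Smoking -> Lighter
  Smoking -> Cancer
}")
adjustmentSets(g1, exposure = "Lighter", outcome = "Cancer")
# should return { Smoking }: condition on the confounder

# Mediation example: SelfEsteem lies on the causal path from Grades to Happiness
g2 <- dagitty("dag {
  Grades -> SelfEsteem -> Happiness
  Grades -> Happiness
}")
adjustmentSets(g2, exposure = "Grades", outcome = "Happiness", effect = "total")
# should return the empty set: no adjustment needed for the total effect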
{ "source": [ "https://stats.stackexchange.com/questions/445578", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/6884/" ] }
445,595
I have a general question about survival models. Survival models take the time dependence into account in a natural way. But what exactly is the difference in inference if we were to model survival as a binary variable with time as an independent variable, instead of using a survival model? Are there any gains in modelling the data with a survival model instead of some classification model? Thanks in advance!
{ "source": [ "https://stats.stackexchange.com/questions/445595", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/160339/" ] }
445,633
Unfortunately I cannot find anything regarding long-term forecasts with SARIMA. What is the problem with long-term forecasts, for example when 140 data points are used to forecast another 140 data points? What is the limitation? Regards, Max
{ "source": [ "https://stats.stackexchange.com/questions/445633", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/270817/" ] }
445,863
I am trying to understand why the same data can be normally distributed if expressed in one way, but not normally distributed if expressed in another way. I have a variable that is "time taken to walk 10 meters" (in seconds). This data is not normally distributed (Shapiro-Wilk: W = 0.632, df = 108, Sig. <0.001, +see "histogram 1" below). I expressed this same variable as "speed" (in meters per second). I computed it by dividing 10 meters by the time taken to complete the distance, (ex. 14 sec to complete 10 meters becomes 10/14 = 0.71 m/s). When I now check "speed" for normality, it is very much normally distributed (Shapiro-Wilk: W = 0.984, df = 108, Sig. = 0.234, +see "histogram 2" below). Am I doing something wrong, or is there a logical explanation for this? While adding the tags, stackexchange mentioned "inverse Gaussian distribution" - is this what is happening here?
The image below illustrates intuitively why the transformed variable has a different distribution: I have drawn two parallel lines. On the lowest line I have plotted evenly spaced points at $0.1, 0.2, ..., 1.1, 1.2$ which represent the velocity $v$ . On the upper line I have drawn points according to the formula $t=0.1/v$ (note I reversed the axis: it has 1.2 on the left and 0 on the right). I have drawn lines connecting the different points. You can see that the evenly spaced points $v$ do not transform into evenly spaced points $t$ ; instead the points are more dense at the low values than at the high values. This squeezing will happen also to the density distribution. The distribution of times $t$ will not be just the same as the distribution of $v$ with a transformed location. Instead you also get a factor that is based on how much the space gets stretched out or squeezed in. For instance: The region $0.1 < v < 0.2$ gets spread out over a region $0.5 < t <1$ which is a region with a larger size. So the same probability to fall into a specific region gets spread out over a region with larger size. Another example: The region $0.4 < v < 0.5$ gets squeezed into a region $0.2 < t <0.25$ which is a region with a smaller size. So the same probability to fall into a specific region gets compressed into a region with smaller size. In the image below these two corresponding regions $0.4 < v < 0.5$ and $0.2 < t <0.25$ and the area under the density curves are colored; the two differently colored areas have the same area size. So as the distribution for the times $g(t)$ you do not just take the distribution of the velocity $f(v)$ where you transform the variable $v=0.1/t$ (which actually already makes the distribution look different than the normal curve, see the green curve in the image), but you also take into account the spreading/compressing of the probability mass over larger/smaller regions. Note: I have taken $t=0.1/v$ instead of $t = 10/v$ because this makes the two scales the same and makes the comparison of the two densities equivalent (when you squeeze an image then this will influence the density). See more about transformations: https://en.wikipedia.org/wiki/Random_variable#Functions_of_random_variables The inverse of a normal distributed variable is more generally: $$t = a/v \quad \text{with} \quad f_V(v) = \frac{1}{\sqrt{2 \pi \sigma^2}} e^{-\frac{1}{2}\frac{(v-\mu)^2}{\sigma^2}}$$ then $$g_T(t) = \frac{1}{\sqrt{2 \pi \sigma^2}} \frac{a}{t^2} e^{-\frac{1}{2}\frac{(a/t-\mu)^2}{\sigma^2}}$$ You can find more about it by looking for the search term 'reciprocal normal distribution' https://math.stackexchange.com/search?q=reciprocal+normal+distribution It is not the same as the 'inverse Gaussian distribution', which relates to the waiting time in relation to Brownian motion with drift (which can be described by a Gaussian curve).
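A minimal simulation sketch of this (the walking-speed mean and SD below are made-up values, roughly in the spirit of the question's 10-metre walk) shows the reciprocal times following the skewed density $g_T(t)$ above rather than a normal curve:

set.seed(1)
mu <- 0.7      # hypothetical mean walking speed (m/s)
sigma <- 0.1   # hypothetical SD of the speeds
a <- 10        # distance walked (m), so time = a / speed

v <- rnorm(1e5, mu, sigma)   # speeds: generated as (approximately) normal
t <- a / v                   # times: the reciprocal transform

hist(t, breaks = 100, freq = FALSE, main = "t = 10 / v", xlab = "time (s)")
# density from the change-of-variables formula: g_T(t) = f_V(a/t) * a / t^2
curve(dnorm(a / x, mu, sigma) * a / x^2, add = TRUE, col = "red", lwd = 2)

shapiro.test(sample(v, 5000))  # speeds were generated as normal, so typically no rejection
shapiro.test(sample(t, 5000))  # the right-skewed times are flagged as non-normal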
{ "source": [ "https://stats.stackexchange.com/questions/445863", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/271531/" ] }
446,184
The power is a continuous function of the parameter. If a statistical test has a power of 0.8, does that mean the power function is 0.8 at all parameters except the null hypothesis? Or does it mean the power function is almost everywhere 0.8 or at most 0.8?
You'll hear "this test has 80% power" as shorthand for a better statement like: "under a bunch of assumptions, including but not limited to this particular sample size and this particular true effect size, this test has an 80% probability of rejecting the null hypothesis with a two-sided alternative at a 5% significance level". Don't try to make sense of a statement like "this test has 80% power" unless there is a lot more detail provided. Any statement about power that is four words long is leaving out so much detail that it's effectively meaningless.
{ "source": [ "https://stats.stackexchange.com/questions/446184", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/-1/" ] }
446,712
I am trying to calculate the basic reproduction number $R_0$ of the new 2019-nCoV virus by fitting a SIR model to the current data. My code is based on https://arxiv.org/pdf/1605.01931.pdf , p. 11ff: library(deSolve) library(RColorBrewer) #https://en.wikipedia.org/wiki/Timeline_of_the_2019%E2%80%9320_Wuhan_coronavirus_outbreak#Cases_Chronology_in_Mainland_China Infected <- c(45, 62, 121, 198, 291, 440, 571, 830, 1287, 1975, 2744, 4515) day <- 0:(length(Infected)-1) N <- 1400000000 #pop of china init <- c(S = N-1, I = 1, R = 0) plot(day, Infected) SIR <- function(time, state, parameters) { par <- as.list(c(state, parameters)) with(par, { dS <- -beta * S * I dI <- beta * S * I - gamma * I dR <- gamma * I list(c(dS, dI, dR)) }) } RSS.SIR <- function(parameters) { names(parameters) <- c("beta", "gamma") out <- ode(y = init, times = day, func = SIR, parms = parameters) fit <- out[ , 3] RSS <- sum((Infected - fit)^2) return(RSS) } lower = c(0, 0) upper = c(0.1, 0.5) set.seed(12) Opt <- optim(c(0.001, 0.4), RSS.SIR, method = "L-BFGS-B", lower = lower, upper = upper) Opt$message ## [1] "NEW_X" Opt_par <- Opt$par names(Opt_par) <- c("beta", "gamma") Opt_par ## beta gamma ## 0.0000000 0.4438188 t <- seq(0, 100, length = 100) fit <- data.frame(ode(y = init, times = t, func = SIR, parms = Opt_par)) col <- brewer.pal(4, "GnBu")[-1] matplot(fit$time, fit[ , 2:4], type = "l", xlab = "Day", ylab = "Number of subjects", lwd = 2, lty = 1, col = col) points(day, Infected) legend("right", c("Susceptibles", "Infecteds", "Recovereds"), lty = 1, lwd = 2, col = col, inset = 0.05) R0 <- N * Opt_par[1] / Opt_par[2] names(R0) <- "R0" R0 ## R0 ## 0 I also tried fitting with GAs (as in the paper), also to no avail. My question Am I making any mistakes or is there just not enough data yet? Or is the SIR model too simple? I would appreciate suggestions on how to change the code so that I get some sensible numbers out of it. Addendum I wrote a blog post based on the final model and current data: Epidemiology: How contagious is Novel Coronavirus (2019-nCoV)?
There are several points that you can improve in the code. Wrong boundary conditions: Your model is fixed to I=1 for time zero. You can either change this point to the observed value or add a parameter in the model that shifts the time accordingly. init <- c(S = N-1, I = 1, R = 0) # should be init <- c(S = N-Infected[1], I = Infected[1], R = 0) Unequal parameter scales: As other people have noted, the equation $$I' = \beta \cdot S \cdot I - \gamma \cdot I$$ has a very large value for $S \cdot I$ . This makes the value of the parameter $\beta$ very small, and the algorithm which checks whether the step sizes in the iterations reach some point will not vary the steps in $\beta$ and $\gamma$ equally (the changes in $\beta$ will have a much larger effect than changes in $\gamma$ ). You can change the scale in the call to the optim function to correct for these differences in size (and checking the hessian allows you to see whether it works a bit). This is done by using a control parameter. In addition you might want to solve the function in separate steps, making the optimization of the two parameters independent from each other (see more here: How to deal with unstable estimates during curve fitting? This is also done in the code below, and the result is much better convergence, although you still reach the limits of your lower and upper bounds). Opt <- optim(c(2*coefficients(mod)[2]/N, coefficients(mod)[2]), RSS.SIR, method = "L-BFGS-B", lower = lower, upper = upper, hessian = TRUE, control = list(parscale = c(1/N,1),factr = 1)) More intuitive might be to scale the parameter in the function (note the term beta/N in place of beta): SIR <- function(time, state, parameters) { par <- as.list(c(state, parameters)) with(par, { dS <- -beta/N * S * I dI <- beta/N * S * I - gamma * I dR <- gamma * I list(c(dS, dI, dR)) }) } Starting condition: Because the value of $S$ is in the beginning more or less constant (namely $S \approx N$ ), the expression for the infected in the beginning can be solved as a single equation: $$I' \approx (\beta \cdot N - \gamma) \cdot I $$ So you can find a starting condition using an initial exponential fit: # get a good starting condition mod <- nls(Infected ~ a*exp(b*day), start = list(a = Infected[1], b = log(Infected[2]/Infected[1]))) Unstable, correlation between $\beta$ and $\gamma$ : There is a bit of ambiguity about how to choose $\beta$ and $\gamma$ for the starting condition. This will also make the outcome of your analysis not so stable. The error in the individual parameters $\beta$ and $\gamma$ will be very large because many pairs of $\beta$ and $\gamma$ will give a more or less similarly low RSS. The plot below is for the solution $\beta = 0.8310849; \gamma = 0.4137507$ . However, the adjusted Opt_par value $\beta = 0.8310849-0.2; \gamma = 0.4137507-0.2$ works just as well: Using a different parameterization: The optim function allows you to read out the hessian > Opt <- optim(optimsstart, RSS.SIR, method = "L-BFGS-B", lower = lower, upper = upper, + hessian = TRUE) > Opt$hessian b b 7371274104 -7371294772 -7371294772 7371315619 The hessian can be related to the variance of the parameters (In R, given an output from optim with a hessian matrix, how to calculate parameter confidence intervals using the hessian matrix?). But note that for this purpose you need the Hessian of the log likelihood, which is not the same as the RSS (it differs by a factor, see the code below).
Based on this you can see that the estimate of the sample variance of the parameters is very large (which means that your results/estimates are not very accurate). But also note that the errors are strongly correlated. This means that you can change the parameters such that the outcome is not so correlated. An example parameterization would be: $$\begin{array}{} c &=& \beta - \gamma \\ R_0 &=& \frac{\beta}{\gamma} \end{array}$$ such that the old equations (note a scaling by 1/N is used): $$\begin{array}{rccl} S^\prime &=& - \beta \frac{S}{N}& I\\ I^\prime &=& (\beta \frac{S}{N}-\gamma)& I\\ R^\prime &=& \gamma &I \end{array} $$ become $$\begin{array}{rccl} S^\prime &=& -c\frac{R_0}{R_0-1} \frac{S}{N}& I&\\ I^\prime &=& c\frac{(S/N) R_0 - 1}{R_0-1} &I& \underbrace{\approx c I}_{\text{for $t=0$ when $S/N \approx 1$}}\\ R^\prime &=& c \frac{1}{R_0-1}& I& \end{array} $$ which is especially appealing since you get this approximate $I^\prime = cI$ for the beginning. This will make you see that you are basically estimating the first part, which is approximately exponential growth. You will be able to very accurately determine the growth parameter, $c = \beta - \gamma$ . However, $\beta$ and $\gamma$ , or $R_0$ , cannot be easily determined. In the code below a simulation is made with the same value $c=\beta - \gamma$ but with different values for $R_0 = \beta / \gamma$ . You can see that the data are not capable of allowing us to differentiate which scenario (which different $R_0$ ) we are dealing with (and we would need more information, e.g. the locations of each infected individual, and trying to see how the infection spread out). It is interesting that several articles already claim to have reasonable estimates of $R_0$ . For instance this preprint: Novel coronavirus 2019-nCoV: early estimation of epidemiological parameters and epidemic predictions ( https://doi.org/10.1101/2020.01.23.20018549 ). Some code: #### #### #### library(deSolve) library(RColorBrewer) #https://en.wikipedia.org/wiki/Timeline_of_the_2019%E2%80%9320_Wuhan_coronavirus_outbreak#Cases_Chronology_in_Mainland_China Infected <- c(45, 62, 121, 198, 291, 440, 571, 830, 1287, 1975, 2744, 4515) day <- 0:(length(Infected)-1) N <- 1400000000 #pop of china ###edit 1: use different boundary condition ###init <- c(S = N-1, I = 1, R = 0) init <- c(S = N-Infected[1], I = Infected[1], R = 0) plot(day, Infected) SIR <- function(time, state, parameters) { par <- as.list(c(state, parameters)) ####edit 2; use equally scaled variables with(par, { dS <- -beta * (S/N) * I dI <- beta * (S/N) * I - gamma * I dR <- gamma * I list(c(dS, dI, dR)) }) } SIR2 <- function(time, state, parameters) { par <- as.list(c(state, parameters)) #### #### use as change of variables variable #### const = (beta-gamma) #### delta = gamma/beta #### R0 = beta/gamma > 1 #### #### beta-gamma = beta*(1-delta) #### beta-gamma = beta*(1-1/R0) #### gamma = beta/R0 with(par, { beta <- const/(1-1/R0) gamma <- const/(R0-1) dS <- -(beta * (S/N) ) * I dI <- (beta * (S/N)-gamma) * I dR <- ( gamma) * I list(c(dS, dI, dR)) }) } RSS.SIR2 <- function(parameters) { names(parameters) <- c("const", "R0") out <- ode(y = init, times = day, func = SIR2, parms = parameters) fit <- out[ , 3] RSS <- sum((Infected - fit)^2) return(RSS) } ### plotting different values R0 # use the ordinary exponential model to determine const = beta - gamma const <- coef(mod)[2] RSS.SIR <- function(parameters) { names(parameters) <- c("beta", "gamma") out <- ode(y = init, times = day, func = SIR, parms =
parameters) fit <- out[ , 3] RSS <- sum((Infected - fit)^2) return(RSS) } lower = c(0, 0) upper = c(1, 1) ###adjust limit because different scale 1/N ### edit: get a good starting condition mod <- nls(Infected ~ a*exp(b*day), start = list(a = Infected[1], b = log(Infected[2]/Infected[1]))) optimsstart <- c(2,1)*coef(mod)[2] set.seed(12) Opt <- optim(optimsstart, RSS.SIR, method = "L-BFGS-B", lower = lower, upper = upper, hessian = TRUE) Opt ### estimated covariance matrix of coefficients ### note the large error, but also strong correlation (nearly 1) ## note scaling with estimate of sigma because we need to use Hessian of loglikelihood sigest <- sqrt(Opt$value/(length(Infected)-1)) solve(1/(2*sigest^2)*Opt$hessian) #### #### using alternative parameters #### for this we use the function SIR2 #### optimsstart <- c(coef(mod)[2],5) lower = c(0, 1) upper = c(1, 10^3) ### adjust limit because we use R0 now which should be >1 set.seed(12) Opt2 <- optim(optimsstart, RSS.SIR2, method = "L-BFGS-B",lower=lower, upper=upper, hessian = TRUE, control = list(maxit = 1000, parscale = c(10^-3,1))) Opt2 # now the estimated variance of the 1st parameter is small # the 2nd parameter is still with large variance # # thus we can predict beta - gamma very well # this beta - gamma is the initial growth coefficient # but the individual values of beta and gamma are not very well known # # also note that hessian is not at the MLE since we hit the lower boundary # sigest <- sqrt(Opt2$value/(length(Infected)-1)) solve(1/(2*sigest^2)*Opt2$hessian) #### We can also estimate variance by #### Monte Carlo estimation ## ## assuming data to be distributed as mean +/- q mean ## with q such that mean RSS = 52030 ## ## ## ### Two functions RSS to do the optimization in a nested way RSS.SIRMC2 <- function(const,R0) { parameters <- c(const=const, R0=R0) out <- ode(y = init, times = day, func = SIR2, parms = parameters) fit <- out[ , 3] RSS <- sum((Infected_MC - fit)^2) return(RSS) } RSS.SIRMC <- function(const) { optimize(RSS.SIRMC2, lower=1,upper=10^5,const=const)$objective } getOptim <- function() { opt1 <- optimize(RSS.SIRMC,lower=0,upper=1) opt2 <- optimize(RSS.SIRMC2, lower=1,upper=10^5,const=opt1$minimum) return(list(RSS=opt2$objective,const=opt1$minimum,R0=opt2$minimum)) } # modeled data that we use to repeatedly generate data with noise Opt_par <- Opt2$par names(Opt_par) <- c("const", "R0") modInfected <- data.frame(ode(y = init, times = day, func = SIR2, parms = Opt_par))$I # doing the nested model to get RSS set.seed(1) Infected_MC <- Infected modnested <- getOptim() errrate <- modnested$RSS/sum(Infected) par <- c(0,0) for (i in 1:100) { Infected_MC <- rnorm(length(modInfected),modInfected,(modInfected*errrate)^0.5) OptMC <- getOptim() par <- rbind(par,c(OptMC$const,OptMC$R0)) } par <- par[-1,] plot(par, xlab = "const",ylab="R0",ylim=c(1,1)) title("Monte Carlo simulation") cov(par) ###conclusion: the parameter R0 cannot be reliably estimated ##### End of Monte Carlo estimation ### plotting different values R0 # use the ordinary exponential model to determine const = beta - gamma const <- coef(mod)[2] R0 <- 1.1 # graph plot(-100,-100, xlim=c(0,80), ylim = c(1,N), log="y", ylab = "infected", xlab = "days", yaxt = "n") axis(2, las=2, at=10^c(0:9), labels=c(expression(1), expression(10^1), expression(10^2), expression(10^3), expression(10^4), expression(10^5), expression(10^6), expression(10^7), expression(10^8), expression(10^9))) axis(2, at=rep(c(2:9),9)*rep(10^c(0:8),each=8), labels=rep("",8*9),tck=-0.02)
title(bquote(paste("scenario's for different ", R[0])), cex.main = 1) # time t <- seq(0,60,0.1) # plot model with different R0 for (R0 in c(1.1,1.2,1.5,2,3,5,10)) { fit <- data.frame(ode(y = init, times = t, func = SIR2, parms = c(const,R0)))$I lines(t,fit) text(t[601],fit[601], bquote(paste(R[0], " = ",.(R0))), cex=0.7,pos=4) } # plot observations points(day,Infected) How is R0 estimated? The graph above (which is repeated below) showed that there is not much variation in the number of 'infected' as a function of $R_0$ , and the data on the number of infected people are not providing much information about $R_0$ (except whether or not it is above or below one). However, for the SIR model there is a large variation in the number of recovered or the ratio infected/recovered. This is shown in the image below where the model is plotted not only for the number of infected people but also for the number of recovered people. It is such information (as well as additional data, like detailed information on where and when the people got infected and with whom they had contact) that allows the estimate of $R_0$ . Update In your blog article you write that the fit is leading to a value of $R_0 \approx 2$ . However, that is not the correct solution. You find this value only because the optim is terminating early when it has found a good enough solution and the improvements for a given step size of the vector $\beta, \gamma$ are getting small. When you use the nested optimization then you will find a more precise solution with an $R_0$ very close to 1. We see this value $R_0 \approx 1$ because that is how the (wrong) model is able to get this change in the growth rate into the curve. ### #### #### library(deSolve) library(RColorBrewer) #https://en.wikipedia.org/wiki/Timeline_of_the_2019%E2%80%9320_Wuhan_coronavirus_outbreak#Cases_Chronology_in_Mainland_China Infected <- c(45,62,121,198,291,440,571,830,1287,1975, 2744,4515,5974,7711,9692,11791,14380,17205,20440) #Infected <- c(45,62,121,198,291,440,571,830,1287,1975, # 2744,4515,5974,7711,9692,11791,14380,17205,20440, # 24324,28018,31161,34546,37198,40171,42638,44653) day <- 0:(length(Infected)-1) N <- 1400000000 #pop of china init <- c(S = N-Infected[1], I = Infected[1], R = 0) # model function SIR2 <- function(time, state, parameters) { par <- as.list(c(state, parameters)) with(par, { beta <- const/(1-1/R0) gamma <- const/(R0-1) dS <- -(beta * (S/N) ) * I dI <- (beta * (S/N)-gamma) * I dR <- ( gamma) * I list(c(dS, dI, dR)) }) } ### Two functions RSS to do the optimization in a nested way RSS.SIRMC2 <- function(R0,const) { parameters <- c(const=const, R0=R0) out <- ode(y = init, times = day, func = SIR2, parms = parameters) fit <- out[ , 3] RSS <- sum((Infected_MC - fit)^2) return(RSS) } RSS.SIRMC <- function(const) { optimize(RSS.SIRMC2, lower=1,upper=10^5,const=const)$objective } # wrapper to optimize and return estimated values getOptim <- function() { opt1 <- optimize(RSS.SIRMC,lower=0,upper=1) opt2 <- optimize(RSS.SIRMC2, lower=1,upper=10^5,const=opt1$minimum) return(list(RSS=opt2$objective,const=opt1$minimum,R0=opt2$minimum)) } # doing the nested model to get RSS Infected_MC <- Infected modnested <- getOptim() rss <- sapply(seq(0.3,0.5,0.01), FUN = function(x) optimize(RSS.SIRMC2, lower=1,upper=10^5,const=x)$objective) plot(seq(0.3,0.5,0.01),rss) optimize(RSS.SIRMC2, lower=1,upper=10^5,const=0.35) # view modnested ### plotting different values R0 const <- modnested$const R0 <- modnested$R0 # graph plot(-100,-100, xlim=c(0,80), ylim = c(1,6*10^4),
log="", ylab = "infected", xlab = "days") title(bquote(paste("scenario's for different ", R[0])), cex.main = 1) ### this is what your beta and gamma from the blog beta = 0.6746089 gamma = 0.3253912 fit <- data.frame(ode(y = init, times = t, func = SIR, parms = c(beta,gamma)))$I lines(t,fit,col=3) # plot model with different R0 t <- seq(0,50,0.1) for (R0 in c(modnested $R0,1.07,1.08,1.09,1.1,1.11)) { fit <- data.frame(ode(y = init, times = t, func = SIR2, parms = c(const,R0)))$I lines(t,fit,col=1+(modnested$R0==R0)) text(t[501],fit[501], bquote(paste(R[0], " = ",.(R0))), cex=0.7,pos=4,col=1+(modnested$ R0==R0)) } # plot observations points(day,Infected, cex = 0.7) If we use the relation between recovered and infected people $R^\prime = c (R_0-1)^{-1} I$ then we also see the opposite, namely a large $R_0$ of around 18: I <- c(45,62,121,198,291,440,571,830,1287,1975,2744,4515,5974,7711,9692,11791,14380,17205,20440, 24324,28018,31161,34546,37198,40171,42638,44653) D <- c(2,2,2,3,6,9,17,25,41,56,80,106,132,170,213,259,304,361,425,490,563,637,722,811,908,1016,1113) R <- c(12,15,19,25,25,25,25,34,38,49,51,60,103,124,171,243,328,475,632,892,1153,1540,2050,2649,3281,3996,4749) A <- I-D-R plot(A[-27],diff(R+D)) mod <- lm(diff(R+D) ~ A[-27]) giving: > const [1] 0.3577354 > const/mod$coefficients[2]+1 A[-27] 17.87653 This is a restriction of the SIR model which models $R_0 = \frac{\beta}{\gamma}$ where $\frac{1}{\gamma}$ is the period how long somebody is sick (time from Infected to Recovered) but that may not need to be the time that somebody is infectious. In addition, the compartment models is limited since the age of patients (how long one has been sick) is not taken into account and each age should be considered as a separate compartment. But in any case. If the numbers from wikipedia are meaningfull (they may be doubted) then only 2% of the active/infected recover daily, and thus the $\gamma$ parameter seems to be small (no matter what model you use).
{ "source": [ "https://stats.stackexchange.com/questions/446712", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/230/" ] }
447,094
An ant has three passages to choose from:

Passage A takes 7 minutes and leads the ant out of the ant house and into the woods.
Passage B takes 8 minutes and brings the ant back to the starting point where it is now.
Passage C takes 12 minutes and brings the ant back to the starting point where it is now.

The ant chooses a passage at random until it gets out of the ant house and into the woods. How do I calculate the expected time the ant needs to get out? Does the simple mean value (7 + 8 + 12) / 3 = 9 answer the question?
Let $T$ be the expected time to get out. From the starting point each of the three passages is equally likely, and two of them lead back to the starting point, after which the expected remaining time is again $T$. Conditioning on the first choice gives
$$T = \frac{7}{3} + \frac{8+T}{3} + \frac{12+T}{3} = 9 + \frac{2T}{3},$$
so
$$\frac{T}{3} = 9, \qquad T = 27.$$
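The fixed-point argument is easy to verify by simulation. A minimal R sketch (the function name is mine); averaging many runs gives a value close to 27, not 9:

set.seed(1)

escape_time <- function() {
  total <- 0
  repeat {
    passage <- sample(3, 1)                        # pick one of the three passages at random
    if (passage == 1) return(total + 7)            # passage A: out to the woods after 7 minutes
    total <- total + ifelse(passage == 2, 8, 12)   # passages B and C lead back to the start
  }
}

mean(replicate(1e5, escape_time()))                # approximately 27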
{ "source": [ "https://stats.stackexchange.com/questions/447094", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/23669/" ] }
447,631
Why are the numbers on the balls in a lotto draw considered categorical nominal instead of categorical ordinal? Don't the numbers have a natural ascending order, and wouldn't they thus be ordinal? Or am I making an incorrect assumption about numbers having a natural order?
You could color-code the balls without fundamentally changing the game. Instead of 6-12-11, we get red-blue-pink. You could go with letters without fundamentally changing the game. Instead of 6-12-11, we get Y-Q-X. You could use animal drawings without fundamentally changing the game. Instead of 6-12-11, we get dog-fish-horse. The 6-ball isn’t worth half as much as the 12-ball. It doesn’t even represent a lesser value. The number is just on the ball as a link to lottery tickets. It could be different if the number represented some kind of quantity, like rolling dice and advancing a game piece that many spots, but there’s nothing quantitative going on. The numbers on lottery balls just serve as links back to the tickets. You probably can accept this for something like towns having zip codes or people having phone numbers. It’s the same idea.
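A tiny R illustration of this point (a sketch with made-up labels): relabelling the balls changes nothing about the random draw, which is exactly what makes the numbers nominal.

set.seed(2024)
numbers <- 1:12
labels  <- c("dog","fish","horse","cat","owl","fox",
             "bee","ant","bat","eel","hen","ram")   # an arbitrary relabelling

draw_numeric <- sample(numbers, 3)                  # draw three balls by number

set.seed(2024)
draw_labeled <- sample(labels, 3)                   # the same draw with animal labels

draw_numeric
draw_labeled
identical(draw_labeled, labels[draw_numeric])       # TRUE: the numbers carry no quantitative information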
{ "source": [ "https://stats.stackexchange.com/questions/447631", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/272580/" ] }
451,027
I know that in high-dimensional space, the distance between almost all pairs of points has almost the same value ("distance concentration"). See Aggarwal et al. 2001, On the Surprising Behavior of Distance Metrics in High Dimensional Space. Is there a mathematical way to observe this phenomenon?
There is a simple mathematical thought experiment that sheds light on this phenomenon, although it might not seem immediately applicable. I will therefore describe this experiment briefly and follow that, in a separate section, by a computer analysis of a concrete situation.

A Thought Experiment

An old cartographic chestnut is that most of the area of a map lies near its edges. Similarly, much of a pizza--more than you might think--consists of its crust. Even more so is the fact that a great deal of the volume of a thick-skinned fruit, like a grapefruit or watermelon, is in its skin.

More than half this pizza lies near its edge, outside the shaded area. However, the width of this "crust" is only $18\%$ of the diameter of the pie.

Maps and pizzas and grapefruits don't have a common shape, but there is a common underlying explanation. We may model the border of a map, the crust of a pizza, or the skin of a fruit by supposing its basic shape -- a rectangle, circle, sphere, or whatever -- has been shrunk uniformly by some factor $\alpha$ and that the "crust" or "rind" consists of what lies between these two concentric similar shapes. In $n$ dimensions (the examples involve $n=2$ or $n=3$), the $n$-dimensional volume of the interior will therefore be $\alpha^n$ times the volume of the original shape. (This volume scaling law is sometimes used to define the number of dimensions of a space.)

The volume of the rind relative to the original volume therefore is
$$1 - \alpha^n.$$
As a function of $\alpha$ its rate of growth is
$$\mathrm{d}(1 - \alpha^n) = -n\,\alpha^{n-1}\,\mathrm{d}\alpha.$$

Beginning with no shrinking ($\alpha=1$) and noting $\alpha$ is decreasing ($\mathrm{d}\alpha$ is negative), we find the initial rate of growth of the rind equals $n.$ This shows that the volume of the rind initially grows much faster -- $n$ times faster -- than the rate at which the object is being shrunk. The factor of $n$ implies that in higher dimensions, relatively tiny changes in distance translate to much larger changes in volume. Let's call this the "edge-of-map principle."

Consider, now, a tabular dataset consisting of observations of $n$ numerical features. We may view each observation as a point in $\mathbb{R}^n$ and (at least in our imagination) might also suppose this collection of points is contained within some kind of compact region $\mathcal D$ with relatively simple boundary. If we choose to use Euclidean distances to compare these points to each other (and to other points in $\mathcal D$) and consider an arbitrary observation $x,$ the edge-of-map principle implies that most of the room in $\mathcal D$ is nearly as far as possible from $x.$ (The fudge term "nearly" is needed to account for what goes on around the boundary of $\mathcal D.$)

Another implication that goes to the heart of the question is the generalization of the cartographer's quandary: if our observations are somewhat "spread out" over $\mathcal D,$ then the cartographer's question is "what proportion of this dataset is near the boundary?" To express this in a quantitative fashion, let's invert it: we ask, by how much should we shrink $\mathcal D$ to make it, say, only half its original volume? Let's call this the "half-length" of $\mathcal D,$ analogously to the half-life of a radioactive decay.
If the half-length is $\alpha,$ we need only solve the equation
$$\alpha^n = \frac{1}{2};\quad \alpha = 2^{-1/n} = e^{-(\log 2)/n} \approx 1 - \frac{\log 2}{n} \approx 1 - \frac{0.7}{n}.$$

In two dimensions the half-length is $1 - 0.35.$ Since half of the shrinking occurs on one side of the map or pizza and the other half on the other side (refer to the preceding figure), half of the area of a map ($n=2$) lies within (approximately) $35/2=18\%$ of its diameter from the boundary.

In three dimensions the half-length is $1 - 0.23:$ now, half the volume of a fruit lies within $12\%$ of its diameter from its boundary. A fruit whose skin is just one-eighth the width of the entire fruit is more than half skin.

Despite appearances, approximately half the volume of this grapefruit is rind. (Source: FreeDigitalPhotos.net.)

In very large dimensions the half-length is very close to $1.$ In $n=350$ dimensions it is greater than $98\%,$ within two percent of $1.$ Thus, expect half of any $350$-dimensional dataset to lie within $1\%$ of its diameter from its boundary. Unless the data are strongly clustered, this generalization will be accurate.

Another way to express these results is:

Absent strong clustering, in higher dimensions $n$ we can expect most Euclidean distances between observations in a dataset to be very nearly the same and to be very close to the diameter of the region in which they are enclosed. "Very close" means on the order of $1/n.$

Several parts of this analysis are really just hand-waving and approximations, due to the vagueness of $\mathcal D$ and the very general assumptions about the dataset. How is $\mathcal D$ defined, anyway? In some applications it is determined by inherent limits; for instance, when all features are proportions. In many applications the features are arbitrarily scaled to lie within a fixed interval ("normalized") and we often take $\mathcal D$ to be the corresponding hypercube. But that's only an artifice and it is exquisitely sensitive to any outlying data values. The rest of this post explores an alternative in which the boundary plays a less important role in the results. It comes to similar conclusions.

Analysis of distances in a closed Euclidean space

I find the paper's setting rather arbitrary, because it is exploring distances within unit cubes. The distance distributions depend strongly on the shapes of the boundaries of those cubes. There's a way to avoid boundary effects. In one dimension, the "cube" is just the unit interval, $[0,1].$ Because this interval has two ends, some of the points are far from the rest; others (near the middle) tend to be close to all the points. This is asymmetric. To remove the asymmetry, roll the interval around into a loop where the beginning point $0$ meets the end point $1.$

Geometrically, all its points are equivalent. We can do the same in higher dimensions by rolling up each coordinate separately into a loop. The result in dimension $d$ is the $d$-torus. It has no boundaries and all points are geometrically equivalent. It's not perfectly symmetrical like a sphere, though: unlike the (Euclidean) sphere, whose geometry no longer is Euclidean due to its curvature, these $d$-tori are flat, without curvature. They can give us insight into Euclidean distances without the complication of dealing with boundaries.
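Before moving on, the half-length values quoted above are easy to check numerically. A small R sketch of my own (not part of the original analysis), comparing the exact value with the linear approximation:

half_length        <- function(n) 2^(-1/n)        # exact shrink factor that halves the volume
half_length_approx <- function(n) 1 - log(2)/n    # the linear approximation used above

n   <- c(2, 3, 10, 100, 350)
out <- rbind(exact = half_length(n), approx = half_length_approx(n))
colnames(out) <- paste0("n=", n)
round(out, 4)
# for n = 350 both values exceed 0.998, i.e. within two percent of 1, as stated in the text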
Analytical study of the distances in a torus is complicated, at least for dimensions greater than $1.$ Let's study these distances by generating random points from the uniform distribution on a $d$-torus and computing all their mutual distances (apart from the necessarily zero distances between each point and itself). For the following figures I generated 500 points in each of eight separate dimensions, resulting in over 100,000 distances in each dataset. How are these distances distributed, and how do those distributions vary with the dimension $d$?

Here is an array of histograms of these distances, one per dimension.

It's not hard to prove mathematically what the eye already sees: the distributions tend to a Gaussian, or "Normal," shape, as the dimension increases.

There is another remarkable regularity: the spreads of these histograms are nearly constant. Beneath each I have printed the standard deviation (SD) of the distances. It hardly changes from $1$ through $128$ dimensions. In this sense, there is no "concentration" of distances in high dimensions at all!

Here are the same figures shown on a common plot for easier comparison:

The colors mean the same as before, showing that the average distances increase with dimension. They do so roughly with a square-root law: the average distance is about one-quarter the square root of the dimension. (Those familiar with the Pythagorean Theorem in higher dimensions will at once understand why.) The greatest possible distance in the $d$-torus is achieved by pairs of points whose coordinates all differ by $1/2$ (because you cannot get any further apart than that along a loop); that distance obviously is $\sqrt{d}/2.$

Thus, it makes sense to compare the relative distances in each dimension. Here we go with one more plot of the same datasets, now with the distances all divided by $\sqrt{d}/2:$

This normalization has centered the histograms near $0.58,$ regardless of dimension. Here we are looking at the clearest manifestation of a "concentration of distance:" although the relative distances are typically the same in each dimension, as the dimension increases the distances concentrate more closely around a central value. As you can tell from the posted standard deviations, they too enjoy an inverse square-root law: the spread of the relative distances is approximately $1/(4\sqrt{d}).$

In other words, around any given point on a high-dimensional torus (and all points are geometrically the same, so it doesn't matter which point), nearly all other points on the torus are nearly the same distance away! If you were an inhabitant of a high-dimensional flat Euclidean space, albeit one with no boundaries, most of that space would seem to lie close to a spherical shell surrounding you. In $d$ = a million dimensions, for instance, the maximum possible distance is $500,$ the average distance would be around $288.7,$ and virtually all distances would be within $0.5$ of that value.

All these general conclusions about the shape, typical value, and spread of Euclidean distances hold in other domain shapes, but the details vary. The general result, though, is that randomly selected points within reasonably compact high-dimensional domains tend not to cluster appreciably. This has obvious implications for statistical (and machine-learning) methods based on clustering and nearest neighbor analyses.
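The figures described above can be reproduced along the following lines. This is a sketch of my own (the post does not show its code), using fewer points than the original 500 per dimension to keep it quick; it draws uniform points on the $d$-torus, computes all pairwise toroidal Euclidean distances, and prints their mean and SD together with the relative versions:

set.seed(17)

torus_distances <- function(d, n = 200) {
  x   <- matrix(runif(n * d), nrow = n)      # n uniform points on the d-torus
  idx <- combn(n, 2)                         # all pairs of distinct points
  apply(idx, 2, function(p) {
    dx <- abs(x[p[1], ] - x[p[2], ])
    sqrt(sum(pmin(dx, 1 - dx)^2))            # wrap each coordinate around its loop
  })
}

for (d in c(1, 2, 4, 16, 64, 128)) {
  D <- torus_distances(d)
  cat(sprintf("d = %3d  mean = %6.3f  sd = %5.3f  relative mean = %5.3f  relative sd = %6.4f\n",
              d, mean(D), sd(D), mean(D)/(sqrt(d)/2), sd(D)/(sqrt(d)/2)))
}
# the absolute sd hardly changes with d, the mean grows in proportion to sqrt(d),
# and the relative sd shrinks roughly like 1/(4*sqrt(d)), as described above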
{ "source": [ "https://stats.stackexchange.com/questions/451027", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/120788/" ] }