320,082
If Hessians are so good for optimization (see e.g. Newton's method ), why stop there? Let's use the third, fourth, fifth, and sixth derivatives? Why not?
I am interpreting the question as being "Why does Newton's method only use first and second derivatives, not third or higher derivatives?" Actually, in many cases, going to the third derivative does help; I've done it with custom stuff before. However, in general, going to higher derivatives adds computational complexity - you have to find and calculate all those derivatives, and for multivariate problems, there are a lot more third derivatives than there are first derivatives! - that far outweighs the savings in step count you get, if any. For example, if I have a 3-dimensional problem, I have 3 first derivatives, 6 second derivatives, and 10 third derivatives, so going to a third-order version more than doubles the number of evaluations I have to do (from 9 to 19), not to mention increased complexity of calculating the step direction / size once I've done those evaluations, but will almost certainly not cut the number of steps I have to take in half. Now, in the general case with $k$ variables, the collection of $n^{th}$ partial derivatives will number ${k+n-1} \choose {k-1}$, so for a problem with five variables, the total number of third, fourth, and fifth partial derivatives will equal 231, a more than 10-fold increase over the number of first and second partial derivatives (20). You would have to have a problem that is very, very close to a fifth-order polynomial in the variables to see a large enough reduction in iteration counts to make up for that extra computational burden.
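As a quick sanity check of the arithmetic above, the number of distinct $n$-th order partial derivatives of a function of $k$ variables is ${k+n-1 \choose k-1}$, which is choose(k + n - 1, k - 1) in R:

```r
# Number of distinct n-th order partial derivatives of a function of k variables
n_partials <- function(k, n) choose(k + n - 1, k - 1)

n_partials(3, 1:3)       # 3, 6, 10 for the 3-dimensional example
sum(n_partials(5, 1:2))  # 20 first- and second-order partials with five variables
sum(n_partials(5, 3:5))  # 231 third-, fourth-, and fifth-order partials
```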
{ "source": [ "https://stats.stackexchange.com/questions/320082", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/130640/" ] }
320,084
Let's consider independent random vectors $\hat{\boldsymbol\theta}_i$, $i = 1, \dots, m$, which are all unbiased for $\boldsymbol\theta$ and such that $$\mathbb{E}\left[\left(\hat{\boldsymbol\theta}_i - \boldsymbol\theta\right)^{T}\left(\hat{\boldsymbol\theta}_i - \boldsymbol\theta\right)\right] = \sigma^2\text{.}$$ Let $\mathbf{1}_{n \times p}$ be the $n \times p$ matrix of all ones. Consider the problem of finding $$\mathbb{E}\left[\left(\hat{\boldsymbol\theta} - \boldsymbol\theta\right)^{T}\left(\hat{\boldsymbol\theta} - \boldsymbol\theta\right)\right]$$ where $$\hat{\boldsymbol\theta} = \dfrac{1}{m}\sum_{i=1}^{m}\hat{\boldsymbol\theta}_i\text{.}$$ My attempt is to notice the fact that $$\hat{\boldsymbol\theta} = \dfrac{1}{m}\underbrace{\begin{bmatrix} \hat{\boldsymbol\theta}_1 & \hat{\boldsymbol\theta}_2 & \cdots & \hat{\boldsymbol\theta}_m \end{bmatrix}}_{\mathbf{S}}\mathbf{1}_{m \times 1}$$ and thus $$\text{Var}(\hat{\boldsymbol\theta}) = \dfrac{1}{m^2}\text{Var}(\mathbf{S}\mathbf{1}_{m \times 1})\text{.}$$ How does one find the variance of a random matrix times a constant vector? You may assume that I am familiar with finding variances of linear transformations of a random vector: i.e., if $\mathbf{x}$ is a random vector, $\mathbf{b}$ a vector of constants, and $\mathbf{A}$ a matrix of constants, assuming all are conformable, $$\mathbb{E}[\mathbf{A}\mathbf{x}+\mathbf{b}] = \mathbf{A}\mathbb{E}[\mathbf{x}]+\mathbf{b}$$ $$\mathrm{Var}\left(\mathbf{A}\mathbf{x}+\mathbf{b}\right)=\mathbf{A}\mathrm{Var}(\mathbf{x})\mathbf{A}^{\prime}$$
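As a quick numeric illustration of the linear-transformation rules quoted at the end of the question, here is a small R sketch; the matrix A, vector b, and covariance Sigma are arbitrary examples, and MASS::mvrnorm (shipped with standard R installations) is used only to draw the multivariate normal samples:

```r
# Check Var(Ax + b) = A Var(x) A' by simulation
set.seed(42)
A     <- matrix(c(1, 2, 0, -1, 3, 1), nrow = 2)  # 2 x 3 constant matrix
b     <- c(1, -1)                                # constant vector
Sigma <- crossprod(matrix(rnorm(9), 3))          # some 3 x 3 covariance matrix

x <- MASS::mvrnorm(2e5, mu = rep(0, 3), Sigma = Sigma)  # rows are draws of x
y <- t(A %*% t(x) + b)                                  # rows are draws of Ax + b

round(cov(y), 2)                # empirical Var(Ax + b)
round(A %*% Sigma %*% t(A), 2)  # theoretical A Var(x) A'
```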
{ "source": [ "https://stats.stackexchange.com/questions/320084", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/46427/" ] }
320,665
Recently I have learned that one of the ways of finding better solutions to ML problems is by creating features, for example by summing two features. Say we possess two features, "attack" and "defense", of some kind of hero. We then create an additional feature called "total", which is the sum of "attack" and "defense". Now what appears strange to me is that even though "attack" and "defense" are almost perfectly correlated with "total", we still gain useful information. What is the math behind that? Or is my reasoning wrong? Additionally, is it not a problem, for classifiers such as kNN, that "total" will always be bigger than "attack" or "defense"? Thus, even after standardization we will have features containing values from different ranges?
Your question title and its content seem mismatched to me. If you are using a linear model, adding a total feature in addition to attack and defense will make things worse. First I would answer why feature engineering works in general. A picture is worth a thousand words; the figure (picture source) may give you some insight into feature engineering and why it works: the data in Cartesian coordinates is more complicated, and it is relatively hard to write a rule / build a model to classify the two types. The data in polar coordinates is much easier: we can write a simple rule on $r$ to classify the two types. This tells us that the representation of the data matters a lot: in certain spaces, it is much easier to do certain tasks than in others. Now to the question in your example (total on attack and defense): the feature engineering mentioned in this sum-of-attack-and-defense example will not work well for many models, such as linear models, and it will cause some problems. See Multicollinearity. On the other hand, such feature engineering may work for other models, such as decision trees / random forests. See @Imran's answer for details. So the answer is that, depending on the model you use, some feature engineering will help with some models but not with others.
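As a small hands-on version of the Cartesian-versus-polar picture described above, here is an R sketch with made-up two-ring data: neither raw coordinate separates the classes, but the engineered feature $r=\sqrt{x_1^2+x_2^2}$ does with a single threshold.

```r
set.seed(1)
n      <- 200
r_true <- c(rep(1, n), rep(3, n))                 # inner and outer ring
angle  <- runif(2 * n, 0, 2 * pi)
x1     <- r_true * cos(angle) + rnorm(2 * n, sd = 0.1)
x2     <- r_true * sin(angle) + rnorm(2 * n, sd = 0.1)
label  <- factor(rep(c("inner", "outer"), each = n))

# The engineered "polar" feature separates the classes with one cut,
# while x1 or x2 alone cannot.
r <- sqrt(x1^2 + x2^2)
table(label, r > 2)
```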
{ "source": [ "https://stats.stackexchange.com/questions/320665", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/189841/" ] }
320,924
Consider a hurdle model predicting count data y from a normal predictor x : set.seed(1839) # simulate poisson with many zeros x <- rnorm(100) e <- rnorm(100) y <- rpois(100, exp(-1.5 + x + e)) # how many zeroes? table(y == 0) FALSE TRUE 31 69 In this case, I have count data with 69 zeros and 31 positive counts. Nevermind for the moment that this is, by definition of the data-generation procedure, a Poisson process, because my question is about hurdle models. Let's say I want to handle these excess zeros by a hurdle model. From my reading about them, it seemed like hurdle models aren't actual models per se—they are just doing two different analyses sequentially. First, a logistic regression predicting whether or not the value is positive versus zero. Second, a zero-truncated Poisson regression with only including the non-zero cases. This second step felt wrong to me because it is (a) throwing away perfectly good data, which (b) could lead to power issues since much of the data are zeros, and (c) basically not a "model" in and of itself, but just sequentially running two different models. So I tried a "hurdle model" versus just running the logistic and zero-truncated Poisson regression separately. They gave me identical answers (I'm abbreviating the output, for brevity's sake): > # hurdle output > summary(pscl::hurdle(y ~ x)) Count model coefficients (truncated poisson with log link): Estimate Std. Error z value Pr(>|z|) (Intercept) -0.5182 0.3597 -1.441 0.1497 x 0.7180 0.2834 2.533 0.0113 * Zero hurdle model coefficients (binomial with logit link): Estimate Std. Error z value Pr(>|z|) (Intercept) -0.7772 0.2400 -3.238 0.001204 ** x 1.1173 0.2945 3.794 0.000148 *** > # separate models output > summary(VGAM::vglm(y[y > 0] ~ x[y > 0], family = pospoisson())) Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) -0.5182 0.3597 -1.441 0.1497 x[y > 0] 0.7180 0.2834 2.533 0.0113 * > summary(glm(I(y == 0) ~ x, family = binomial)) Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) 0.7772 0.2400 3.238 0.001204 ** x -1.1173 0.2945 -3.794 0.000148 *** --- This seems off to me since many different mathematical representations of the model include the probability that an observation is non-zero in the estimation of positive count cases, but the models I ran above completely ignore one another. For example, this is from Chapter 5, page 128 of Smithson & Merkle's Generalized Linear Models for Categorical and Continuous Limited Dependent Variables : ...Second, the probability that $y$ assumes any value (zero and the positive integers) must equal one. This is not guaranteed in Equation (5.33). To deal with this issue, we multiply the Poisson probability by the Bernoulli success probability $\pi$. These issues require us to express the above hurdle model as $$ P(Y=y|\boldsymbol{x,z,\beta,\gamma}) = \begin{cases} 1-\hat\pi &\text{for } y=0 \\ \hat\pi\times\frac{\exp(-\hat\lambda)\hat\lambda^y/y!}{1-\exp(-\hat\lambda)} &\text{for } y=1,2,\ldots \end{cases} \tag{5.34} $$ where $\hat\lambda=\exp(\boldsymbol{x\beta})$, $\hat\pi = {\rm logit}^{-1}(\boldsymbol{z\gamma})$, $\boldsymbol x$ are the covariates for the Poisson model, $\boldsymbol z$ are the covariates for the logistic regression model, and $\hat{\boldsymbol{\beta}}$ and $\hat{\boldsymbol{\gamma}}$ are the respective regression coefficients.... By doing the two models completely separate from one another—which seems to be what hurdle models do—I don't see how $\hat{\pi}$ is incorporated into the prediction of positive count cases. 
But based on how I was able to replicate the hurdle function by just running two different models, I don't see how $\text{logit}^{-1}(z\hat{\gamma})$ plays a role in the truncated Poisson regression at all. Am I understanding hurdle models correctly? They seem to be just running two sequential models: first, a logistic regression; second, a Poisson regression, completely ignoring cases where $y = 0$. I would appreciate it if someone could clear up my confusion with the $\hat{\pi}$ business. If I am correct that that is what hurdle models are, what is the definition of a "hurdle" model, more generally? Imagine two different scenarios: Imagine modeling competitiveness of electoral races by looking at competitiveness scores (1 - (winner's proportion of vote - runner up's proportion of vote)). This is [0, 1), because there are no ties (e.g., 1). A hurdle model makes sense here, because there are two processes: (a) was the election uncontested? and (b) if it wasn't, what predicted competitiveness? So we first do a logistic regression to analyze 0 vs. (0, 1). Then we do beta regression to analyze the (0, 1) cases. Imagine a typical psychological study. Responses are [1, 7], like a traditional Likert scale, with a huge ceiling effect at 7. One could do a hurdle model that's logistic regression of [1, 7) vs. 7, and then a Tobit regression for all cases where observed responses are < 7. Would it be safe to call both of these situations "hurdle" models, even if I estimate them with two sequential models (logistic and then beta in the first case, logistic and then Tobit in the second)?
Separating the log-likelihood It is correct that most hurdle models can be estimated separately (I would say, instead of sequentially ). The reason is that the log-likelihood can be decomposed into two parts that can be maximized separately. This is because $\hat \pi$ is just a scaling factor in (5.34) that becomes an additive term in the log-likelihood. In the notation of Smithson & Merkle: $$ \begin{eqnarray*} \ell(\beta, \gamma; y, x, z) & = & \ell_1(\gamma; y, z) + \ell_2(\beta; y, x) \\ & = & \sum_{i: y_i = 0} \log\left\{1 - \mathrm{logit}^{-1}(z_i^\top \gamma)\right\} + \sum_{i: y_i > 0} \log\left\{\mathrm{logit}^{-1}(z_i^\top \gamma)\right\} + \\ & & \sum_{i: y_i > 0} \left[ \log \left\{f(y_i; \exp(x_i^\top \beta))\right\} - \log\left\{ 1 - f(0; \exp(x_i^\top \beta))\right\}\right] \end{eqnarray*} $$ where $f(y; \lambda) = \exp(-\lambda) \lambda^y/y!$ is the density of the (untruncated) Poisson distribution and $1 - f(0; \lambda) = 1 - \exp(-\lambda)$ is the factor from the zero truncation. Then it becomes obvious that $\ell_1(\gamma)$ (binary logit model) and $\ell_2(\beta)$ (zero-truncated Poisson model) can be maximized separately, leading to the same parameter estimates, covariances, etc. as in the case where they are maximized jointly (see the numeric check at the end of this answer). The same logic also works if the zero hurdle probability $\pi$ is not parametrized through a logit model but any other binary regression model, e.g., a count distribution right-censored at 1. And, of course, $f(\cdot)$ could also be another count distribution, e.g., negative binomial. The whole separation only breaks down if there are shared parameters between the zero hurdle and the truncated count part. A prominent example would be if negative binomial distributions with separate $\mu$ but common $\theta$ parameters are employed in the two components of the model. (This is available in hurdle(..., separate = FALSE, dist = "negbin", zero.dist = "negbin") in the countreg package from R-Forge, the successor to the pscl implementation.) Concrete questions (a) Throwing away perfectly good data: In your case yes, in general no. You have data from a single Poisson model without excess zeros (albeit many zeros). Hence, it is not necessary to estimate separate models for the zeros and non-zeros. However, if the two parts are really driven by different parameters then it is necessary to account for this. (b) Could lead to power issues since much of the data are zeros: Not necessarily. Here, you have a third of the observations that are "successes" (hurdle crossings). This wouldn't be considered very extreme in a binary regression model. (Of course, if it is unnecessary to estimate separate models you would gain power.) (c) Basically not a 'model' in and of itself, but just sequentially running two different models: This is more philosophical and I won't try to give "one" answer. Instead, I will point out pragmatic points of view. For model estimation , it can be convenient to emphasize that the models are separate because - as you show - you might not need a dedicated function for the estimation. For model application , e.g., for predictions or residuals etc., it can be more convenient to see this as a single model. (d) Would it be safe to call both of these situations 'hurdle' models: In principle yes. However, jargon may vary across communities. For example, the zero-hurdle beta regression is more commonly (and very confusingly) called zero-inflated beta regression.
Personally, I find the latter very misleading because the beta distribution has no zeros that could be inflated - but it's the standard term in the literature anyway. Moreover, the tobit model is a censored model and hence not a hurdle model. It could be extended, though, by a probit (or logit) model plus a truncated normal model. In the econometrics literature this is known as the Cragg two-part model. Software comments The countreg package on R-Forge at https://R-Forge.R-project.org/R/?group_id=522 is the successor implementation to hurdle() / zeroinfl() from pscl. The main reason that it is (still) not on CRAN is that we want to revise the predict() interface, possibly in a way that is not fully backward compatible. Otherwise the implementation is pretty stable. Compared to pscl it comes with a few nice features, e.g.: A zerotrunc() function that uses exactly the same code as hurdle() for the zero-truncated part of the model. Thus, it offers an alternative to VGAM. Moreover, it has d/p/q/r functions for the zero-truncated, hurdle, and zero-inflated count distributions. This facilitates looking at these as "one" model rather than separate models. For assessing the goodness of fit, graphical displays like rootograms and randomized quantile residual plots are available. (See Kleiber & Zeileis, 2016, The American Statistician , 70 (3), 296–303. doi:10.1080/00031305.2016.1173590 .) Simulated data Your simulated data comes from a single Poisson process. If e is treated as a known regressor then it would be a standard Poisson GLM. If e is an unknown noise component, then there is some unobserved heterogeneity causing a little bit of overdispersion which could be captured by a negative binomial model or some other kind of continuous mixture or random effect etc. However, as the effect of e is rather small here, none of this makes a big difference. Below, I'm treating e as a regressor (i.e., with true coefficient of 1) but you could also omit this and use negative binomial or Poisson models. Qualitatively, all of these lead to similar insights. ## Poisson GLM p <- glm(y ~ x + e, family = poisson) ## Hurdle Poisson (zero-truncated Poisson + right-censored Poisson) library("countreg") hp <- hurdle(y ~ x + e, dist = "poisson", zero.dist = "poisson") ## all coefficients very similar and close to true -1.5, 1, 1 cbind(coef(p), coef(hp, model = "zero"), coef(hp, model = "count")) ## [,1] [,2] [,3] ## (Intercept) -1.3371364 -1.2691271 -1.741320 ## x 0.9118365 0.9791725 1.020992 ## e 0.9598940 1.0192031 1.100175 This reflects that all three models can consistently estimate the true parameters. Looking at the corresponding standard errors shows that in this scenario (without the need for a hurdle part) the Poisson GLM is more efficient: serr <- function(object, ...)
sqrt(diag(vcov(object, ...))) cbind(serr(p), serr(hp, model = "zero"), serr(hp, model = "count")) ## [,1] [,2] [,3] ## (Intercept) 0.2226027 0.2487211 0.5702826 ## x 0.1594961 0.2340700 0.2853921 ## e 0.1640422 0.2698122 0.2852902 Standard information criteria would select the true Poisson GLM as the best model: AIC(p, hp) ## df AIC ## p 3 141.0473 ## hp 6 145.9287 And a Wald test would correctly detect that the two components of the hurdle model are not significantly different: hurdletest(hp) ## Wald test for hurdle models ## ## Restrictions: ## count_((Intercept) - zero_(Intercept) = 0 ## count_x - zero_x = 0 ## count_e - zero_e = 0 ## ## Model 1: restricted model ## Model 2: y ~ x + e ## ## Res.Df Df Chisq Pr(>Chisq) ## 1 97 ## 2 94 3 1.0562 0.7877 Finally both rootogram(p) and qqrplot(p) show that the Poisson GLM fits the data very well and that there are no excess zeros or hints on further misspecifications.
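As a numeric check of the separability argument at the start of this answer, the log-likelihood of the jointly fitted hurdle model should equal the sum of the log-likelihoods of the two separately fitted parts. The sketch below assumes the y and x simulated in the question are still in the workspace and that the pscl and VGAM packages are installed:

```r
library("pscl")
library("VGAM")

joint      <- hurdle(y ~ x)                                     # binomial zero hurdle + truncated Poisson
zero_part  <- glm(I(y > 0) ~ x, family = binomial)              # l1: hurdle crossing
count_part <- vglm(y[y > 0] ~ x[y > 0], family = pospoisson())  # l2: zero-truncated counts

logLik(joint)                                                   # joint log-likelihood
as.numeric(logLik(zero_part)) + as.numeric(logLik(count_part))  # l1 + l2, should match
```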
{ "source": [ "https://stats.stackexchange.com/questions/320924", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/130869/" ] }
321,460
When training a pixel segmentation neural network, such as a fully convolutional network, how do you make the decision to use the cross-entropy loss function versus the Dice-coefficient loss function? I realize this is a short question, but I'm not quite sure what other information to provide. I looked at a bunch of documentation about the two loss functions but am not able to get an intuitive sense of when to use one over the other.
One compelling reason for using cross-entropy over the Dice coefficient or the similar IoU metric is that the gradients are nicer. The gradient of cross-entropy wrt the logits is something like $p - t$ , where $p$ is the softmax outputs and $t$ is the target. Meanwhile, if we try to write the dice coefficient in a differentiable form: $\frac{2pt}{p^2+t^2}$ or $\frac{2pt}{p+t}$ , then the resulting gradients wrt $p$ are much uglier: $\frac{2t(t^2-p^2)}{(p^2+t^2)^2}$ and $\frac{2t^2}{(p+t)^2}$ . It's easy to imagine a case where both $p$ and $t$ are small, and the gradient blows up to some huge value. In general, it seems likely that training will become more unstable. The main reason that people try to use the dice coefficient or IoU directly is that the actual goal is maximization of those metrics, and cross-entropy is just a proxy which is easier to maximize using backpropagation. In addition, the Dice coefficient performs better on class-imbalanced problems by design. However, class imbalance is typically taken care of simply by assigning loss multipliers to each class, such that the network is highly disincentivized to simply ignore a class which appears infrequently, so it's unclear that the Dice coefficient is really necessary in these cases. I would start with cross-entropy loss, which seems to be the standard loss for training segmentation networks, unless there is a really compelling reason to use the Dice coefficient.
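To see the size of this effect, the three gradient expressions above can be evaluated at one made-up pair of values (here p is a soft prediction and t a soft target; the numbers are purely illustrative):

```r
grad_ce       <- function(p, t) p - t                             # cross-entropy wrt logits
grad_dice_sq  <- function(p, t) 2 * t * (t^2 - p^2) / (p^2 + t^2)^2
grad_dice_lin <- function(p, t) 2 * t^2 / (p + t)^2

p <- 0.001; t <- 0.01
c(ce = grad_ce(p, t), dice_sq = grad_dice_sq(p, t), dice_lin = grad_dice_lin(p, t))
# the squared-denominator dice gradient is several orders of magnitude larger than p - t
```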
{ "source": [ "https://stats.stackexchange.com/questions/321460", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/133742/" ] }
321,470
In every machine learning discussion, the term "model" is used to describe how the prediction is made. Does this "model" refer to the learning algorithm used? What exactly is a model?
{ "source": [ "https://stats.stackexchange.com/questions/321470", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/187837/" ] }
321,841
As per this and this answer, autoencoders seem to be a technique that uses neural networks for dimension reduction. I would additionally like to know what a variational autoencoder is (its main differences/benefits over "traditional" autoencoders) and what the main learning tasks are that these algorithms are used for.
Even though variational autoencoders (VAEs) are easy to implement and train, explaining them is not simple at all, because they blend concepts from Deep Learning and Variational Bayes, and the Deep Learning and Probabilistic Modeling communities use different terms for the same concepts. Thus when explaining VAEs you risk either concentrating on the statistical model part, leaving the reader without a clue about how to actually implement it, or, vice versa, concentrating on the network architecture and loss function, in which case the Kullback-Leibler term seems to be pulled out of thin air. I'll try to strike a middle ground here, starting from the model but giving enough details to actually implement it in practice, or understand someone else's implementation. VAEs are generative models Unlike classical (sparse, denoising, etc.) autoencoders, VAEs are generative models, like GANs. By generative model I mean a model which learns the probability distribution $p(\mathbf{x})$ over the input space $\mathcal{X}$ . This means that after we have trained such a model, we can then sample from (our approximation of) $p(\mathbf{x})$ . If our training set is made of handwritten digits (MNIST), then after training the generative model is able to create images which look like handwritten digits, even though they're not "copies" of the images in the training set. Learning the distribution of the images in the training set implies that images which look like handwritten digits should have a high probability of being generated, while images which look like the Jolly Roger or random noise should have a low probability. In other words, it means learning about the dependencies among pixels: if our image is a $28\times 28=784$ pixel grayscale image from MNIST, the model should learn that if a pixel is very bright, then there's a significant probability that some neighboring pixels are bright too, that if we have a long, slanted line of bright pixels we may have another smaller, horizontal line of pixels above this one (a 7), etc. VAEs are latent variable models The VAE is a latent variable model: this means that $\mathbf{x}$ , the random vector of the 784 pixel intensities (the observed variables), is modeled as a (possibly very complicated) function of a random vector $\mathbf{z}\in\mathcal{Z}$ of lower dimensionality, whose components are unobserved ( latent ) variables. When does such a model make sense? For example, in the MNIST case we think that the handwritten digits belong to a manifold of dimension much smaller than the dimension of $\mathcal{X}$ , because the vast majority of random arrangements of 784 pixel intensities don't look at all like handwritten digits. Intuitively we would expect the dimension to be at least 10 (the number of digits), but it's most likely larger because each digit can be written in different ways. Some differences are unimportant for the quality of the final image (for example, global rotations and translations), but others are important. So in this case the latent model makes sense. More on this later. Note that, amazingly, even if our intuition tells us that the dimension should be about 10, we can definitely use just 2 latent variables to encode the MNIST dataset with a VAE (though results won't be pretty). The reason is that even a single real variable can encode infinitely many classes, because it can assume all possible integer values and more.
Of course, if the classes have significant overlap among them (such as 9 and 8 or 7 and 1 in MNIST), even the most complicated function of just two latent variables will do a poor job of generating clearly discernible samples for each class. More on this later. VAEs assume a multivariate parametric distribution $q(\mathbf{z}\vert\mathbf{x},\boldsymbol{\lambda})$ (where $\boldsymbol{\lambda}$ are the parameters of $q$ ), and they learn the parameters of the multivariate distribution. The use of a parametric pdf for $\mathbf{z}$ , which prevents the number of parameters of a VAE from growing without bound as the training set grows, is called amortization in VAE lingo (yeah, I know...). The decoder network We start from the decoder network because the VAE is a generative model, and the only part of the VAE which is actually used to generate new images is the decoder. The encoder network is only used at inference (training) time. The goal of the decoder network is to generate new random vectors $\mathbf{x}$ belonging to the input space $\mathcal{X}$ , i.e., new images, starting from realizations of the latent vector $\mathbf{z}$ . This means clearly that it must learn the conditional distribution $p(\mathbf{x}\vert\mathbf{z})$ . For VAEs this distribution is often assumed to be a multivariate Gaussian 1 : $$p_{\boldsymbol{\phi}}(\mathbf{x}\vert\mathbf{z}) = \mathcal{N}(\mathbf{x}|\boldsymbol{\mu}(\mathbf{z}; \boldsymbol{\phi}), \boldsymbol{\sigma}(\mathbf{z}; \boldsymbol{\phi})^2I) $$ $\boldsymbol{\phi}$ is the vector of weights (and biases) of the decoder network. The vectors $\boldsymbol{\mu}(\mathbf{z};\boldsymbol{\phi})$ and $\boldsymbol{\sigma}(\mathbf{z}; \boldsymbol{\phi})$ are complex, unknown nonlinear functions, modeled by the decoder network: neural networks are powerful nonlinear function approximators. As noted by @amoeba in the comments, there is a striking similarity between the decoder and a classic latent variable model: Factor Analysis. In Factor Analysis, you assume the model: $$ \mathbf{x}\vert\mathbf{z}\sim\mathcal{N}(\mathbf{W}\mathbf{z}+\boldsymbol{\mu}, \boldsymbol{\sigma}^2I),\ \mathbf{z}\sim\mathcal{N}(0,I)$$ Both models (FA & the decoder) assume that the conditional distribution of the observable variables $\mathbf{x}$ on the latent variables $\mathbf{z}$ is Gaussian, and that the $\mathbf{z}$ themselves are standard Gaussians. The difference is that the decoder doesn't assume that the mean of $p(\mathbf{x}|\mathbf{z})$ is linear in $\mathbf{z}$ , nor does it assume that the standard deviation is a constant vector. On the contrary, it models them as complex nonlinear functions of the $\mathbf{z}$ . In this respect, it can be seen as nonlinear Factor Analysis. See here for an insightful discussion of this connection between FA and VAE. Since FA with an isotropic covariance matrix is just PPCA, this also ties in to the well-known result that a linear autoencoder reduces to PCA. Let's go back to the decoder: how do we learn $\boldsymbol{\phi}$ ? Intuitively we want latent variables $\mathbf{z}$ which maximize the likelihood of generating the $\mathbf{x}_i$ in the training set $D_n$.
In other words we want to compute the posterior probability distribution of the $\mathbf{z}$ , given the data: $$p(\mathbf{z}\vert\mathbf{x})=\frac{p_{\boldsymbol{\phi}}(\mathbf{x}\vert\mathbf{z})p(\mathbf{z})}{p(\mathbf{x})}$$ We assume a $\mathcal{N}(0,I)$ prior on $\mathbf{z}$ , and we're left with the usual issue in Bayesian inference that computing $p(\mathbf{x})$ (the evidence ) is hard (a multidimensional integral). What's more, since here $\boldsymbol{\mu}(\mathbf{z};\boldsymbol{\phi})$ is unknown, we can't compute it anyway. Enter Variational Inference, the tool which gives Variational Autoencoders their name. Variational Inference for the VAE model Variational Inference is a tool to perform approximate Bayesian Inference for very complex models. It's not an overly complex tool, but my answer is already too long and I won't go into a detailed explanation of VI. You can have a look at this answer and the references therein if you're curious: https://stats.stackexchange.com/a/270569/58675 It suffices to say that VI looks for an approximation to $p(\mathbf{z}\vert \mathbf{x})$ in a parametric family of distributions $q(\mathbf{z}\vert \mathbf{x},\boldsymbol{\lambda})$ , where, as noted above, $\boldsymbol{\lambda}$ are the parameters of the family. We look for the parameters which minimize the Kullback-Leibler divergence between $q(\mathbf{z}\vert \mathbf{x},\boldsymbol{\lambda})$ and our target distribution $p(\mathbf{z}\vert \mathbf{x})$ : $$\min_{\boldsymbol{\lambda}}\mathcal{D}[q(\mathbf{z}\vert \mathbf{x},\boldsymbol{\lambda})\vert\vert p(\mathbf{z}\vert \mathbf{x})]$$ Again, we cannot minimize this directly because the definition of the Kullback-Leibler divergence includes the evidence. Introducing the ELBO (Evidence Lower BOund) and after some algebraic manipulations, we finally arrive at: $$ELBO(\boldsymbol{\lambda})= E_{q(\boldsymbol{z}\vert \mathbf{x},\boldsymbol{\lambda})}[\log p(\mathbf{x}\vert\boldsymbol{z})]-\mathcal{D}[q(\boldsymbol{z}\vert \mathbf{x},\boldsymbol{\lambda})\vert\vert p(\boldsymbol{z})]$$ Since the ELBO is a lower bound on the evidence (see the above link), maximizing the ELBO is not exactly equivalent to maximizing the likelihood of the data given $\boldsymbol{\lambda}$ (after all, VI is a tool for approximate Bayesian inference), but it goes in the right direction. In order to make inference, we need to specify the parametric family $q(\boldsymbol{z}\vert \mathbf{x},\boldsymbol{\lambda})$ . In most VAEs we choose a multivariate, uncorrelated Gaussian distribution $$q(\mathbf{z}\vert \mathbf{x},\boldsymbol{\lambda}) = \mathcal{N}(\mathbf{z}\vert\boldsymbol{\mu}(\mathbf{x}), \boldsymbol{\sigma}^2(\mathbf{x})I) $$ This is the same choice we made for $p(\mathbf{x}\vert\mathbf{z})$ , though we may have chosen a different parametric family. As before, we can estimate these complex nonlinear functions by introducing a neural network model. Since this model accepts input images and returns parameters of the distribution of the latent variables we call it the encoder network. The encoder network Also called the inference network, this is only used at training time.
As noted above, the encoder must approximate $\boldsymbol{\mu}(\mathbf{x})$ and $\boldsymbol{\sigma}(\mathbf{x})$ , thus if we have, say, 24 latent variables, the output of the encoder is a $d=48$ vector. The encoder has weights (and biases) $\boldsymbol{\theta}$ . To learn $\boldsymbol{\theta}$ , we can finally write the ELBO in terms of the parameters $\boldsymbol{\theta}$ and $\boldsymbol{\phi}$ of the encoder and decoder network, as well as the training set points: $$ELBO(\boldsymbol{\theta},\boldsymbol{\phi})= \sum_i E_{q_{\boldsymbol{\theta}}(\boldsymbol{z}\vert \mathbf{x}_i,\boldsymbol{\lambda})}[\log p_{\boldsymbol{\phi}}(\mathbf{x}_i\vert\boldsymbol{z})]-\mathcal{D}[q_{\boldsymbol{\theta}}(\boldsymbol{z}\vert \mathbf{x}_i,\boldsymbol{\lambda})\vert\vert p(\boldsymbol{z})]$$ We can finally conclude. The negative of the ELBO, as a function of $\boldsymbol{\theta}$ and $\boldsymbol{\phi}$ , is used as the loss function of the VAE. We use SGD to minimize this loss, i.e., maximize the ELBO. Since the ELBO is a lower bound on the evidence, this goes in the direction of maximizing the evidence, and thus generating new images which are optimally similar to those in the training set. The first term in the loss is the expected negative log-likelihood of the training set points, thus it encourages the decoder to produce images which are similar to the training ones. The second term can be interpreted as a regularizer: it encourages the encoder to generate a distribution for the latent variables which is similar to $p(\boldsymbol{z})=\mathcal{N}(0,I)$ . But by introducing the probability model first, we understood where the whole expression comes from: the minimization of the Kullback-Leibler divergence between the approximate posterior $q_{\boldsymbol{\theta}}(\boldsymbol{z}\vert \mathbf{x},\boldsymbol{\lambda})$ and the model posterior $p(\boldsymbol{z}\vert \mathbf{x})$ . 2 Once we have learned $\boldsymbol{\theta}$ and $\boldsymbol{\phi}$ by maximizing $ELBO(\boldsymbol{\theta},\boldsymbol{\phi})$ , we can throw away the encoder. From now on, to generate new images just sample $\boldsymbol{z}\sim \mathcal{N}(0,I)$ and propagate it through the decoder. The decoder outputs will be images similar to those in the training set. References and further reading the original paper: Auto-Encoding Variational Bayes a nice tutorial, with a few minor imprecisions: Tutorial on Variational Autoencoders how to reduce the blurriness of the images generated by your VAE, while at the same time getting latent variables which have a visual (perceptual) meaning, so that you can "add" features (smile, sunglasses, etc.) to your generated images: Deep Feature Consistent Variational Autoencoder improving the quality of VAE-generated images even more, by using Gaussian versions of autoregressive autoencoders: Improved Variational Inference with Inverse Autoregressive Flow new directions of research and a deeper understanding of pros & cons of the VAE model: Towards a Deeper Understanding of Variational Autoencoding Models & Inference Suboptimality in Variational Autoencoders 1 This assumption is not strictly necessary, though it simplifies our description of VAEs. However, depending on applications, you may assume a different distribution for $p_{\phi}(\mathbf{x}\vert\mathbf{z})$ . For example, if $\mathbf{x}$ is a vector of binary variables, a Gaussian $p$ makes no sense, and a multivariate Bernoulli can be assumed.
2 The ELBO expression, with its mathematical elegance, conceals two major sources of pain for the VAE practitioners. One is the average term $E_{q_{\boldsymbol{\theta}}(\boldsymbol{z}\vert \mathbf{x}_i,\boldsymbol{\lambda})}[\log p_{\boldsymbol{\phi}}(\mathbf{x}_i\vert\boldsymbol{z})]$ . This effectively requires computing an expectation, which requires taking multiple samples from $q_{\boldsymbol{\theta}}(\boldsymbol{z}\vert \mathbf{x}_i,\boldsymbol{\lambda})$ . Given the sizes of the involved neural networks, and the low convergence rate of the SGD algorithm, having to draw multiple random samples at each iteration (actually, for each minibatch, which is even worse) is very time-consuming. VAE users solve this problem very pragmatically by computing that expectation with a single (!) random sample. The other issue is that to train two neural networks (encoder & decoder) with the backpropagation algorithm, I need to be able to differentiate all steps involved in forward propagation from the encoder to the decoder. Since the decoder is not deterministic (evaluating its output requires drawing from a multivariate Gaussian), it doesn't even make sense to ask if it's a differentiable architecture. The solution to this is the reparametrization trick .
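To make the regularizer term and the reparametrization trick mentioned above a bit more concrete, here is a minimal R sketch; it is not tied to any particular VAE implementation, mu and sigma simply stand in for the encoder outputs of a single input, and the closed-form KL expression for a diagonal Gaussian against $\mathcal{N}(0,I)$ is a standard identity:

```r
# Closed-form KL regularizer for q = N(mu, diag(sigma^2)) against p(z) = N(0, I)
kl_to_std_normal <- function(mu, sigma) {
  sum(0.5 * (sigma^2 + mu^2 - 1 - 2 * log(sigma)))
}
kl_to_std_normal(mu = c(0, 0), sigma = c(1, 1))     # 0: q already equals the prior
kl_to_std_normal(mu = c(2, -1), sigma = c(0.5, 2))  # positive penalty otherwise

# Reparametrization trick: instead of drawing z ~ N(mu, diag(sigma^2)) directly,
# draw eps ~ N(0, I) and set z = mu + sigma * eps, so that z is a deterministic,
# differentiable function of (mu, sigma) and gradients can flow through it.
set.seed(1)
mu    <- c(0.5, -1.2)   # hypothetical encoder mean for one input
sigma <- c(0.3, 0.8)    # hypothetical encoder standard deviation
eps   <- rnorm(length(mu))
z     <- mu + sigma * eps
z
```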
{ "source": [ "https://stats.stackexchange.com/questions/321841", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/-1/" ] }
321,851
Al Rahimi recently gave a very provocative talk at NIPS 2017 comparing current Machine Learning to alchemy. One of his claims is that we need to get back to theoretical developments, to have simple theorems proving foundational results. When he said that, I started looking for the main theorems for ML, but could not find a good reference making sense of the main results. So here is my question: what are the current main mathematical theorems (theory) in ML/DL and what do they prove? I would guess Vapnik's work would go somewhere here. As an extra, what are the main theoretical open problems?
As I wrote in the comments, this question seems too broad to me, but I'll make an attempt at an answer. In order to set some boundaries, I will start with a little math which underlies most of ML, and then concentrate on recent results for DL. The bias-variance tradeoff is referred to in countless books, courses, MOOCs, blogs, tweets, etc. on ML, so we can't start without mentioning it: $$\mathbb{E}[(Y-\hat{f}(X))^2|X=x_0]=\sigma_{\epsilon}^2+\left(\mathbb{E}\hat{f}(x_0)-f(x_0)\right)^2+\mathbb{E}\left[\left(\hat{f}(x_0)-\mathbb{E}\hat{f}(x_0)\right)^2\right]=\text{Irreducible error + Bias}^2 \text{ + Variance}$$ Proof here: https://web.stanford.edu/~hastie/ElemStatLearn/ The Gauss-Markov Theorem (yes, linear regression will remain an important part of Machine Learning, no matter what: deal with it) clarifies that, when the linear model is true and some assumptions on the error term are valid, OLS has the minimum mean squared error (which in the above expression is just $\text{Bias}^2 \text{ + Variance}$ ) only among the unbiased linear estimators of the linear model. Thus there could well be linear estimators with bias (or nonlinear estimators) which have a better mean square error, and thus a better expected prediction error, than OLS. And this paves the way to all the regularization arsenal (ridge regression, LASSO, weight decay, etc.) which is a workhorse of ML. A proof is given here (and in countless other books): https://www.amazon.com/Linear-Statistical-Models-James-Stapleton/dp/0470231467 Probably more relevant to the explosion of regularization approaches, as noted by Carlos Cinelli in the comments, and definitely more fun to learn about, is the James-Stein theorem . Consider $n$ independent, same variance but not same mean Gaussian random variables: $$X_i|\theta_i\sim \mathcal{N}(\theta_i,\sigma^2), \quad i=1,\dots,n$$ in other words, we have an $n$-component Gaussian random vector $\mathbf{X}\sim \mathcal{N}(\boldsymbol{\theta},\sigma^2I)$ . We have one sample $\mathbf{x}$ from $\mathbf{X}$ and we want to estimate $\boldsymbol{\theta}$ . The MLE (and also UMVUE) estimator is obviously $\hat{\boldsymbol{\theta}}_{MLE}=\mathbf{x}$ . Consider the James-Stein estimator $$\hat{\boldsymbol{\theta}}_{JS}= \left(1-\frac{(n-2)\sigma^2}{||\mathbf{x}||^2}\right)\mathbf{x} $$ Clearly, if $(n-2)\sigma^2\leq||\mathbf{x}||^2$ , $\hat{\boldsymbol{\theta}}_{JS}$ shrinks the MLE estimate towards zero. The James-Stein theorem states that for $n\geq4$ , $\hat{\boldsymbol{\theta}}_{JS}$ strictly dominates $\hat{\boldsymbol{\theta}}_{MLE}$ , i.e., it has lower MSE $\forall \ \boldsymbol{\theta}$ . Perhaps surprisingly, even if we shrink towards any other constant $\boldsymbol{c}\neq \mathbf{0}$ , $\hat{\boldsymbol{\theta}}_{JS}$ still dominates $\hat{\boldsymbol{\theta}}_{MLE}$ . Since the $X_i$ are independent, it may seem weird that, when trying to estimate the heights of three unrelated persons, including a sample of the number of apples produced in Spain may improve our estimate on average. The key point here is "on average": the mean square error for the simultaneous estimation of all the components of the parameter vector is smaller, but the square error for one or more components may well be larger, and indeed it often is, when you have "extreme" observations.
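A small simulation makes the domination visible; the true $\boldsymbol{\theta}$ below is arbitrary and $\sigma^2=1$ is taken as known, as in the statement above:

```r
set.seed(123)
n      <- 10                       # dimension
theta  <- rnorm(n, mean = 2)       # arbitrary true mean vector
sigma2 <- 1
reps   <- 1e4

mse_mle <- mse_js <- numeric(reps)
for (r in seq_len(reps)) {
  x  <- rnorm(n, mean = theta, sd = sqrt(sigma2))
  js <- (1 - (n - 2) * sigma2 / sum(x^2)) * x      # James-Stein shrinkage
  mse_mle[r] <- sum((x - theta)^2)
  mse_js[r]  <- sum((js - theta)^2)
}
c(MLE = mean(mse_mle), JS = mean(mse_js))          # JS comes out smaller on average
```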
Finding out that MLE, which was indeed the "optimal" estimator for the univariate estimation case, was dethroned for multivariate estimation, was quite a shock at the time, and led to a great interest in shrinkage, better known as regularization in ML parlance. One could note some similarities with mixed models and the concept of "borrowing strength": there is indeed some connection, as discussed here Unified view on shrinkage: what is the relation (if any) between Stein's paradox, ridge regression, and random effects in mixed models? Reference: James, W., Stein, C., Estimation with Quadratic Loss . Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics, 361--379, University of California Press, Berkeley, Calif., 1961 Principal Component Analysis is key to the important topic of dimension reduction, and it's based on the Singular Value Decomposition : for each $N\times p$ real matrix $X$ (although the theorem easily generalizes to complex matrices) we can write $$X=UDV^T$$ where $U$ of size $N \times p$ is orthogonal, $D$ is a $p \times p$ diagonal matrix with nonnegative diagonal elements and $V$ of size $p \times p$ is again orthogonal. For proofs and algorithms on how to compute it see: Golub, G., and Van Loan, C. (1983), Matrix computations , John Hopkins University press, Baltimore. Mercer's theorem is the founding stone for a lot of different ML methods: thin plate splines, support vector machines, the Kriging estimate of a Gaussian random process, etc. Basically, it is one of the two theorems behind the so-called kernel trick . Let $K(x,y):[a,b]\times[a,b]\to\mathbb{R}$ be a symmetric continuous function or kernel. If $K$ is positive semidefinite, then it admits an orthonormal basis of eigenfunctions corresponding to nonnegative eigenvalues: $$K(x,y)=\sum_{i=1}^\infty\gamma_i \phi_i(x)\phi_i(y)$$ The importance of this theorem for ML theory is attested by the number of references it gets in famous texts, such as for example Rasmussen & Williams' text on Gaussian processes . Reference: J. Mercer, Functions of positive and negative type, and their connection with the theory of integral equations. Philosophical Transactions of the Royal Society of London. Series A, Containing Papers of a Mathematical or Physical Character, 209:415-446, 1909 There is also a simpler presentation in Konrad Jörgens, Linear integral operators , Pitman, Boston, 1982. The other theorem which, together with Mercer's theorem, lays out the theoretical foundation of the kernel trick, is the representer theorem . Suppose you have a sample space $\mathcal{X}$ and a symmetric positive semidefinite kernel $K: \mathcal{X} \times \mathcal{X}\to \mathbb{R}$ . Also let $\mathcal{H}_K$ be the RKHS associated with $K$ . Finally, let $S=\{\mathbf{x}_i,y_i\}_{i=1}^n$ be a training sample. The theorem says that among all functions $f\in \mathcal{H}_K$ , which all admit an infinite representation in terms of eigenfunctions of $K$ because of Mercer's theorem, the one that minimizes the regularized risk $$\min_{f \in \mathcal{H}_K} \sum_{i=1}^n L(y_i,f(x_i))+\lambda||f||^2_{\mathcal{H}_K}=\min_{\{c_j\}_1^\infty} \sum_{i=1}^n L\left(y_i,\sum_j^\infty c_j\phi_j(x_i)\right)+\lambda\sum_j^\infty \frac{c_j^2}{\gamma_j}$$ always has a finite representation in the basis formed by the kernel evaluated at the $n$ training points, i.e., $$\hat{f}(x)=\sum_{i=1}^n\alpha_i K(x,x_i)$$ (the theorem is this last statement). Reference: Wahba, G.
1990, Spline Models for Observational Data , SIAM, Philadelphia. The universal approximation theorem has been already cited by user Tobias Windisch and is much less relevant to Machine Learning than it is to functional analysis, even if it may not seem so at a first glance. The problem is that the theorem only says that such a network exists, but: it doesn't give any relation between the size $N$ of the hidden layer and some measure of complexity of the target function $f(x)$ , such as for example Total Variation. If $f(x)=\sin(\omega x):[0,2\pi]\to[-1,1]$ and the $N$ required for a fixed error $\epsilon$ grew exponentially with $\omega$ , then single hidden layer neural networks would be useless. it doesn't say if the network $F(x)$ is learnable . In other words assume that given $f$ and $\epsilon$ , we know that a size $N$ NN will approximate $f$ with the required tolerance in the hypercube. Then by using training sets of size $M$ and a learning procedure such as for example back-prop, do we have any guarantee that by increasing $M$ we can recover $F$ ? finally, and worst of all, it doesn't say anything about the prediction error of neural networks. What we're really interested in is an estimate of the prediction error, at least averaged over all training sets of size $M$ . The theorem doesn't help in this respect. A smaller pain point with Hornik's version of this theorem is that it doesn't hold for ReLU activation functions. However, Bartlett has since proved an extended version which covers this gap. Until now, I guess all the theorems I considered were well known to everybody. So now it's time for the fun stuff :-) Let's see a few Deep Learning theorems: Assumptions: the deep neural network $\Phi(X,W)$ (for fixed $W$ , $\Phi_W(X)$ is the function which associates the inputs of the neural network with its outputs) and the regularization loss $\Theta(W)$ are both sums of positively homogeneous functions of the same degree the loss function $L(Y,\Phi(X,W))$ is convex and at least once differentiable in $X$ , in a compact set $S$ Then: any local minimum for $L(Y,\Phi(X,W))+\lambda\Theta(W)$ such that a subnetwork of $\Phi(X,W)$ has zero weights, is a global minimum ( Theorem 1 ) above a critical network size, local descent will always converge to a global minimum from any initialization ( Theorem 2 ). This is very interesting: CNNs made only of convolutional layers, ReLU, max-pooling, fully connected ReLU and linear layers are positively homogeneous functions, while if we include sigmoid activation functions, this isn't true anymore, which may partly explain the superior performance in some applications of ReLU + max pooling with respect to sigmoids. What's more, the theorems only hold if also $\Theta$ is positively homogeneous in $W$ of the same degree as $\Phi$ . Now, the fun fact is that $l_1$ or $l_2$ regularization, although positively homogeneous, doesn't have the same degree as $\Phi$ (the degree of $\Phi$ , in the simple CNN case mentioned before, increases with the number of layers). Instead, more modern regularization methods such as batch normalization and path-SGD do correspond to a positively homogeneous regularization function of the same degree as $\Phi$ , and dropout, while not fitting this framework exactly, holds strong similarities to it. This may explain why, in order to get high accuracy with CNNs, $l_1$ and $l_2$ regularization are not enough, but we need to employ all kinds of devilish tricks, such as dropout and batch normalization!
To the best of my knowledge, this is the closest thing to an explanation of the efficacy of batch normalization, which is otherwise very obscure, as correctly noted by Al Rahimi in his talk. Another observation that some people make, based on Theorem 1 , is that it could explain why ReLUs work well, even with the problem of dead neurons . According to this intuition, the fact that, during training, some ReLU neurons "die" (go to zero activation and then never recover from that, since for $x<0$ the gradient of ReLU is zero) is "a feature, not a bug", because if we have reached a minimum and a full subnetwork has died, then we've provably reached a global minimum (under the hypotheses of Theorem 1 ). I may be missing something, but I think this interpretation is far-fetched. First of all, during training ReLUs can "die" well before we have reached a local minimum. Secondly, it has to be proved that when ReLU units "die", they always do it over a full subnetwork: the only case where this is trivially true is when you have just one hidden layer, in which case of course each single neuron is a subnetwork. But in general I would be very cautious in seeing "dead neurons" as a good thing. References: B. Haeffele and R. Vidal, Global optimality in neural network training , In IEEE Conference on Computer Vision and Pattern Recognition, 2017. B. Haeffele and R. Vidal. Global optimality in tensor factorization, deep learning, and beyond , arXiv, abs/1506.07540, 2015. Image classification requires learning representations which are invariant (or at least robust, i.e., very weakly sensitive) to various transformations such as location, pose, viewpoint, lighting, expression, etc. which are commonly present in natural images, but do not contain info for the classification task. Same thing for speech recognition: changes in pitch, volume, pace, accent, etc. should not lead to a change in the classification of the word. Operations such as convolution, max pooling, average pooling, etc., used in CNNs, have exactly this goal, so intuitively we expect that they would work for these applications. But do we have theorems to support this intuition? There is a vertical translation invariance theorem , which, notwithstanding the name, has nothing to do with translation in the vertical direction, but it's basically a result which says that features learnt in following layers get more and more invariant, as the number of layers grows. This is opposed to an older horizontal translation invariance theorem which however holds for scattering networks, but not for CNNs. The theorem is very technical, however: assume $f$ (your input image) is square-integrable assume your filter commutes with the translation operator $T_t$ , which maps the input image $f$ to a translated copy of itself $T_t f$ . A learned convolution kernel (filter) satisfies this hypothesis. assume all filters, nonlinearities and pooling in your network satisfy a so-called weak admissibility condition , which is basically some sort of weak regularity and boundedness conditions. These conditions are satisfied by learned convolution kernels (as long as some normalization operation is performed on each layer), ReLU, sigmoid, tanh, etc. nonlinearities, and by average pooling, but not by max-pooling. So it covers some (not all) real world CNN architectures. Assume finally that each layer $n$ has a pooling factor $S_n> 1$ , i.e., pooling is applied in each layer and effectively discards information.
The condition $S_n\geq 1$ would also suffice for a weaker version of the theorem. Indicate with $\Phi^n(f)$ the output of layer $n$ of the CNN, when the input is $f$ . Then finally: $$\lim_{n\to\infty}|||\Phi^n(T_t f)-\Phi^n(f)|||=0$$ (the triple bars are not an error) which basically means that each layer learns features which become more and more invariant, and in the limit of an infinitely deep network we have a perfectly invariant architecture. Since CNNs have a finite number of layers, they're not perfectly translation-invariant, which is something well-known to practitioners. Reference: T. Wiatowski and H. Bolcskei, A Mathematical Theory of Deep Convolutional Neural Networks for Feature Extraction , arXiv:1512.06293v3 . To conclude, numerous bounds for the generalization error of a Deep Neural Network based on its Vapnik-Chervonenkis dimension or on the Rademacher complexity grow with the number of parameters (some even exponentially), which means they can't explain why DNNs work so well in practice even when the number of parameters is considerably larger than the number of training samples. As a matter of fact, VC theory is not very useful in Deep Learning. Conversely, some results from last year bound the generalization error of a DNN classifier with a quantity which is independent of the neural network's depth and size, but depends only on the structure of the training set and the input space. Under some pretty technical assumptions on the learning procedure, and on the training set and input space, but with very few assumptions on the DNN (in particular, CNNs are fully covered), then with probability at least $1-\delta$ , we have $$\text{GE} \leq \sqrt{2\log{2}N_y\frac{\mathcal{N_{\gamma}}}{m}}+\sqrt{\frac{2\log{(1/\delta)}}{m}}$$ where: $\text{GE}$ is the generalization error, defined as the difference between the expected loss (the average loss of the learned classifier on all possible test points) and the empirical loss (just the good ol' training set error) $N_y$ is the number of classes $m$ is the size of the training set $\mathcal{N_{\gamma}}$ is the covering number of the data, a quantity related to the structure of the input space and to the minimal separation among points of different classes in the training set. Reference: J. Sokolic, R. Giryes, G. Sapiro, and M. Rodrigues. Generalization error of invariant classifiers . In AISTATS, 2017
{ "source": [ "https://stats.stackexchange.com/questions/321851", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/-1/" ] }
322,523
The Matérn covariance function is commonly used as a kernel function in Gaussian Processes. It is defined like this $$ {\displaystyle C_{\nu }(d)=\sigma ^{2}{\frac {2^{1-\nu }}{\Gamma (\nu )}}{\Bigg (}{\sqrt {2\nu }}{\frac {d}{\rho }}{\Bigg )}^{\nu }K_{\nu }{\Bigg (}{\sqrt {2\nu }}{\frac {d}{\rho }}{\Bigg )}} $$ where $d$ is a distance function (such as Euclidean distance), $\Gamma$ is the gamma function, $K_\nu$ is the modified Bessel function of the second kind, and $\rho$ and $\nu$ are positive parameters. In practice, $\nu$ is often chosen to be $\frac{3}{2}$ or $\frac{5}{2}$. Often this kernel works better than the standard Gaussian kernel as it is 'less smooth', but apart from that, are there any other reasons why one would prefer this kernel? Some geometric intuition about how it behaves, or some explanation of the seemingly cryptic formula, would be highly appreciated.
In addition to @Dahn's nice answer , I thought I would try to say a little bit more about where the Bessel and Gamma functions come from. One starting point for arriving at the covariance function is Bochner's theorem . Theorem (Bochner) A continuous stationary function $k(x, y) = \widetilde{k}(|x - y|)$ is positive definite if and only if $\widetilde{k}$ is the Fourier transform of a finite positive measure: $$\widetilde{k}(t) = \int_{\mathbb{R}} e^{-i\omega t}\,\mathrm{d}\mu(\omega).$$ From this you can deduce that the Matérn covariance function is obtained as the Fourier transform of $\frac{1}{(1+\omega^2)^p}$ (Source: Durrande). That's all good, but it doesn't really tell us how you arrive at this finite positive measure given by $\frac{1}{(1+\omega^2)^p}$ . Well, it's the (power) spectral density of a stochastic process $f(x)$ . Which stochastic process? It's known that a random process on $\mathbb{R}^d$ with a Matérn covariance function is a solution to the stochastic partial differential equation (SPDE) $$(\kappa^2 - \Delta)^{\alpha/2} X(s) = \varphi W(s),$$ where $W(s)$ is Gaussian white noise with unit variance, $$\Delta = \sum_{i=1}^d \frac{\partial^2}{\partial x^2_i}$$ is the Laplace operator, and $\alpha = \nu + d/2$ (I think this is in Cressie and Wikle ). Why pick this particular SPDE/stochastic process? The origin is in spatial statistics, where it's argued that this is the simplest and most natural covariance that works well in $\mathbb{R}^2$ : The exponential correlation function is a natural correlation in one dimension, since it corresponds to a Markov process. In two dimensions this is no longer so, although the exponential is a common correlation function in geostatistical work. Whittle (1954) determined the correlation corresponding to a stochastic differential equation of Laplace type: $$ \left[ \left(\frac{\partial}{\partial t_1}\right)^2 + \left(\frac{\partial}{\partial t_2}\right)^2 - \kappa^2 \right] X(t_1, t_2) = \epsilon(t_1 , t_2) $$ where $\epsilon$ is white noise. The corresponding discrete lattice process is a second order autoregression. (Source: Guttorp & Gneiting) The family of processes included in the SDE associated with the Matérn equation includes the $AR(1)$ Ornstein–Uhlenbeck model of the velocity of a particle undergoing Brownian motion. More generally, you can define a power spectrum for a family of $AR(p)$ processes for every integer $p$ which also have a Matérn family covariance. This is in the appendix of Rasmussen and Williams. This covariance function is not related to the Matérn cluster process . References Cressie, Noel, and Christopher K. Wikle. Statistics for spatio-temporal data. John Wiley & Sons, 2015. Guttorp, Peter, and Tilmann Gneiting. "Studies in the history of probability and statistics XLIX On the Matern correlation family." Biometrika 93.4 (2006): 989-995. Rasmussen, C. E. and Williams, C. K. I. Gaussian Processes for Machine Learning. The MIT Press, 2006.
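For some hands-on intuition about the formula itself, here is a small base-R sketch; the closed form for $\nu = 3/2$, $C(d)=\sigma^2(1+\sqrt{3}\,d/\rho)\exp(-\sqrt{3}\,d/\rho)$, is a standard identity and serves as a check of the general Bessel-function expression:

```r
# General Matern covariance via the modified Bessel function (valid for d > 0)
matern <- function(d, sigma2 = 1, rho = 1, nu = 1.5) {
  s <- sqrt(2 * nu) * d / rho
  sigma2 * (2^(1 - nu) / gamma(nu)) * s^nu * besselK(s, nu)
}

# Simple closed form for nu = 3/2
matern32 <- function(d, sigma2 = 1, rho = 1) {
  s <- sqrt(3) * d / rho
  sigma2 * (1 + s) * exp(-s)
}

d <- seq(0.1, 3, length.out = 5)
cbind(general = matern(d, nu = 1.5), closed_form = matern32(d))  # the two columns agree
```

Taking $\nu = 1/2$ recovers the exponential covariance mentioned in the quoted passage, and $\nu \to \infty$ gives the Gaussian (squared-exponential) kernel, which is one way to see the "less smooth" behaviour the question refers to.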
{ "source": [ "https://stats.stackexchange.com/questions/322523", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/141210/" ] }
322,639
A neural net learns features of a data set as a means of achieving some goal. When it is done, we may want to know what the neural net learned. What were the features, and why did it care about them? Can someone give some references to the body of work that concerns this problem?
It is true that it's hard to understand what a neural network is learning but there has been a lot of work on that front. We definitely can get some idea of what our network is looking for. Let's consider the case of a convolutional neural net for images. We have the interpretation for our first layer that we are sliding $K$ filters over the image, so our first hidden layer corresponds to the agreement between small chunks of the image and our various filters. We can visualize these filters to see what our first layer of representation is: This picture is of the first layer of filters from an AlexNet and is taken from this wonderful tutorial: http://cs231n.github.io/understanding-cnn/ . This lets us interpret the first hidden layer as learning to represent the image, consisting of raw pixels, as a tensor where each coordinate is the agreement of a filter with a small region of the image. The next layer then is working with these filter activations. It's not so hard to understand the first hidden layer because we can just look at the filters to see how they behave, because they're directly applied to an input image. E.g. let's say you're working with a black and white image (so our filters are 2D rather than 3D) and you have a filter that's something like $$ \begin{bmatrix}0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0\end{bmatrix}. $$ Imagine applying this to a 3x3 region of an image (ignoring the bias term). If every pixel was the same color then you'd get $0$ since they'd cancel out. But if the upper half is different from the lower half, say, then you'll get a potentially large value. This filter, in fact, is an edge detector, and we can figure that out by actually just applying it to images and seeing what happens. But it's a lot harder to understand the deeper layers because the whole problem is we don't know how to interpret what we're applying the filters to. This paper by Erhan et al (2009) agrees with this: they say that first hidden layer visualizations are common (and that was back in 2009) but visualizing the deeper layers is the hard part. From that paper: The main experimental finding of this investigation is very surprising: the response of an internal unit to input images, as a function in image space, appears to be unimodal, or at least that the maximum is found reliably and consistently for all the random initializations tested. This is interesting because finding this dominant mode is relatively easy, and displaying it then provides a good characterization of what the unit does. Chris Olah et al ( https://distill.pub/2017/feature-visualization/ ) build on this and discuss how in general you can (1) generate images that lead to large activations in order to get a sense of what the network is looking for; or (2) take actual input images and see how different parts of the image activate the network. That post focuses on (1). In the image below, taken from that linked article by Olah et al., the authors discuss the different aspects of the network that you can inspect. The left-most image shows the result of optimizing the activation of a particular neuron over the input image space, and so on. I would highly recommend reading that article in its entirety if you want a deeper understanding of this, and by reading its references you should have a great grasp of what's been done with this. Now of course this was all just for images where we as humans can make sense of the inputs. 
If you're working with something harder to interpret, like just a big vector of numbers, then you may not be able to make such cool visualizations, but in principle you could still consider these techniques for assessing the various neurons, layers, and so on.
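As a small illustration of the edge-detector filter discussed above, here is a sketch (assuming NumPy and SciPy; the toy image is my own) that applies that 3x3 filter to a tiny black-and-white image and shows it responds only where pixel values change:

import numpy as np
from scipy.signal import convolve2d

kernel = np.array([[0,  1, 0],
                   [1, -4, 1],
                   [0,  1, 0]], dtype=float)

# Toy 8x8 image: left half dark (0), right half bright (1).
img = np.zeros((8, 8))
img[:, 4:] = 1.0

response = convolve2d(img, kernel, mode="valid")
print(np.round(response, 1))
# Flat regions give 0; the columns straddling the dark/bright boundary give
# nonzero values, which is exactly the "edge detector" behaviour described above.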
{ "source": [ "https://stats.stackexchange.com/questions/322639", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/138757/" ] }
324,981
I am working on theoretical machine learning — on transfer learning, to be specific — for my Ph.D. Out of curiosity, why should I take a course on convex optimization? What take-aways from convex optimization can I use in my research on theoretical machine learning?
Machine learning algorithms use optimization all the time. We minimize loss or error, or maximize some kind of score function. Gradient descent is the "hello world" optimization algorithm covered in probably any machine learning course. It is obvious in the case of regression or classification models, but even with tasks such as clustering we are looking for a solution that optimally fits our data (e.g. k-means minimizes the within-cluster sum of squares). So if you want to understand how machine learning algorithms work, learning more about optimization helps. Moreover, if you need to do things like hyperparameter tuning, then you are also directly using optimization. One could argue that convex optimization shouldn't be that interesting for machine learning since, instead of dealing with convex functions, we often encounter loss surfaces like the one below, which are far from convex. (source: https://www.cs.umd.edu/~tomg/projects/landscapes/ and arXiv:1712.09913 ) Nonetheless, as mentioned in other answers, convex optimization is faster, simpler, and less computationally intensive. For example, gradient descent and similar algorithms are commonly used in machine learning, especially for neural networks, because they "work", scale, and are widely implemented in different software. Nonetheless, they are not the best we can get and have their pitfalls, as discussed in Ali Rahimi's talk at NIPS 2017. On the other hand, non-convex optimization algorithms such as evolutionary algorithms seem to be gaining more and more recognition in the ML community, e.g. training neural networks by neuroevolution seems to be a recent research topic (see also arXiv:1712.07897 ).
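To tie this to something concrete, here is a minimal sketch (plain NumPy, my own toy example) of gradient descent minimizing a convex least-squares loss; convexity is what guarantees that this simple update rule converges to the global minimum:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)

def loss(w):                      # convex in w (a quadratic bowl)
    return 0.5 * np.mean((X @ w - y) ** 2)

def grad(w):
    return X.T @ (X @ w - y) / len(y)

w = np.zeros(3)
lr = 0.1
for step in range(200):
    w -= lr * grad(w)

print("estimated w:", np.round(w, 3))   # close to true_w
print("final loss :", round(loss(w), 5))
# With a non-convex loss (e.g. a deep network), the same update rule can get
# stuck in saddle points or poor local minima, hence the interest both in
# convex analysis and in alternatives such as evolutionary methods.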
{ "source": [ "https://stats.stackexchange.com/questions/324981", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/102192/" ] }
326,065
I am playing with convolutional neural networks using Keras+Tensorflow to classify categorical data. I have a choice of two loss functions: categorical_crossentropy and sparse_categorical_crossentropy . I have a good intuition about the categorical_crossentropy loss function, which is defined as follows: $$ J(\textbf{w}) = -\frac{1}{N} \sum_{i=1}^{N} \left[ y_i \text{log}(\hat{y}_i) + (1-y_i) \text{log}(1-\hat{y}_i) \right] $$ where $\textbf{w}$ refers to the model parameters (e.g. the weights of the neural network), $y_i$ is the true label, and $\hat{y_i}$ is the predicted label. Both labels use the one-hot encoding scheme. Questions: How does the above loss function change for sparse_categorical_crossentropy ? What is the mathematical intuition behind it? When should one be used over the other?
Both categorical cross-entropy and sparse categorical cross-entropy have the same loss function, which you have mentioned above. The only difference is the format in which you supply $Y_i$ (i.e., the true labels). If your $Y_i$ 's are one-hot encoded, use categorical_crossentropy. Examples (for a 3-class classification): [1,0,0] , [0,1,0], [0,0,1] But if your $Y_i$ 's are integers, use sparse_categorical_crossentropy. Examples for the above 3-class classification problem: [1] , [2], [3] The usage entirely depends on how you load your dataset. One advantage of using sparse categorical cross-entropy is that it saves memory as well as computation time, because it simply uses a single integer for a class, rather than a whole vector.
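A small numerical check of this claim (plain NumPy, not Keras itself; the arrays are my own dummy data) showing that the two losses compute the same number and differ only in the label format:

import numpy as np

# Predicted class probabilities for 3 samples, 3 classes (rows sum to 1).
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.2, 0.3, 0.5]])

# Same ground truth in both formats.
y_onehot = np.array([[1, 0, 0],
                     [0, 1, 0],
                     [0, 0, 1]])
y_int = np.array([0, 1, 2])

categorical_ce = -np.mean(np.sum(y_onehot * np.log(probs), axis=1))
sparse_ce = -np.mean(np.log(probs[np.arange(len(y_int)), y_int]))

print(categorical_ce, sparse_ce)   # identical values
# The "sparse" version just indexes the probability of the true class directly,
# so it never needs to build (or store) the one-hot matrix.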
{ "source": [ "https://stats.stackexchange.com/questions/326065", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/148774/" ] }
326,253
The formula for the conditional probability of $\text{A}$ happening given that $\text{B}$ has happened is:$$ P\left(\text{A}~\middle|~\text{B}\right)=\frac{P\left(\text{A} \cap \text{B}\right)}{P\left(\text{B}\right)}. $$ My textbook explains the intuition behind this in terms of a Venn diagram. Given that $\text{B}$ has occurred, the only way for $\text{A}$ to occur is for the event to fall in the intersection of $\text{A}$ and $\text{B}$. In that case, wouldn't the probability of $P\left(\text{A} \middle| \text{B}\right)$ simply be equal to the probability of $\text{A}$ intersection $\text{B}$, since that's the only way the event could happen? What am I missing?
A good intuition is that we are now in the universe in which B occurred, the full circle. Of that circle, how much is also A? That is why you divide by $P(B)$: the intersection probability gets rescaled so that probabilities within the new, smaller universe of B still add up to one. This intuitive explanation was offered in a class by Marc Herman https://people.math.rochester.edu/faculty/herman/
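A quick simulation (my own toy example with a fair die, using NumPy) makes the difference between $P(A \cap B)$ and $P(A \mid B)$ visible:

import numpy as np

rng = np.random.default_rng(0)
rolls = rng.integers(1, 7, size=100_000)   # fair six-sided die

A = rolls >= 5            # event A: roll is 5 or 6
B = rolls % 2 == 0        # event B: roll is even

p_A_and_B = np.mean(A & B)            # ~ 1/6  (only the roll 6)
p_B = np.mean(B)                      # ~ 1/2
p_A_given_B = np.mean(A[B])           # restrict to the "universe" where B happened

print(p_A_and_B, p_A_given_B, p_A_and_B / p_B)
# p_A_given_B ~ 1/3, which matches P(A∩B)/P(B) = (1/6)/(1/2),
# not the raw intersection probability of 1/6.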
{ "source": [ "https://stats.stackexchange.com/questions/326253", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/169299/" ] }
326,350
I am trying to use squared loss to do binary classification on a toy data set. I am using the mtcars data set, using miles per gallon and weight to predict transmission type. The plot below shows the two transmission types in different colors, and the decision boundaries generated by the different loss functions. The squared loss is $\sum_i (y_i-p_i)^2$ where $y_i$ is the ground truth label (0 or 1) and $p_i$ is the predicted probability $p_i=\text{Logit}^{-1}(\beta^Tx_i)$ . In other words, I replace the logistic loss with squared loss in a classification setting; the other parts are the same. For a toy example with the mtcars data, in many cases I get a model "similar" to logistic regression (see the following figure, with random seed 0). But in some cases (if we do set.seed(1) ), squared loss does not seem to work well. What is happening here? Does the optimization not converge? Is logistic loss easier to optimize compared to squared loss? Any help would be appreciated. Code

d=mtcars[,c("am","mpg","wt")]
plot(d$mpg, d$wt, col=factor(d$am))
lg_fit=glm(am~., d, family = binomial())
abline(-lg_fit$coefficients[1]/lg_fit$coefficients[3],
       -lg_fit$coefficients[2]/lg_fit$coefficients[3])
grid()

# sq loss
lossSqOnBinary<-function(x,y,w){
  p=plogis(x %*% w)
  return(sum((y-p)^2))
}

# this random seed is important for reproducing the problem
set.seed(0)
x0=runif(3)
x=as.matrix(cbind(1,d[,2:3]))
y=d$am

opt=optim(x0, lossSqOnBinary, method="BFGS", x=x, y=y)
abline(-opt$par[1]/opt$par[3],
       -opt$par[2]/opt$par[3], lty=2)
legend(25, 5, c("logistic loss","squared loss"), lty=c(1,2))
It seems like you've fixed the issue in your particular example but I think it's still worth a more careful study of the difference between least squares and maximum likelihood logistic regression. Let's get some notation. Let $L_S(y_i, \hat y_i) = \frac 12(y_i - \hat y_i)^2$ and $L_L(y_i, \hat y_i) = y_i \log \hat y_i + (1 - y_i) \log(1 - \hat y_i)$ . If we're doing maximum likelihood (or minimum negative log likelihood as I'm doing here), we have $$ \hat \beta_L := \text{argmin}_{b \in \mathbb R^p} -\sum_{i=1}^n y_i \log g^{-1}(x_i^T b) + (1-y_i)\log(1 - g^{-1}(x_i^T b)) $$ with $g$ being our link function. Alternatively we have $$ \hat \beta_S := \text{argmin}_{b \in \mathbb R^p} \frac 12 \sum_{i=1}^n (y_i - g^{-1}(x_i^T b))^2 $$ as the least squares solution. Thus $\hat \beta_S$ minimizes $L_S$ and similarly for $L_L$ . Let $f_S$ and $f_L$ be the objective functions corresponding to minimizing $L_S$ and $L_L$ respectively as is done for $\hat \beta_S$ and $\hat \beta_L$ . Finally, let $h = g^{-1}$ so $\hat y_i = h(x_i^T b)$ . Note that if we're using the canonical link we've got $$ h(z) = \frac{1}{1+e^{-z}} \implies h'(z) = h(z) (1 - h(z)). $$ For regular logistic regression we have $$ \frac{\partial f_L}{\partial b_j} = -\sum_{i=1}^n h'(x_i^T b)x_{ij} \left( \frac{y_i}{h(x_i^T b)} - \frac{1-y_i}{1 - h(x_i^T b)}\right). $$ Using $h' = h \cdot (1 - h)$ we can simplify this to $$ \frac{\partial f_L}{\partial b_j} = -\sum_{i=1}^n x_{ij} \left( y_i(1 - \hat y_i) - (1-y_i)\hat y_i\right) = -\sum_{i=1}^n x_{ij}(y_i - \hat y_i) $$ so $$ \nabla f_L(b) = -X^T (Y - \hat Y). $$ Next let's do second derivatives. The Hessian $$H_L:= \frac{\partial^2 f_L}{\partial b_j \partial b_k} = \sum_{i=1}^n x_{ij} x_{ik} \hat y_i (1 - \hat y_i). $$ This means that $H_L = X^T A X$ where $A = \text{diag} \left(\hat Y (1 - \hat Y)\right)$ . $H_L$ does depend on the current fitted values $\hat Y$ but $Y$ has dropped out, and $H_L$ is PSD. Thus our optimization problem is convex in $b$ . Let's compare this to least squares. $$ \frac{\partial f_S}{\partial b_j} = - \sum_{i=1}^n (y_i - \hat y_i) h'(x^T_i b)x_{ij}. $$ This means we have $$ \nabla f_S(b) = -X^T A (Y - \hat Y). $$ This is a vital point: the gradient is almost the same except for all $i$ $\hat y_i (1 - \hat y_i) \in (0,1)$ so basically we're flattening the gradient relative to $\nabla f_L$ . This'll make convergence slower. For the Hessian we can first write $$ \frac{\partial f_S}{\partial b_j} = - \sum_{i=1}^n x_{ij}(y_i - \hat y_i) \hat y_i (1 - \hat y_i) = - \sum_{i=1}^n x_{ij}\left( y_i \hat y_i - (1+y_i)\hat y_i^2 + \hat y_i^3\right). $$ This leads us to $$ H_S:=\frac{\partial^2 f_S}{\partial b_j \partial b_k} = - \sum_{i=1}^n x_{ij} x_{ik} h'(x_i^T b) \left( y_i - 2(1+y_i)\hat y_i + 3 \hat y_i^2 \right). $$ Let $B = \text{diag} \left( y_i - 2(1+y_i)\hat y_i + 3 \hat y_i ^2 \right)$ . We now have $$ H_S = -X^T A B X. $$ Unfortunately for us, the weights in $B$ are not guaranteed to be non-negative: if $y_i = 0$ then $y_i - 2(1+y_i)\hat y_i + 3 \hat y_i ^2 = \hat y_i (3 \hat y_i - 2)$ which is positive iff $\hat y_i > \frac 23$ . Similarly, if $y_i = 1$ then $y_i - 2(1+y_i)\hat y_i + 3 \hat y_i ^2 = 1-4 \hat y_i + 3 \hat y_i^2$ which is positive when $\hat y_i < \frac 13$ (it's also positive for $\hat y_i > 1$ but that's not possible). This means that $H_S$ is not necessarily PSD, so not only are we squashing our gradients which will make learning harder, but we've also messed up the convexity of our problem. 
All in all, it's no surprise that least squares logistic regression struggles sometimes, and in your example you've got enough fitted values close to $0$ or $1$ so that $\hat y_i (1 - \hat y_i)$ can be pretty small and thus the gradient is quite flattened. Connecting this to neural networks, even though this is but a humble logistic regression I think with squared loss you're experiencing something like what Goodfellow, Bengio, and Courville are referring to in their Deep Learning book when they write the following: One recurring theme throughout neural network design is that the gradient of the cost function must be large and predictable enough to serve as a good guide for the learning algorithm. Functions that saturate (become very flat) undermine this objective because they make the gradient become very small. In many cases this happens because the activation functions used to produce the output of the hidden units or the output units saturate. The negative log-likelihood helps to avoid this problem for many models. Many output units involve an exp function that can saturate when its argument is very negative. The log function in the negative log-likelihood cost function undoes the exp of some output units. We will discuss the interaction between the cost function and the choice of output unit in Sec. 6.2.2. and, in 6.2.2, Unfortunately, mean squared error and mean absolute error often lead to poor results when used with gradient-based optimization. Some output units that saturate produce very small gradients when combined with these cost functions. This is one reason that the cross-entropy cost function is more popular than mean squared error or mean absolute error, even when it is not necessary to estimate an entire distribution $p(y|x)$ . (both excerpts are from chapter 6).
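A small numerical check of the gradient comparison above (plain NumPy, on my own simulated data rather than mtcars): the squared-loss gradient is the logistic-loss gradient with each residual additionally multiplied by yhat*(1 - yhat), so it collapses whenever the fitted values approach 0 or 1.

import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 2
X = rng.normal(size=(n, p))
y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

b = np.array([4.0, 0.0])           # fairly extreme coefficients -> yhat near 0/1
yhat = sigmoid(X @ b)

grad_logistic = -X.T @ (y - yhat)                        # -X^T (Y - Yhat)
grad_squared = -X.T @ ((y - yhat) * yhat * (1 - yhat))   # -X^T A (Y - Yhat)

print("logistic-loss gradient:", np.round(grad_logistic, 3))
print("squared-loss gradient :", np.round(grad_squared, 3))
print("mean of yhat*(1-yhat) :", round(np.mean(yhat * (1 - yhat)), 3))
# The squared-loss gradient is much smaller in norm because of the extra
# yhat*(1-yhat) factor, which is what makes its optimization sluggish.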
{ "source": [ "https://stats.stackexchange.com/questions/326350", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/113777/" ] }
327,746
Is the following true? low bias = high variance high bias = low variance I understand high and low bias but then how is variance different? Or are the above synonyms?
No. You can have both high or both low at the same time. Here is an illustrative example. picture and article source I also recommend reading the article where this picture comes from. The reason you have that impression is that in the "early age" of machine learning there was a concept called the bias–variance trade-off (as @Kodiologist mentioned, this concept is still true and a fundamental concept in tuning models today): when you increase model complexity, variance increases and bias is reduced; when you regularize the model, bias increases and variance is reduced. In Andrew Ng's recent Deep Learning Coursera lecture, he mentioned that in recent deep learning practice (with huge amounts of data), people talk less about the trade-off. Instead, there are ways to reduce only the variance without increasing the bias (for example, increasing the training data size), and vice versa.
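A small simulation (my own toy setup with NumPy, not from the linked article) illustrating that last point: at a fixed model complexity, adding more training data reduces variance while leaving the bias essentially unchanged.

import numpy as np

rng = np.random.default_rng(0)

def fit_and_predict(n_train, degree, x0=0.25, reps=2000):
    """Refit a degree-`degree` polynomial many times; return (bias^2, variance) at x0."""
    true_f = lambda x: np.sin(2 * np.pi * x)
    preds = np.empty(reps)
    for r in range(reps):
        x = rng.uniform(0, 1, n_train)
        y = true_f(x) + 0.3 * rng.normal(size=n_train)
        coefs = np.polyfit(x, y, degree)
        preds[r] = np.polyval(coefs, x0)
    bias2 = (preds.mean() - true_f(x0)) ** 2
    return bias2, preds.var()

for n in (20, 200):
    b2, v = fit_and_predict(n, degree=3)
    print(f"n = {n:4d}  bias^2 = {b2:.4f}  variance = {v:.4f}")
# Bias barely moves (it is set by the degree-3 model class), while the
# variance drops roughly in proportion to the extra data: no trade-off needed.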
{ "source": [ "https://stats.stackexchange.com/questions/327746", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/194748/" ] }
327,800
What is the best recommended book on Bayesian Non parametric approaches ? Specifically something which also tackles regression problems such as Gaussian processes.
No. You can have both high or both low at same time. Here is an illustrate example. picture and article source I also recommend you to read the article where this picture comes from. The reason you have such impression is that in "early age" of machine learning, there is a concept called bias variance trade-off (as @Kodiologist mentioned, this concept is still true and a fundamental concept of tuning models today.) When increase model complexity, variance is increased and bias is reduced when regularize the model, bias is increased and variance is reduced. In Andrew Ng's recent Deep Learning Coursera lecture, he mentioned that in recent deep learning framework (with huge amount of data), people talk less about trade off. In stead, there are ways to only reduce variance and do not increase bias (For example, increase training data size), as vice versa.
{ "source": [ "https://stats.stackexchange.com/questions/327800", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/78448/" ] }
327,831
I am testing the accuracy of discrete variable prediction (>= 2 possible outcomes). I've seen things like using a confusion matrix or ROC curve for binary outcomes, but not much for > 2 outcome variables. What are good measures of accuracy for discrete variables other than classification accuracy?
No. You can have both high or both low at same time. Here is an illustrate example. picture and article source I also recommend you to read the article where this picture comes from. The reason you have such impression is that in "early age" of machine learning, there is a concept called bias variance trade-off (as @Kodiologist mentioned, this concept is still true and a fundamental concept of tuning models today.) When increase model complexity, variance is increased and bias is reduced when regularize the model, bias is increased and variance is reduced. In Andrew Ng's recent Deep Learning Coursera lecture, he mentioned that in recent deep learning framework (with huge amount of data), people talk less about trade off. In stead, there are ways to only reduce variance and do not increase bias (For example, increase training data size), as vice versa.
{ "source": [ "https://stats.stackexchange.com/questions/327831", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/138752/" ] }
328,630
Consider a good old regression problem with $p$ predictors and sample size $n$. The usual wisdom is that OLS estimator will overfit and will generally be outperformed by the ridge regression estimator: $$\hat\beta = (X^\top X + \lambda I)^{-1}X^\top y.$$ It is standard to use cross-validation to find an optimal regularization parameter $\lambda$. Here I use 10-fold CV. Clarification update: when $n<p$, by "OLS estimator" I understand "minimum-norm OLS estimator" given by $$\hat\beta_\text{OLS} = (X^\top X)^+X^\top y = X^+ y.$$ I have a dataset with $n=80$ and $p>1000$. All predictors are standardized, and there are quite a few that (alone) can do a good job in predicting $y$. If I randomly select a small-ish, say $p=50<n$, number of predictors, I get a reasonable CV curve: large values of $\lambda$ yield zero R-squared, small values of $\lambda$ yield negative R-squared (because of overfitting) and there is some maximum in between. For $p=100>n$ the curve looks similar. However, for $p$ much larger than that, e.g. $p=1000$, I do not get any maximum at all: the curve plateaus, meaning that OLS with $\lambda\to 0$ performs as good as ridge regression with optimal $\lambda$. How is it possible and what does it say about my dataset? Am I missing something obvious or is it indeed counter-intuitive? How can there be any qualitative difference between $p=100$ and $p=1000$ given that both are larger than $n$? Under what conditions does minimal-norm OLS solution for $n<p$ not overfit? Update: There was some disbelief in the comments, so here is a reproducible example using glmnet . I use Python but R users will easily adapt the code. %matplotlib notebook import numpy as np import pylab as plt import seaborn as sns; sns.set() import glmnet_python # from https://web.stanford.edu/~hastie/glmnet_python/ from cvglmnet import cvglmnet; from cvglmnetPlot import cvglmnetPlot # 80x1112 data table; first column is y, rest is X. All variables are standardized mydata = np.loadtxt('../q328630.txt') # file is here https://pastebin.com/raw/p1cCCYBR y = mydata[:,:1] X = mydata[:,1:] # select p here (try 1000 and 100) p = 1000 # randomly selecting p variables out of 1111 np.random.seed(42) X = X[:, np.random.permutation(X.shape[1])[:p]] fit = cvglmnet(x = X.copy(), y = y.copy(), alpha = 0, standardize = False, intr = False, lambdau=np.array([.0001, .001, .01, .1, 1, 10, 100, 1000, 10000, 100000])) cvglmnetPlot(fit) plt.gcf().set_size_inches(6,3) plt.tight_layout()
A natural regularization happens because of the presence of many small components in the theoretical PCA of $x$. These small components are implicitly used to fit the noise using small coefficients. When using minimum norm OLS, you fit the noise with many small independent components and this has a regularizing effect equivalent to Ridge regularization. This regularization is often too strong, and it is possible to compensate it using "anti-regularization" know as negative Ridge . In that case, you will see the minimum of the MSE curve appears for negative values of $\lambda$. By theoretical PCA, I mean: Let $x\sim N(0,\Sigma)$ a multivariate normal distribution. There is a linear isometry $f$ such as $u=f(x)\sim N(0,D)$ where $D$ is diagonal: the components of $u$ are independent. $D$ is simply obtained by diagonalizing $\Sigma$. Now the model $y=\beta.x+\epsilon$ can be written $y=f(\beta).f(x)+\epsilon$ (a linear isometry preserves dot product). If you write $\gamma=f(\beta)$, the model can be written $y=\gamma.u+\epsilon$. Furthermore $\|\beta\|=\|\gamma\|$ hence fitting methods like Ridge or minimum norm OLS are perfectly isomorphic: the estimator of $y=\gamma.u+\epsilon$ is the image by $f$ of the estimator of $y=\beta.x+\epsilon$. Theoretical PCA transforms non independent predictors into independent predictors. It is only loosely related to empirical PCA where you use the empirical covariance matrix (that differs a lot from the theoretical one with small sample size). Theoretical PCA is not practically computable but is only used here to interpret the model in an orthogonal predictor space. Let's see what happens when we append many small variance independent predictors to a model: Theorem Ridge regularization with coefficient $\lambda$ is equivalent (when $p\rightarrow\infty$) to: adding $p$ fake independent predictors (centred and identically distributed) each with variance $\frac{\lambda}{p}$ fitting the enriched model with minimum norm OLS estimator keeping only the parameters for the true predictors (sketch of) Proof We are going to prove that the cost functions are asymptotically equal. Let's split the model into real and fake predictors: $y=\beta x+\beta'x'+\epsilon$. The cost function of Ridge (for the true predictors) can be written: $$\mathrm{cost}_\lambda=\|\beta\|^2+\frac{1}{\lambda}\|y-X\beta\|^2$$ When using minimum norm OLS, the response is fitted perfectly: the error term is 0. The cost function is only about the norm of the parameters. It can be split into the true parameters and the fake ones: $$\mathrm{cost}_{\lambda,p}=\|\beta\|^2+\inf\{\|\beta'\|^2 \mid X'\beta'=y-X\beta\}$$ In the right expression, the minimum norm solution is given by: $$\beta'=X'^+(y-X\beta )$$ Now using SVD for $X'$: $$X'=U\Sigma V$$ $$X'^{+}=V^\top\Sigma^{+} U^\top$$ We see that the norm of $\beta'$ essentially depends on the singular values of $X'^+$ that are the reciprocals of the singular values of $X'$. The normalized version of $X'$ is $\sqrt{p/\lambda} X'$. I've looked at literature and singular values of large random matrices are well known. For $p$ and $n$ large enough, minimum $s_\min$ and maximum $s_\max$ singular values are approximated by (see theorem 1.1 ): $$s_\min(\sqrt{p/\lambda}X')\approx \sqrt p\left(1-\sqrt{n/p}\right)$$ $$s_\max(\sqrt{p/\lambda}X')\approx \sqrt p \left(1+\sqrt{n/p}\right)$$ Since, for large $p$, $\sqrt{n/p}$ tends towards 0, we can just say that all singular values are approximated by $\sqrt p$. 
Thus: $$\|\beta'\|\approx\frac{1}{\sqrt\lambda}\|y-X\beta\|$$ Finally: $$\mathrm{cost}_{\lambda,p}\approx\|\beta\|^2+\frac{1}{\lambda}\|y-X\beta\|^2=\mathrm{cost}_\lambda$$ Note : it does not matter if you keep the coefficients of the fake predictors in your model. The variance introduced by $\beta'x'$ is $\frac{\lambda}{p}\|\beta'\|^2\approx\frac{1}{p}\|y-X\beta\|^2\approx\frac{n}{p}MSE(\beta)$. Thus you increase your MSE by a factor $1+n/p$ only which tends towards 1 anyway. Somehow you don't need to treat the fake predictors differently than the real ones. Now, back to @amoeba's data. After applying theoretical PCA to $x$ (assumed to be normal), $x$ is transformed by a linear isometry into a variable $u$ whose components are independent and sorted in decreasing variance order. The problem $y=\beta x+\epsilon$ is equivalent the transformed problem $y=\gamma u+\epsilon$. Now imagine the variance of the components look like: Consider many $p$ of the last components, call the sum of their variance $\lambda$. They each have a variance approximatively equal to $\lambda/p$ and are independent. They play the role of the fake predictors in the theorem. This fact is clearer in @jonny's model: only the first component of theoretical PCA is correlated to $y$ (it is proportional $\overline{x}$) and has huge variance. All the other components (proportional to $x_i-\overline{x}$) have comparatively very small variance (write the covariance matrix and diagonalize it to see this) and play the role of fake predictors. I calculated that the regularization here corresponds (approx.) to prior $N(0,\frac{1}{p^2})$ on $\gamma_1$ while the true $\gamma_1^2=\frac{1}{p}$. This definitely over-shrinks. This is visible by the fact that the final MSE is much larger than the ideal MSE. The regularization effect is too strong. It is sometimes possible to improve this natural regularization by Ridge. First you sometimes need $p$ in the theorem really big (1000, 10000...) to seriously rival Ridge and the finiteness of $p$ is like an imprecision. But it also shows that Ridge is an additional regularization over a naturally existing implicit regularization and can thus have only a very small effect. Sometimes this natural regularization is already too strong and Ridge may not even be an improvement. More than this, it is better to use anti-regularization: Ridge with negative coefficient. This shows MSE for @jonny's model ($p=1000$), using $\lambda\in\mathbb{R}$:
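For readers who want to poke at this phenomenon without the original data, here is a small self-contained sketch (NumPy only; the simulated design and coefficients are my own stand-in for the setup discussed above) comparing the minimum-norm OLS estimator $X^+y$ with ridge at several values of $\lambda$ when $n < p$:

import numpy as np

rng = np.random.default_rng(0)
n, p = 80, 1000

# A "dense" model: y depends weakly on every coordinate.
beta = np.ones(p) / np.sqrt(p)
X = rng.normal(size=(n, p))
y = X @ beta + rng.normal(size=n)
X_test = rng.normal(size=(1000, p))
y_test = X_test @ beta + rng.normal(size=1000)

def ridge(lmbda):
    return np.linalg.solve(X.T @ X + lmbda * np.eye(p), X.T @ y)

beta_min_norm = np.linalg.pinv(X) @ y          # minimum-norm OLS
print("min-norm OLS test MSE:", round(np.mean((y_test - X_test @ beta_min_norm) ** 2), 3))
for lmbda in (0.01, 1.0, 100.0, 10000.0):
    b = ridge(lmbda)
    print(f"ridge lambda={lmbda:8g} test MSE:", round(np.mean((y_test - X_test @ b) ** 2), 3))
# With p >> n the minimum-norm solution already behaves like a heavily
# regularized fit, so small-lambda ridge barely improves on it.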
{ "source": [ "https://stats.stackexchange.com/questions/328630", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/28666/" ] }
328,667
One of the things that makes them super-slow to train is that they often use two nested loops, iterating over the data through all time steps for every single iteration, calculating the loss and cost at the end, and then back-propagating that error to get the gradients to update all weight matrices. I mean, we can't get rid of iterating for some number of epochs to fit the model better and better, but isn't there any known method to just feed the whole sequence at once and iterate for some epochs while training? I know this is impossible if we are talking about something like language models, where the next input character is based on the probability produced by the softmax activation from the previous time step.
A natural regularization happens because of the presence of many small components in the theoretical PCA of $x$. These small components are implicitly used to fit the noise using small coefficients. When using minimum norm OLS, you fit the noise with many small independent components and this has a regularizing effect equivalent to Ridge regularization. This regularization is often too strong, and it is possible to compensate it using "anti-regularization" know as negative Ridge . In that case, you will see the minimum of the MSE curve appears for negative values of $\lambda$. By theoretical PCA, I mean: Let $x\sim N(0,\Sigma)$ a multivariate normal distribution. There is a linear isometry $f$ such as $u=f(x)\sim N(0,D)$ where $D$ is diagonal: the components of $u$ are independent. $D$ is simply obtained by diagonalizing $\Sigma$. Now the model $y=\beta.x+\epsilon$ can be written $y=f(\beta).f(x)+\epsilon$ (a linear isometry preserves dot product). If you write $\gamma=f(\beta)$, the model can be written $y=\gamma.u+\epsilon$. Furthermore $\|\beta\|=\|\gamma\|$ hence fitting methods like Ridge or minimum norm OLS are perfectly isomorphic: the estimator of $y=\gamma.u+\epsilon$ is the image by $f$ of the estimator of $y=\beta.x+\epsilon$. Theoretical PCA transforms non independent predictors into independent predictors. It is only loosely related to empirical PCA where you use the empirical covariance matrix (that differs a lot from the theoretical one with small sample size). Theoretical PCA is not practically computable but is only used here to interpret the model in an orthogonal predictor space. Let's see what happens when we append many small variance independent predictors to a model: Theorem Ridge regularization with coefficient $\lambda$ is equivalent (when $p\rightarrow\infty$) to: adding $p$ fake independent predictors (centred and identically distributed) each with variance $\frac{\lambda}{p}$ fitting the enriched model with minimum norm OLS estimator keeping only the parameters for the true predictors (sketch of) Proof We are going to prove that the cost functions are asymptotically equal. Let's split the model into real and fake predictors: $y=\beta x+\beta'x'+\epsilon$. The cost function of Ridge (for the true predictors) can be written: $$\mathrm{cost}_\lambda=\|\beta\|^2+\frac{1}{\lambda}\|y-X\beta\|^2$$ When using minimum norm OLS, the response is fitted perfectly: the error term is 0. The cost function is only about the norm of the parameters. It can be split into the true parameters and the fake ones: $$\mathrm{cost}_{\lambda,p}=\|\beta\|^2+\inf\{\|\beta'\|^2 \mid X'\beta'=y-X\beta\}$$ In the right expression, the minimum norm solution is given by: $$\beta'=X'^+(y-X\beta )$$ Now using SVD for $X'$: $$X'=U\Sigma V$$ $$X'^{+}=V^\top\Sigma^{+} U^\top$$ We see that the norm of $\beta'$ essentially depends on the singular values of $X'^+$ that are the reciprocals of the singular values of $X'$. The normalized version of $X'$ is $\sqrt{p/\lambda} X'$. I've looked at literature and singular values of large random matrices are well known. For $p$ and $n$ large enough, minimum $s_\min$ and maximum $s_\max$ singular values are approximated by (see theorem 1.1 ): $$s_\min(\sqrt{p/\lambda}X')\approx \sqrt p\left(1-\sqrt{n/p}\right)$$ $$s_\max(\sqrt{p/\lambda}X')\approx \sqrt p \left(1+\sqrt{n/p}\right)$$ Since, for large $p$, $\sqrt{n/p}$ tends towards 0, we can just say that all singular values are approximated by $\sqrt p$. 
Thus: $$\|\beta'\|\approx\frac{1}{\sqrt\lambda}\|y-X\beta\|$$ Finally: $$\mathrm{cost}_{\lambda,p}\approx\|\beta\|^2+\frac{1}{\lambda}\|y-X\beta\|^2=\mathrm{cost}_\lambda$$ Note : it does not matter if you keep the coefficients of the fake predictors in your model. The variance introduced by $\beta'x'$ is $\frac{\lambda}{p}\|\beta'\|^2\approx\frac{1}{p}\|y-X\beta\|^2\approx\frac{n}{p}MSE(\beta)$. Thus you increase your MSE by a factor $1+n/p$ only which tends towards 1 anyway. Somehow you don't need to treat the fake predictors differently than the real ones. Now, back to @amoeba's data. After applying theoretical PCA to $x$ (assumed to be normal), $x$ is transformed by a linear isometry into a variable $u$ whose components are independent and sorted in decreasing variance order. The problem $y=\beta x+\epsilon$ is equivalent the transformed problem $y=\gamma u+\epsilon$. Now imagine the variance of the components look like: Consider many $p$ of the last components, call the sum of their variance $\lambda$. They each have a variance approximatively equal to $\lambda/p$ and are independent. They play the role of the fake predictors in the theorem. This fact is clearer in @jonny's model: only the first component of theoretical PCA is correlated to $y$ (it is proportional $\overline{x}$) and has huge variance. All the other components (proportional to $x_i-\overline{x}$) have comparatively very small variance (write the covariance matrix and diagonalize it to see this) and play the role of fake predictors. I calculated that the regularization here corresponds (approx.) to prior $N(0,\frac{1}{p^2})$ on $\gamma_1$ while the true $\gamma_1^2=\frac{1}{p}$. This definitely over-shrinks. This is visible by the fact that the final MSE is much larger than the ideal MSE. The regularization effect is too strong. It is sometimes possible to improve this natural regularization by Ridge. First you sometimes need $p$ in the theorem really big (1000, 10000...) to seriously rival Ridge and the finiteness of $p$ is like an imprecision. But it also shows that Ridge is an additional regularization over a naturally existing implicit regularization and can thus have only a very small effect. Sometimes this natural regularization is already too strong and Ridge may not even be an improvement. More than this, it is better to use anti-regularization: Ridge with negative coefficient. This shows MSE for @jonny's model ($p=1000$), using $\lambda\in\mathbb{R}$:
{ "source": [ "https://stats.stackexchange.com/questions/328667", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/195395/" ] }
328,668
In The Idiot Brain: A Neuroscientist Explains What Your Head is Really Up To , Dean Burnett wrote The correlation between height and intelligence is usually cited as being about $0.2$, meaning height and intelligence seem to be associated in only $1$ in $5$ people. To me, this sounds wrong: I understand the correlation more like the (lack of) error we get when we try to predict one measure (here intelligence) if the only thing we know about that person is the other measure (here height). If the correlation is $1$ or $-1$, then we don't make any error in our prediction; if the correlation is $0.8$, then there is more error. Thus the correlation would apply to anyone, not just $1$ in $5$ people. I have looked at this question but I am not good enough at maths to understand the answer. This answer , which talks about the strength of the linear relationship, seems in line with my understanding, but I am not sure.
The quoted passage is indeed incorrect. A correlation coefficient quantifies the degree of association throughout an entire population (or sample, in the case of the sample correlation coefficient). It does not divide the population into parts with one part showing an association and the other part not. It could be the case that the population actually consists of two subpopulations with different degrees of association, but a correlation coefficient alone doesn't imply this.
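A short simulation (NumPy, my own illustration) shows why the "1 in 5 people" reading fails: a correlation of 0.2 describes a weak tendency across everyone, with no hidden subgroup in which the association switches on or off.

import numpy as np

rng = np.random.default_rng(0)
r = 0.2
cov = np.array([[1.0, r],
                [r, 1.0]])
height, iq = rng.multivariate_normal([0, 0], cov, size=100_000).T

print("sample correlation:", round(np.corrcoef(height, iq)[0, 1], 3))
print("mean IQ | shorter than average:", round(iq[height < 0].mean(), 3))
print("mean IQ | taller than average :", round(iq[height > 0].mean(), 3))
# Every individual is drawn from the same weakly-associated population;
# r = 0.2 roughly means taller-than-average people are, on average, only
# slightly above average on the second variable (about 0.16 SD here).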
{ "source": [ "https://stats.stackexchange.com/questions/328668", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/195399/" ] }
329,521
I have a bet with a co-worker that out of 50 ping pong games (first to win 21 points, win by 2), I will win all 50. So far we've played 15 games and on average I win 58% of the points, plus I've won all the games so far. So we're wondering if I have a 58% chance of winning a point and he has a 42% chance of winning a point, what's the percent chance that I would win the game? Is there a formula that we can plug in difference % chances? We've googled all over and even asked the data scientists at our company but couldn't find a straight answer. Edit: Wow, I am blown away by the thoroughness of responses. Thank you all so much!!! In case people are curious, I have an update to how my bet is going: I've now won 18 out of 50 games, so I need to win 32 more games. I've won 58.7% of all points and my opponent has therefore won 41.3% of points. The standard deviation for my opponent is 3.52, his average score is 14.83, and his median score is 15.50. Below is a screenshot of the score of each game so far. I can keep updating as the bet goes on, if people are interested. Edit #2 : Unfortunately we've only been able to play a few more games, below are the results. I'm just going to keep replacing the picture so I don't have a bunch of screenshots of the score. Final Update : I finally lost to my co-worker on game #28. He beat me 21-13. Thanks for all of your help!
The analysis is complicated by the prospect that the game goes into "overtime" in order to win by a margin of at least two points. (Otherwise it would be as simple as the solution shown at https://stats.stackexchange.com/a/327015/919 .) I will show how to visualize the problem and use that to break it down into readily-computed contributions to the answer. The result, although a bit messy, is manageable. A simulation bears out its correctness. Let $p$ be your probability of winning a point. Assume all points are independent. The chance that you win a game can be broken down into (nonoverlapping) events according to how many points your opponent has at the end assuming you don't go into overtime ($0,1,\ldots, 19$) or you go into overtime. In the latter case it is (or will become) obvious that at some stage the score was 20-20. There is a nice visualization. Let scores during the game be plotted as points $(x,y)$ where $x$ is your score and $y$ is your opponent's score. As the game unfolds, the scores move along the integer lattice in the first quadrant beginning at $(0,0)$, creating a game path . It ends the first time one of you has scored at least $21$ and has a margin of at least $2$. Such winning points form two sets of points, the "absorbing boundary" of this process, whereat the game path must terminate. This figure shows part of the absorbing boundary (it extends infinitely up and to the right) along with the path of a game that went into overtime (with a loss for you, alas). Let's count. The number of ways the game can end with $y$ points for your opponent is the number of distinct paths in the integer lattice of $(x,y)$ scores beginning at the initial score $(0,0)$ and ending at the penultimate score $(20,y)$. Such paths are determined by which of the $20+y$ points in the game you won. They correspond therefore to the subsets of size $20$ of the numbers $1,2,\ldots, 20+y$, and there are $\binom{20+y}{20}$ of them. Since in each such path you won $21$ points (with independent probabilities $p$ each time, counting the final point) and your opponent won $y$ points (with independent probabilities $1-p$ each time), the paths associated with $y$ account for a total chance of $$f(y) = \binom{20+y}{20}p^{21}(1-p)^y.$$ Similarly, there are $\binom{20+20}{20}$ ways to arrive at $(20,20)$ representing the 20-20 tie. In this situation you don't have a definite win. We may compute the chance of your win by adopting a common convention: forget how many points have been scored so far and start tracking the point differential. The game is at a differential of $0$ and will end when it first reaches $+2$ or $-2$, necessarily passing through $\pm 1$ along the way. Let $g(i)$ be the chance you win when the differential is $i\in\{-1,0,1\}$. Since your chance of winning in any situation is $p$, we have $$\eqalign{ g(0) &= p g(1) + (1-p)g(-1), \\ g(1) &= p + (1-p)g(0),\\ g(-1) &= pg(0). }$$ The unique solution to this system of linear equations for the vector $(g(-1),g(0),g(1))$ implies $$g(0) = \frac{p^2}{1-2p+2p^2}.$$ This, therefore, is your chance of winning once $(20,20)$ is reached (which occurs with a chance of $\binom{20+20}{20}p^{20}(1-p)^{20}$). 
Consequently your chance of winning is the sum of all these disjoint possibilities, equal to $$\eqalign{ &\sum_{y=0}^{19}f(y) + g(0)p^{20}(1-p)^{20} \binom{20+20}{20} \\ = &\sum_{y=0}^{19}\binom{20+y}{20}p^{21}(1-p)^y + \frac{p^2}{1-2p+2p^2}p^{20}(1-p)^{20} \binom{20+20}{20}\\ = &\frac{p^{21}}{1-2p+2p^2}\left(\sum_{y=0}^{19}\binom{20+y}{20}(1-2p+2p^2)(1-p)^y + \binom{20+20}{20}p(1-p)^{20} \right). }$$ The stuff inside the parentheses on the right is a polynomial in $p$. (It looks like its degree is $21$, but the leading terms all cancel: its degree is $20$.) When $p=0.58$, the chance of a win is close to $0.855913992.$ You should have no trouble generalizing this analysis to games that terminate with any numbers of points. When the required margin is greater than $2$ the result gets more complicated but is just as straightforward. Incidentally , with these chances of winning, you had a $(0.8559\ldots)^{15}\approx 9.7\%$ chance of winning the first $15$ games. That's not inconsistent with what you report, which might encourage us to continue supposing the outcomes of each point are independent. We would thereby project that you have a chance of $$(0.8559\ldots)^{35}\approx 0.432\%$$ of winning all the remaining $35$ games, assuming they proceed according to all these assumptions. It doesn't sound like a good bet to make unless the payoff is large! I like to check work like this with a quick simulation. Here is R code to generate tens of thousands of games in a second. It assumes the game will be over within 126 points (extremely few games need to continue that long, so this assumption has no material effect on the results). n <- 21 # Points your opponent needs to win m <- 21 # Points you need to win margin <- 2 # Minimum winning margin p <- .58 # Your chance of winning a point n.sim <- 1e4 # Iterations in the simulation sim <- replicate(n.sim, { x <- sample(1:0, 3*(m+n), prob=c(p, 1-p), replace=TRUE) points.1 <- cumsum(x) points.0 <- cumsum(1-x) win.1 <- points.1 >= m & points.0 <= points.1-margin win.0 <- points.0 >= n & points.1 <= points.0-margin which.max(c(win.1, TRUE)) < which.max(c(win.0, TRUE)) }) mean(sim) When I ran this, you won in 8,570 cases out of the 10,000 iterations. A Z-score (with approximately a Normal distribution) can be computed to test such results: Z <- (mean(sim) - 0.85591399165186659) / (sd(sim)/sqrt(n.sim)) message(round(Z, 3)) # Should be between -3 and 3, roughly. The value of $0.31$ in this simulation is perfectly consistent with the foregoing theoretical computation. Appendix 1 In light of the update to the question, which lists the outcomes of the first 18 games, here are reconstructions of game paths consistent with these data. You can see that two or three of the games were perilously close to losses. (Any path ending on a light gray square is a loss for you.) Potential uses of this figure include observing: The paths concentrate around a slope given by the ratio 267:380 of total scores, equal approximately to 58.7%. The scatter of the paths around that slope shows the variation expected when points are independent. If points are made in streaks, then individual paths would tend to have long vertical and horizontal stretches. In a longer set of similar games, expect to see paths that tend to stay within the colored range, but also expect a few to extend beyond it. The prospect of a game or two whose path lies generally above this spread indicates the possibility that your opponent will eventually win a game, probably sooner rather than later. 
Appendix 2 The code to create the figure was requested. Here it is (cleaned up to produce a slightly nicer graphic). library(data.table) library(ggplot2) n <- 21 # Points your opponent needs to win m <- 21 # Points you need to win margin <- 2 # Minimum winning margin p <- 0.58 # Your chance of winning a point # # Quick and dirty generation of a game that goes into overtime. # done <- FALSE iter <- 0 iter.max <- 2000 while(!done & iter < iter.max) { Y <- sample(1:0, 3*(m+n), prob=c(p, 1-p), replace=TRUE) Y <- data.table(You=c(0,cumsum(Y)), Opponent=c(0,cumsum(1-Y))) Y[, Complete := (You >= m & You-Opponent >= margin) | (Opponent >= n & Opponent-You >= margin)] Y <- Y[1:which.max(Complete)] done <- nrow(Y[You==m-1 & Opponent==n-1 & !Complete]) > 0 iter <- iter+1 } if (iter >= iter.max) warning("Unable to find a solution. Using last.") i.max <- max(n+margin, m+margin, max(c(Y$You, Y$Opponent))) + 1 # # Represent the relevant part of the lattice. # X <- as.data.table(expand.grid(You=0:i.max, Opponent=0:i.max)) X[, Win := (You == m & You-Opponent >= margin) | (You > m & You-Opponent == margin)] X[, Loss := (Opponent == n & You-Opponent <= -margin) | (Opponent > n & You-Opponent == -margin)] # # Represent the absorbing boundary. # A <- data.table(x=c(m, m, i.max, 0, n-margin, i.max-margin), y=c(0, m-margin, i.max-margin, n, n, i.max), Winner=rep(c("You", "Opponent"), each=3)) # # Plotting. # ggplot(X[Win==TRUE | Loss==TRUE], aes(You, Opponent)) + geom_path(aes(x, y, color=Winner, group=Winner), inherit.aes=FALSE, data=A, size=1.5) + geom_point(data=X, color="#c0c0c0") + geom_point(aes(fill=Win), size=3, shape=22, show.legend=FALSE) + geom_path(data=Y, size=1) + coord_equal(xlim=c(-1/2, i.max-1/2), ylim=c(-1/2, i.max-1/2), ratio=1, expand=FALSE) + ggtitle("Example Game Path", paste0("You need ", m, " points to win; opponent needs ", n, "; and the margin is ", margin, "."))
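For readers who prefer Python, here is a small check (my own port, standard library only) of the closed-form expression derived above: the sum over the opponent's final scores 0..19 plus the overtime term.

from math import comb

def win_probability(p, target=21):
    """P(win a game to `target` points, win by 2) with per-point win probability p."""
    q = 1 - p
    # Games that end without reaching 20-20: opponent finishes with y = 0..19.
    direct = sum(comb(target - 1 + y, target - 1) * p**target * q**y
                 for y in range(target - 1))
    # Reach 20-20, then win the "win by two" overtime phase.
    reach_tie = comb(2 * (target - 1), target - 1) * p**(target - 1) * q**(target - 1)
    win_overtime = p**2 / (1 - 2 * p + 2 * p**2)
    return direct + reach_tie * win_overtime

print(win_probability(0.58))        # ~ 0.8559, matching the value above
print(win_probability(0.58) ** 35)  # chance of winning the remaining 35 games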
{ "source": [ "https://stats.stackexchange.com/questions/329521", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/195976/" ] }
330,559
In Andrew Ng's Neural Networks and Deep Learning course on Coursera he says that using $tanh$ is almost always preferable to using $sigmoid$. The reason he gives is that the outputs using $tanh$ centre around 0 rather than $sigmoid$'s 0.5, and this "makes learning for the next layer a little bit easier". Why does centring the activation's output speed learning? I assume he's referring to the previous layer as learning happens during backprop? Are there any other features that make $tanh$ preferable? Would the steeper gradient delay vanishing gradients? Are there any situations where $sigmoid$ would be preferable? Math-light, intuitive answers preferred.
Yan LeCun and others argue in Efficient BackProp that Convergence is usually faster if the average of each input variable over the training set is close to zero. To see this, consider the extreme case where all the inputs are positive. Weights to a particular node in the first weight layer are updated by an amount proportional to $\delta x$ where $\delta$ is the (scalar) error at that node and $x$ is the input vector (see equations (5) and (10)). When all of the components of an input vector are positive, all of the updates of weights that feed into a node will have the same sign (i.e. sign( $\delta$ )). As a result, these weights can only all decrease or all increase together for a given input pattern. Thus, if a weight vector must change direction it can only do so by zigzagging which is inefficient and thus very slow. This is why you should normalize your inputs so that the average is zero. The same logic applies to middle layers: This heuristic should be applied at all layers which means that we want the average of the outputs of a node to be close to zero because these outputs are the inputs to the next layer. Postscript @craq makes the point that this quote doesn't make sense for ReLU(x)=max(0,x) which has become a widely popular activation function. While ReLU does avoid the first zigzag problem mentioned by LeCun, it doesn't solve this second point by LeCun who says it is important to push the average to zero. I would love to know what LeCun has to say about this. In any case, there is a paper called Batch Normalization , which builds on top of the work of LeCun and offers a way to address this issue: It has been long known (LeCun et al., 1998b; Wiesler & Ney, 2011) that the network training converges faster if its inputs are whitened – i.e., linearly transformed to have zero means and unit variances, and decorrelated. As each layer observes the inputs produced by the layers below, it would be advantageous to achieve the same whitening of the inputs of each layer. By the way, this video by Siraj explains a lot about activation functions in 10 fun minutes. @elkout says "The real reason that tanh is preferred compared to sigmoid (...) is that the derivatives of the tanh are larger than the derivatives of the sigmoid." I think this is a non-issue. I never seen this being a problem in the literature. If it bothers you that one derivative is smaller than another, you can just scale it. The logistic function has the shape $\sigma(x)=\frac{1}{1+e^{-kx}}$ . Usually, we use $k=1$ , but nothing forbids you from using another value for $k$ to make your derivatives wider, if that was your problem. Nitpick: tanh is also a sigmoid function. Any function with a S shape is a sigmoid. What you guys are calling sigmoid is the logistic function. The reason why the logistic function is more popular is historical reasons. It has been used for a longer time by statisticians. Besides, some feel that it is more biologically plausible.
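A small numerical illustration of LeCun's zigzag argument (NumPy, my own toy example): when all inputs to a unit are positive, as they are when the previous layer uses the logistic sigmoid, every weight of that unit gets a gradient of the same sign, whereas tanh outputs are roughly zero-centred and break this.

import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=(1000, 8))          # pre-activations of a previous layer

sigmoid_out = 1 / (1 + np.exp(-z))      # all in (0, 1)  -> all positive
tanh_out = np.tanh(z)                   # roughly zero-centred

print("mean sigmoid output:", np.round(sigmoid_out.mean(axis=0), 2))
print("mean tanh output   :", np.round(tanh_out.mean(axis=0), 2))

# The gradient of a downstream weight vector is proportional to delta * x
# for input x. With x > 0 everywhere, sign(grad) = sign(delta) in every coordinate:
delta = 0.7                              # some scalar error signal
print("sigmoid-fed grad signs:", np.sign(delta * sigmoid_out[0]))
print("tanh-fed grad signs   :", np.sign(delta * tanh_out[0]))
# The sigmoid-fed gradient can only move all weights up or all down together,
# which forces the zigzag path LeCun describes; tanh inputs allow mixed signs.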
{ "source": [ "https://stats.stackexchange.com/questions/330559", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/162527/" ] }
331,782
On page 223 in An Introduction to Statistical Learning , the authors summarise the differences between ridge regression and lasso. They provide an example (Figure 6.9) of when "lasso tends to outperform ridge regression in terms of bias, variance, and MSE". I understand why lasso can be desirable: it results in sparse solutions since it shrinks many coefficients to 0, resulting in simple and interpretable models. But I do not understand how it can outperform ridge when only predictions are of interest (i.e. how is it getting a substantially lower MSE in the example?). With ridge, if many predictors have almost no effect on the response (with a few predictors having a large effect), won't their coefficients simply be shrunk to a small number very close to zero... resulting in something very similar to lasso? So why would the final model have worse performance than lasso?
You are right to ask this question. In general, when a proper accuracy scoring rule is used (e.g., mean squared prediction error), ridge regression will outperform lasso. Lasso spends some of the information trying to find the "right" predictors and it's not even great at doing that in many cases. Relative performance of the two will depend on the distribution of true regression coefficients. If you have a small fraction of nonzero coefficients in truth, lasso can perform better. Personally I use ridge almost all the time when interested in predictive accuracy.
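A quick experiment (scikit-learn on my own simulated data, not a definitive benchmark) showing how the verdict depends on whether the true coefficient vector is dense or sparse:

import numpy as np
from sklearn.linear_model import RidgeCV, LassoCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.normal(size=(n, p))

def test_mse(beta, label):
    y = X @ beta + rng.normal(size=n)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
    for model in (RidgeCV(alphas=np.logspace(-3, 3, 20)),
                  LassoCV(cv=5, random_state=0)):
        model.fit(X_tr, y_tr)
        mse = np.mean((y_te - model.predict(X_te)) ** 2)
        print(f"{label:6s} {type(model).__name__:8s} test MSE = {mse:.3f}")

beta_dense = rng.normal(size=p) * 0.3       # every predictor matters a little
beta_sparse = np.zeros(p)
beta_sparse[:3] = 2.0                       # only 3 predictors matter

test_mse(beta_dense, "dense")
test_mse(beta_sparse, "sparse")
# Typically ridge wins (or ties) in the dense scenario and lasso wins in the
# sparse one, matching the answer above.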
{ "source": [ "https://stats.stackexchange.com/questions/331782", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/134691/" ] }
331,803
In a prior post , I developed an "unfolded" gamma distribution generalization of a normal distribution as an example of how to relate a gamma distribution to a normal distribution . This yielded $$ \text{ND}(x;\mu,\sigma^2,a) = \dfrac{a e^{-2^{-\frac{a}{2}} \left(\frac{1}{\sigma }\right)^a \left| x-\mu\right| ^a}}{2 \sqrt{2} \sigma \Gamma \left(\frac{1}{a}\right)} \,,$$ where the mean is $\mu$, the variance is $\sigma^2$, and the shape is $a>0$, where $a=2$ for an ordinary normal distribution. This appears to be a different distribution from the generalized error distribution $$\text{GED}(x;\mu,\alpha,\beta)=\frac{\beta}{2\alpha\Gamma(1/\beta)} \; e^{-(|x-\mu|/\alpha)^\beta}\,,$$ where $\mu$ is the location, $\alpha>0$ is the scale, and $\beta>0$ is the shape and where $\beta=2$ yields a normal distribution. It includes the Laplace distribution when $\beta=1$. As $\beta\rightarrow\infty$, the density converges pointwise to a uniform density on $(\mu-\alpha,\mu+\alpha)$. Questions Are these distributions the same? If so, how does one convert between them? If not, and $\text{ND}(x;\mu,\sigma^2,a)$ is a different distribution, what are its other properties, for example, does it reduce to a distribution other than a normal distribution?
You are right to ask this question. In general, when a proper accuracy scoring rule is used (e.g., mean squared prediction error), ridge regression will outperform lasso. Lasso spends some of the information trying to find the "right" predictors and it's not even great at doing that in many cases. Relative performance of the two will depend on the distribution of true regression coefficients. If you have a small fraction of nonzero coefficients in truth, lasso can perform better. Personally I use ridge almost all the time when interested in predictive accuracy.
{ "source": [ "https://stats.stackexchange.com/questions/331803", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/99274/" ] }
331,973
For a long time I did not understand why the "sum" of two random variables is their convolution , whereas the mixture density of $f(x)$ and $g(x)$ is $p\,f(x)+(1-p)g(x)$ ; the arithmetic sum and not their convolution. The exact phrase "the sum of two random variables" appears in Google 146,000 times, and is elliptical as follows. If one considers an RV to yield a single value, then that single value can be added to another RV's single value, which has nothing to do with convolution, at least not directly; that is just a sum of two numbers. An RV outcome in statistics is, however, a collection of values, and thus a more exact phrase would be something like "the set of coordinated sums of pairs of associated individual values from two RV's is their discrete convolution"... and can be approximated by the convolution of the density functions corresponding to those RV's. In even simpler language: two RV's of $n$ samples are in effect two n-dimensional vectors that add as their vector sum. Please show the details of how the sum of two random variables is both a convolution and a sum.
Notation, upper and lower case https://en.wikipedia.org/wiki/Notation_in_probability_and_statistics Random variables are usually written in upper case roman letters: $X$ , $Y$ , etc. Particular realizations of a random variable are written in corresponding lower case letters. For example $x_1$ , $x_2$ , …, $x_n$ could be a sample corresponding to the random variable $X$ and a cumulative probability is formally written $P ( X > x )$ to differentiate random variable from realization. $Z=X+Y$ means $z_i=x_i+y_i \qquad \forall x_i,y_i$ Mixture of variables $ \rightarrow $ sum of pdf's https://en.wikipedia.org/wiki/Mixture_distribution You use a sum of the probability density functions $f_{X_1}$ and $f_{X_2}$ when the probability (of say Z) is a defined by a single sum of different probabilities. For example when $Z$ is a fraction $s$ of the time defined by $X_1$ and a fraction $1-s$ of the time defined by $X_2$ , then you get $$\mathbb{P}(Z=z) = s \mathbb{P}(X_1=z) + (1-s) \mathbb{P}(X_2=z)$$ and $$f_Z(z) = s f_{X_1}(z) + (1-s) f_{X_2}(z)$$ . . . . an example is a choice between dice rolls with either a 6 sided dice or a 12 sided dice. Say you do 50-50 percent of the time the one dice or the other. Then $$f_{mixed roll}(z) = 0.5 \, f_{6-sided}(z) + 0.5 \, f_{12-sided}(z)$$ Sum of variables $ \rightarrow $ convolution of pdf's https://en.wikipedia.org/wiki/Convolution_of_probability_distributions You use a convolution of the probability density functions $f_{X_1}$ and $f_{X_2}$ when the probability (of say Z) is a defined by multiple sums of different (independent) probabilities. For example when $Z = X_1 + X_2$ (ie. a sum!) and multiple different pairs $x_1,x_2$ sum up to $z$ , with each the probability $f_{X_1}(x_1)f_{X_2}(x_2)$ . Then you get the convolution $$\mathbb{P}(Z=z) = \sum_{\text{all pairs }x_1+x_2=z} \mathbb{P}(X_1=x_1) \cdot \mathbb{P}(X_2=x_2)$$ and $$f_Z(z) = \sum_{x_1 \in \text{ domain of }X_1} f_{X_1}(x_1) f_{X_2}(z-x_1)$$ or for continuous variables $$f_Z(z) = \int_{x_1 \in \text{ domain of }X_1} f_{X_1}(x_1) f_{X_2}(z-x_1) d x_1$$ . . . . an example is a sum of two dice rolls $f_{X_2}(x) = f_{X_1}(x) = 1/6$ for $x \in \lbrace 1,2,3,4,5,6 \rbrace$ and $$f_Z(z) = \sum_{x \in \lbrace 1,2,3,4,5,6 \rbrace \\ \text{ and } z-x \in \lbrace 1,2,3,4,5,6 \rbrace} f_{X_1}(x) f_{X_2}(z-x)$$ note I choose to integrate and sum $x_1 \in \text{ domain of } X_1$ , which I find more intuitive, but it is not necessary and you can integrate from $-\infty$ to $\infty$ if you define $f_{X_1}(x_1)=0$ outside the domain. Image example Let $Z$ be $X+Y$ . To know $\mathbb{P}(z-\frac{1}{2}dz<Z<z+\frac{1}{2}dz)$ you will have to integrate over the probabilities for all the realizations of $x,y$ that lead to $z-\frac{1}{2}dz<Z=X+Y<z+\frac{1}{2}dz$ . So that is the integral of $f(x)g(y)$ in the region $\pm \frac{1}{2}dz$ along the line $x+y=z$ .
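The two dice examples above can be checked directly in a few lines (NumPy, my own illustration): the mixture pmf is a weighted sum of the two densities, while the pmf of the sum is their convolution.

import numpy as np

rng = np.random.default_rng(0)
N = 200_000

# Mixture: flip a fair coin, then roll either a 6-sided or a 12-sided die.
use_d6 = rng.random(N) < 0.5
mixture = np.where(use_d6, rng.integers(1, 7, N), rng.integers(1, 13, N))
# Theory: f(z) = 0.5 * f6(z) + 0.5 * f12(z)
print("P(mixture = 3):", np.mean(mixture == 3), "theory:", 0.5/6 + 0.5/12)

# Sum: roll two 6-sided dice and add them.
total = rng.integers(1, 7, N) + rng.integers(1, 7, N)
f6 = np.full(6, 1/6)
conv = np.convolve(f6, f6)            # pmf of the sum, supported on 2..12
print("P(sum = 7):", np.mean(total == 7), "theory:", conv[7 - 2])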
{ "source": [ "https://stats.stackexchange.com/questions/331973", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/99274/" ] }
332,179
In nearly all code examples I've seen of a VAE, the loss functions are defined as follows (this is tensorflow code, but I've seen similar for theano, torch etc. It's also for a convnet, but that's also not too relevant, just affects the axes the sums are taken over): # latent space loss. KL divergence between latent space distribution and unit gaussian, for each batch. # first half of eq 10. in https://arxiv.org/abs/1312.6114 kl_loss = -0.5 * tf.reduce_sum(1 + log_sigma_sq - tf.square(mu) - tf.exp(log_sigma_sq), axis=1) # reconstruction error, using pixel-wise L2 loss, for each batch rec_loss = tf.reduce_sum(tf.squared_difference(y, x), axis=[1,2,3]) # or binary cross entropy (assuming 0...1 values) y = tf.clip_by_value(y, 1e-8, 1-1e-8) # prevent nan on log(0) rec_loss = -tf.reduce_sum(x * tf.log(y) + (1-x) * tf.log(1-y), axis=[1,2,3]) # sum the two and average over batches loss = tf.reduce_mean(kl_loss + rec_loss) However, the numeric ranges of kl_loss and rec_loss are very dependent on latent space dims and input feature size (e.g. pixel resolution) respectively. Would it be sensible to replace the reduce_sum's with reduce_mean to get per z-dim KLD and per pixel (or feature) LSE or BCE? More importantly, how do we weight latent loss with reconstruction loss when summing together for the final loss? Is it just trial and error? Or is there some theory (or at least rule of thumb) for it? I couldn't find any info on this anywhere (including the original paper). The issue I'm having is that if the balance between my input feature (x) dimensions and latent space (z) dimensions is not 'optimum', either my reconstructions are very good but the learnt latent space is unstructured (if the x dimensionality is very high and the reconstruction error dominates over the KLD), or vice versa (reconstructions are not good but the learnt latent space is well structured if the KLD dominates). I'm finding myself having to normalise the reconstruction loss (dividing by input feature size) and the KLD (dividing by z dimensions) and then manually weighting the KLD term with an arbitrary weight factor (the normalisation is so that I can use the same or similar weight independent of the dimensions of x or z). Empirically I've found around 0.1 to provide a good balance between reconstruction and structured latent space, which feels like a 'sweet spot' to me. I'm looking for prior work in this area. Upon request, maths notation of the above (focusing on L2 loss for reconstruction error, matching the code): $$\mathcal{L}_{latent}^{(i)} = -\frac{1}{2} \sum_{j=1}^{J}(1+\log (\sigma_j^{(i)})^2 - (\mu_j^{(i)})^2 - (\sigma_j^{(i)})^2)$$ $$\mathcal{L}_{recon}^{(i)} = \sum_{k=1}^{K}(y_k^{(i)}-x_k^{(i)})^2$$ $$\mathcal{L}^{(m)} = \frac{1}{M}\sum_{i=1}^{M}(\mathcal{L}_{latent}^{(i)} + \mathcal{L}_{recon}^{(i)})$$ where $J$ is the dimensionality of latent vector $z$ (and corresponding mean $\mu$ and variance $\sigma^2$), $K$ is the dimensionality of the input features, $M$ is the mini-batch size, the superscript $(i)$ denotes the $i$th data point and $\mathcal{L}^{(m)}$ is the loss for the $m$th mini-batch.
For anyone stumbling on this post also looking for an answer, this twitter thread has added a lot of very useful insight. Namely: beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework discusses my exact question with a few experiments. Interestingly, it seems their $\beta_{norm}$ (which is similar to my normalised KLD weight) is also centred around 0.1, with higher values giving more structured latent space at the cost of poorer reconstruction, and lower values giving better reconstruction with less structured latent space (though their focus is specifically on learning disentangled representations). and related reading (where similar issues are discussed) Semi-Supervised Learning with Deep Generative Models https://github.com/dpkingma/nips14-ssl InfoVAE: Information Maximizing Variational Autoencoders Density estimation using Real NVP Neural Discrete Representation Learning
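As a concrete illustration of the weighting discussed in the question and in the β-VAE reference, here is a hedged sketch in the question's tensorflow style (my own, not from the paper; the per-dimension means and the default beta value are assumptions to be tuned, and x, y, mu, log_sigma_sq are the tensors defined in the question's code):

```python
import tensorflow as tf

def weighted_vae_loss(x, y, mu, log_sigma_sq, beta=0.1):
    # beta = 0.1 reflects the question's empirical "sweet spot"; treat it as tunable.
    # Per-dimension means (instead of sums) keep each term's scale roughly
    # independent of the latent size J and the number of input features K.
    kl_loss = -0.5 * tf.reduce_mean(
        1 + log_sigma_sq - tf.square(mu) - tf.exp(log_sigma_sq), axis=1)
    rec_loss = tf.reduce_mean(tf.square(y - x), axis=[1, 2, 3])
    return tf.reduce_mean(rec_loss + beta * kl_loss)
```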
{ "source": [ "https://stats.stackexchange.com/questions/332179", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/175767/" ] }
332,199
Let $X$ be $\text{Normal}(\mu,\sigma^2)$, where $\sigma$ is known. I need to find an unbiased estimate for $P(X<0)$. My question is, is it enough to find an unbiased estimate for $\mu$ and use that? That is, is $P(X<0|\bar x)$ unbiased for $P(X<0|\mu)$? I'm thinking it is not. If it is not, how do I find an unbiased estimate then?
{ "source": [ "https://stats.stackexchange.com/questions/332199", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/164144/" ] }
332,819
Have a look at this Excel graph: The 'common sense' line-of-best-fit would appear be an almost vertical line straight through the center of the points (edited by hand in red). However the linear trend line as decided by Excel is the diagonal black line shown. Why has Excel produced something that (to the human eye) appears to be wrong? How can I produce a best fit line that looks a little more intuitive (i.e. something like the red line)? Update 1. An Excel spreadsheet with data and graph is available here: example data , CSV in Pastebin . Are the type1 and type2 regression techniques available as excel functions? Update 2. The data represent a paraglider climbing in a thermal whilst drifting with the wind. The final objective is to investigate how wind strength and direction varies with height. I'm an engineer, NOT a mathematician or statistician, so the information in these responses has given me a lot more areas for research. This is an interesting thread and It would be a shame for the data to be lost and someone in future unable to reproduce the examples, so I'm adding it as a comment here (which is the data from the following link ). "lon","lat" -0.713917,53.9351 -0.712917,53.93505 -0.712617,53.934983 -0.712333,53.9349 -0.7122,53.93475 -0.71215,53.934567 -0.712233,53.9344 -0.712483,53.934233 -0.712817,53.934167 -0.713217,53.934167 -0.713617,53.934267 -0.7141,53.934733 -0.714133,53.935 -0.71395,53.935283 -0.713617,53.9355 -0.713233,53.935617 -0.712767,53.935617 -0.712383,53.9355 -0.712183,53.9353 -0.712367,53.934883 -0.712717,53.934767 -0.713133,53.9348 -0.713583,53.934917 -0.713867,53.93515 -0.714017,53.935433 -0.7139,53.935717 -0.7136,53.935933 -0.71325,53.936067 -0.712833,53.936133 -0.7124,53.936117 -0.712083,53.935983 -0.7119,53.935767 -0.711917,53.935567 -0.7121,53.935383 -0.7124,53.935283 -0.712733,53.93525 -0.713117,53.935267 -0.7135,53.93535 -0.713817,53.935517 -0.71405,53.935733 -0.71415,53.935983 -0.7141,53.93625 -0.7139,53.9365 -0.713567,53.936667 -0.713183,53.936767 -0.712767,53.9368 -0.7124,53.9367 -0.712133,53.93655 -0.712033,53.936333 -0.712167,53.936167 -0.712383,53.936017 -0.712733,53.935917 -0.7132,53.93595 -0.713567,53.936067 -0.713867,53.936267 -0.714067,53.9365 -0.71415,53.936767 -0.714033,53.937033 -0.71375,53.937233 -0.7134,53.9374 -0.712967,53.93745 -0.71255,53.937433 -0.7122,53.937267 -0.712067,53.937033 -0.712117,53.9368 -0.712367,53.936617 -0.712733,53.936533 -0.713133,53.93655 -0.713467,53.93665 -0.71375,53.93685 -0.713933,53.937083 -0.71395,53.937367 -0.713767,53.937633 -0.713433,53.937833 -0.713033,53.937967 -0.712567,53.937967 -0.71215,53.937867 -0.711883,53.93765 -0.711817,53.937433 -0.711983,53.937233 -0.71265,53.937033 -0.713067,53.9371 -0.713683,53.93745 -0.713817,53.937983 -0.713633,53.938233 -0.7133,53.938433 -0.71285,53.938533 -0.71205,53.938333 -0.71185,53.938117 -0.711867,53.937867 -0.712067,53.9377 -0.712417,53.937583 -0.712833,53.937567 -0.713233,53.937667 -0.713567,53.937883 -0.7137,53.938417 -0.713467,53.93865 -0.713117,53.938817 -0.712683,53.938917000000004 -0.71225,53.938867 -0.711917,53.938717 -0.711767,53.938483 -0.711883,53.938267 -0.712133,53.9381 -0.712483,53.938017 -0.713283,53.93815 -0.713567,53.938333 -0.7138,53.938567 -0.713683,53.9391 -0.713417,53.9393 -0.71305,53.939433 -0.7126,53.939483 -0.7122,53.9394 -0.711917,53.93925 -0.711783,53.93905 -0.7118,53.938817 -0.711967,53.938667 -0.712217,53.938533 -0.712567,53.938433 -0.712933,53.93845 -0.7133,53.938567 -0.713583,53.93875 -0.71375,53.939
Is there a dependent variable? The trend line in Excel is from the regression of the dependent variable "lat" on the independent variable "lon." What you call a "common sense line" can be obtained when you don't designate a dependent variable, and treat both the latitude and longitude equally. The latter can be obtained by applying PCA. In particular, it's one of the eigenvectors of the covariance matrix of these variables. You can think of it as a line minimizing the shortest distance from any given $(x_i,y_i)$ point to the line itself, i.e. you draw a perpendicular to the line, and minimize the sum of those for each observation. Here's how you could do it in R: > para <- read.csv("para.csv") > plot(para) > > # run PCA > pZ=prcomp(para,rank.=1) > # look at 1st PC > pZ$rotation PC1 lon 0.09504313 lat 0.99547316 > > (cm <- colMeans(para)) # PCA was centered lon lat -0.7129371 53.9368720 > # recover the data from 1st PC > pc1=t(pZ$rotation %*% t(pZ$x)) > # center and show > lines(pc1 + t(t(rep(1,123))) %*% cm) The trend line that you got from Excel is as much common sense as the eigenvector from PCA once you understand that in the Excel regression the variables are not treated equally. Here you're minimizing the vertical distance from $y_i$ to $y(x_i)$, where the y-axis is latitude and the x-axis is longitude. Whether you want to treat the variables equally or not depends on the objective. It's not an inherent quality of the data. You have to pick the right statistical tool to analyze the data, in this case choose between the regression and PCA. An answer to a question that wasn't asked So, why doesn't a (regression) trend line in Excel seem to be a suitable tool for your case? The reason is that the trend line is an answer to a question that wasn't asked. Here's why. Excel regression is trying to estimate the parameters of a line $lat=a+b \times lon$. So, the first problem is that the latitude is not even a function of the longitude, strictly speaking (see the note at the end of the post), and it's not even the main issue. The real trouble is that you're not even interested in the paraglider's location, you're interested in the wind. Imagine that there was no wind. A paraglider would be making the same circle over and over. What would be the trend line? Obviously, it would be a flat horizontal line, its slope would be zero, yet it doesn't mean that the wind is blowing in a horizontal direction! Here's a simulated plot for when there's a strong wind along the y-axis, while a paraglider is making perfect circles. You can see how the linear regression $y\sim x$ produces a nonsensical result, a horizontal trend line. Actually, it's even slightly negative, but not significant. The wind direction is shown with a red line: R code for the simulation: t=1:123 a=1 #1 b=0 #1/10 y=10*sin(t)+a*t x=10*cos(t)+b*t plot(x,y,xlim=c(-60,60)) xp=-60:60 lines(b*t,a*t,col='red') model=lm(y~x) lines(xp,xp*model$coefficients[2]+model$coefficients[1]) So, the direction of the wind clearly is not aligned with the trend line at all. They're linked, of course, but in a nontrivial way. Hence, my statement that the Excel trend line is an answer to some question, but not the one you asked. Why PCA? As you noted there are at least two components of the motion of a paraglider: the drift with the wind and the circular motion controlled by the paraglider. This is clearly seen when you connect the dots on your plot: On one hand, the circular motion is really a nuisance to you: you're interested in the wind. 
Though on the other hand, you don't observe the wind speed, you only observe the paraglider. So, your objective is to infer the unobservable wind from the observable paraglider's location readings. This is exactly the situation where tools such as factor analysis and PCA can be useful. The aim of PCA is to isolate a few factors that determine the multiple outputs by analyzing the correlations in the outputs. It's effective when the output is linked to the factors linearly, which happens to be the case in your data: the wind drift simply adds to the coordinates of the circular motion, and that's why PCA is working here. PCA setup So, we established that PCA should have a chance here, but how will we actually set it up? Let's start by adding a third variable, time. We're going to assign times 1 to 123 to each of the 123 observations, assuming a constant sampling frequency. Here's what the 3D plot of the data looks like, revealing its spiral structure: The next plot shows the imaginary center of rotation of the paraglider as brown circles. You can see how it drifts on the lat-lon plane with the wind, while the paraglider, shown with a blue dot, is circling around it. The time is on the vertical axis. I connected the center of rotation to the corresponding location of the paraglider, showing only the first two circles. The corresponding R code: library(plotly) para <- read.csv("para.csv") n=24 para$t=1:123 # add time parameter # run PCA pZ3=prcomp(para) c3=colMeans(para) # PCA was centered # look at PCs in columns pZ3$rotation # get the imaginary center of rotation pc31=t(pZ3$rotation[,1] %*% t(pZ3$x[,1])) eye = pc31 + t(t(rep(1,123))) %*% c3 eyedata = data.frame(eye) p = plot_ly(x=para[1:n,1],y=para[1:n,2],z=para[1:n,3],mode="lines+markers",type="scatter3d") %>% layout(showlegend=FALSE,scene=list(xaxis = list(title = 'lat'),yaxis = list(title = 'lon'),zaxis = list(title = 't'))) %>% add_trace(x=eyedata[1:n,1],y=eyedata[1:n,2],z=eyedata[1:n,3],mode="markers",type="scatter3d") for( i in 1:n){ p = add_trace(p,x=c(eyedata[i,1],para[i,1]),y=c(eyedata[i,2],para[i,2]),z=c(eyedata[i,3],para[i,3]),color="black",mode="lines",type="scatter3d") } subplot(p) The drift of the center of the paraglider's rotation is caused mainly by the wind, and the path and speed of the drift are correlated with the direction and the speed of the wind, the unobservable variables of interest. This is what the drift looks like when projected onto the lat-lon plane: PCA Regression So, earlier we established that regular linear regression doesn't seem to work very well here. We also figured out why: because it doesn't reflect the underlying process, because the paraglider's motion is highly nonlinear. It's a combination of circular motion and a linear drift. We also discussed that in this situation factor analysis might be helpful. Here's an outline of one possible approach to modeling this data: PCA regression. But first I'll show you the PCA regression fitted curve: This has been obtained as follows. Run PCA on the data set which has the extra column t=1:123, as discussed earlier. You get three principal components. The first one is simply t. The second corresponds to the lon column, and the third to the lat column. I fit the latter two principal components to a variable of the form $a\sin(\omega t+\varphi)$, where $\omega,\varphi$ are extracted from spectral analysis of the components. They happen to have the same frequency but different phases, which is not surprising given the circular motion. That's it. 
To get the fitted values you recover the data from the fitted components by plugging the transpose of the PCA rotation matrix into the predicted principal components. My R code above shows parts of the procedure, and the rest you can figure out easily. Conclusion It's interesting to see how powerful PCA and other simple tools are when it comes to physical phenomena where the underlying processes are stable, and the inputs translate into outputs via linear (or linearized) relationships. So in our case the circular motion is very nonlinear, but we easily linearized it by using sine/cosine functions of the time parameter t. My plots were produced with just a few lines of R code, as you saw. The regression model should reflect the underlying process; only then can you expect its parameters to be meaningful. If this is a paraglider drifting in the wind, then a simple scatter plot like in the original question will hide the time structure of the process. Also, the Excel regression was a cross-sectional analysis, for which linear regression works best, while your data is a time series process, where the observations are ordered in time. Time series analysis must be applied here, and that is what was done in the PCA regression. Notes on a function Since the paraglider is making circles, there will be multiple latitudes corresponding to a single longitude. In mathematics a function $y=f(x)$ maps a value $x$ to a single value $y$. It's a many-to-one relationship, meaning that multiple $x$ may correspond to a single $y$, but multiple $y$ cannot correspond to a single $x$. That is why $lat=f(lon)$ is not a function, strictly speaking.
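For readers who prefer Python, here is a rough numpy/scikit-learn sketch of the PCA-regression recipe described above (the file name, the use of sklearn's PCA, and the FFT-based frequency estimate are my own assumptions, not part of the original answer):

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

para = pd.read_csv("para.csv")           # columns: lon, lat (the path data)
para["t"] = np.arange(1, len(para) + 1)  # add the time variable, as in the answer

pca = PCA(n_components=3)
scores = pca.fit_transform(para[["lon", "lat", "t"]])

t = para["t"].to_numpy()
fitted = scores.copy()
for j in (1, 2):                         # PC2 and PC3 carry the circular motion
    z = scores[:, j]
    # crude angular-frequency estimate from the dominant FFT peak (an assumption)
    freqs = np.fft.rfftfreq(len(z), d=1.0)
    omega = 2 * np.pi * freqs[np.argmax(np.abs(np.fft.rfft(z - z.mean()))[1:]) + 1]
    # least-squares fit of a*sin(omega*t) + b*cos(omega*t), i.e. a*sin(omega*t + phi)
    design = np.column_stack([np.sin(omega * t), np.cos(omega * t)])
    coef, *_ = np.linalg.lstsq(design, z, rcond=None)
    fitted[:, j] = design @ coef

# recover fitted lon/lat/t: fitted scores @ components_ + mean_, i.e. inverse_transform
recovered = pca.inverse_transform(fitted)
```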
{ "source": [ "https://stats.stackexchange.com/questions/332819", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/198350/" ] }
333,224
I have a binary outcome variable {0,1} and a predictor variable {0,1}. My thoughts are that it doesn't make sense to do logistic unless I include other variables and calculate the odds ratio. With one binary predictor, wouldn't calculation of probability suffice vs odds ratio?
In this case you can collapse your data to $$ \begin{array}{c|cc} X \backslash Y & 0 & 1 \\ \hline 0 & S_{00} & S_{01} \\ 1 & S_{10} & S_{11} \end{array} $$ where $S_{ij}$ is the number of instances for $x = i$ and $y = j$ with $i,j \in \{0,1\}$. Suppose there are $n$ observations overall. If we fit the model $p_i = g^{-1}(x_i^T \beta) = g^{-1}(\beta_0 + \beta_1 1_{x_i = 1})$ (where $g$ is our link function) we'll find that $\hat \beta_0$ is the logit of the proportion of successes when $x_i = 0$ and $\hat \beta_0 + \hat \beta_1$ is the logit of the proportion of successes when $x_i = 1$. In other words, $$ \hat \beta_0 = g\left(\frac{S_{01}}{S_{00} + S_{01}}\right) $$ and $$ \hat \beta_0 + \hat \beta_1 = g\left(\frac{S_{11}}{S_{10} + S_{11}}\right). $$ Let's check this in R. n <- 54 set.seed(123) x <- rbinom(n, 1, .4) y <- rbinom(n, 1, .6) tbl <- table(x=x,y=y) mod <- glm(y ~ x, family=binomial()) # all the same at 0.5757576 binomial()$linkinv(mod$coef[1]) mean(y[x == 0]) tbl[1,2] / sum(tbl[1,]) # all the same at 0.5714286 binomial()$linkinv(mod$coef[1] + mod$coef[2]) mean(y[x == 1]) tbl[2,2] / sum(tbl[2,]) So the logistic regression coefficients are exactly transformations of the proportions coming from the table. The upshot is that we certainly can analyze this dataset with a logistic regression if we have data coming from a series of Bernoulli random variables, but it turns out to be no different than directly analyzing the resulting contingency table. I want to comment on why this works from a theoretical perspective. When we're fitting a logistic regression, we are using the model that $Y_i | x_i \stackrel{\perp}{\sim} \text{Bern}(p_i)$. We then decide to model the mean as a transformation of a linear predictor in $x_i$, or in symbols $p_i = g^{-1}\left( \beta_0 + \beta_1 x_i\right)$. In our case we only have two unique values of $x_i$, and therefore there are only two unique values of $p_i$, say $p_0$ and $p_1$. Because of our independence assumption we have $$ \sum \limits_{i : x_i = 0} Y_i = S_{01} \sim \text{Bin} \left(n_0, p_0\right) $$ and $$ \sum \limits_{i : x_i = 1} Y_i = S_{11} \sim \text{Bin} \left(n_1, p_1\right). $$ Note how we're using the fact that the $x_i$, and in turn $n_0$ and $n_1$, are nonrandom: if this were not the case then these would not necessarily be binomial. This means that $$ S_{01} / n_0 = \frac{S_{01}}{S_{00} + S_{01}} \to_p p_0 \hspace{2mm} \text{ and } \hspace{2mm} S_{11} / n_1 = \frac{S_{11}}{S_{10} + S_{11}} \to_p p_1. $$ The key insight here: our Bernoulli RVs are $Y_i | x_i = j \sim \text{Bern}(p_j)$ while our binomial RVs are $S_{j1} \sim \text{Bin}(n_j, p_j)$, but both have the same probability of success. That's the reason why these contingency table proportions are estimating the same thing as an observation-level logistic regression. It's not just some coincidence with the table: it's a direct consequence of the distributional assumptions we have made.
{ "source": [ "https://stats.stackexchange.com/questions/333224", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/115982/" ] }
335,375
I suppose that $$P(A|B) = P(A | B,C) * P(C) + P(A|B,\neg C) * P(\neg C)$$ is correct, whereas $$P(A|B) = P(A | B,C) + P(A|B,\neg C) $$ is incorrect. However, I have got an "intuition" about the latter one, that is, you consider the probability $P(A|B)$ by splitting into two cases (C or not C). Why is this intuition wrong?
Suppose, as an easy counterexample, that the probability $P(A)$ of $A$ is $1$, regardless of the value of $C$. Then, if we take the incorrect equation, we get: $P(A | B) = P(A | B, C) + P(A | B, \neg C) = 1 + 1 = 2$ That obviously can't be correct: a probability cannot be greater than $1$. This helps to build the intuition that you should assign a weight to each of the two cases proportional to how likely that case is, which results in the first (correct) equation. That brings you closer to your first equation, but the weights are not completely right. See A. Rex's comment for the correct weights.
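A quick numeric check of the corrected decomposition, with the weights conditioned on B as the comment referenced above points out (the joint probabilities below are made-up numbers chosen only for illustration):

```python
import itertools

# made-up joint distribution over (A, B, C); the eight probabilities sum to 1
joint = {abc: p for abc, p in zip(
    itertools.product([True, False], repeat=3),
    [0.10, 0.05, 0.20, 0.15, 0.05, 0.10, 0.15, 0.20])}

def prob(pred):
    """Probability of the event defined by the predicate pred over (A, B, C)."""
    return sum(p for outcome, p in joint.items() if pred(outcome))

p_A_given_B = prob(lambda o: o[0] and o[1]) / prob(lambda o: o[1])
decomposed = (prob(lambda o: o[0] and o[1] and o[2]) / prob(lambda o: o[1] and o[2])
              * prob(lambda o: o[1] and o[2]) / prob(lambda o: o[1])
              + prob(lambda o: o[0] and o[1] and not o[2]) / prob(lambda o: o[1] and not o[2])
              * prob(lambda o: o[1] and not o[2]) / prob(lambda o: o[1]))

# identical values: P(A|B) = P(A|B,C) P(C|B) + P(A|B,~C) P(~C|B)
print(p_A_given_B, decomposed)
```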
{ "source": [ "https://stats.stackexchange.com/questions/335375", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/11270/" ] }
336,055
I'm getting a 100% accuracy for my decision tree. What am I doing wrong? This is my code: import pandas as pd import json import numpy as np import sklearn import matplotlib.pyplot as plt data = np.loadtxt("/Users/Nadjla/Downloads/allInteractionsnum.csv", delimiter=',') x = data[0:14] y = data[-1] from sklearn.cross_validation import train_test_split x_train = x[0:2635] x_test = x[0:658] y_train = y[0:2635] y_test = y[0:658] from sklearn.tree import DecisionTreeClassifier tree = DecisionTreeClassifier() tree.fit(x_train.astype(int), y_train.astype(int)) from sklearn.metrics import accuracy_score y_predicted = tree.predict(x_test.astype(int)) accuracy_score(y_test.astype(int), y_predicted)
Your test sample is a subset of your training sample: x_train = x[0:2635] x_test = x[0:658] y_train = y[0:2635] y_test = y[0:658] This means that you evaluate your model on a part of your training data, i.e., you are doing in-sample evaluation. In-sample accuracy is a notoriously poor indicator of out-of-sample accuracy, and maximizing in-sample accuracy can lead to overfitting. Therefore, one should always evaluate a model on a true holdout sample that is completely independent of the training data. Make sure your training and your testing data are disjoint, e.g., x_train = x[659:2635] x_test = x[0:658] y_train = y[659:2635] y_test = y[0:658]
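Alternatively, the train_test_split helper that the question already imports does the shuffling and disjoint splitting for you; a minimal sketch (the test_size and random_state values are my own choices, and x, y are the arrays from the question):

```python
# sklearn.cross_validation is deprecated; the current module is model_selection
from sklearn.model_selection import train_test_split

x_train, x_test, y_train, y_test = train_test_split(
    x, y, test_size=0.2, random_state=0)  # disjoint train / test rows
```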
{ "source": [ "https://stats.stackexchange.com/questions/336055", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/201007/" ] }
336,066
In relation to the following figure Kline (2016) writes on p194: The numerals (1) in Figure 9.3(b) that appear next to paths from the factors to one of their indicators are scaling constants, or unit loading identification (ULI) constraints. The specifications that $A \rightarrow X_1 = 1.0$ and $B \rightarrow X_4 = 1.0$ scale the factors in a metric related to that of the explained (common) variance of the corresponding indicator, or reference (marker) variable. I understand from this answer that the reason for having the reference variable is so we can determine the variance of the latent variable. However, it wasn't clear to me why this goal was not also relevant to exploratory factor analysis. Why don't we set $A \rightarrow X_1 = 1.0$ in the EFA model? Kline, R. B. (2016). Principles and practice of structural equation modeling. Guilford Press.
{ "source": [ "https://stats.stackexchange.com/questions/336066", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/9162/" ] }
336,261
I have a data frame in Python where I need to find all categorical variables. Checking the type of the column doesn't always work, because an int type can also be categorical. So I seek help in finding the right hypothesis-test method to identify whether a column is categorical or not. I was trying the chi-square test below, but I am not sure if it is good enough: import numpy as np data = np.random.randint(0,5,100) import scipy.stats as ss ss.chisquare(data) Please advise.
Short answer: you can't. There is no statistical test that will tell you whether a predictor that contains the integers between 1 and 10 is a numeric predictor (e.g., number of children) or encodes ten different categories. (If the predictor contains negative numbers, or the smallest number is larger than one, or it skips integers, this might argue against its being a categorical encoding - or it may just mean that the analyst used nonstandard encoding.) The only way to be sure is to leverage domain expertise, or the dataset's codebook (which should always exist).
{ "source": [ "https://stats.stackexchange.com/questions/336261", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/89996/" ] }
336,455
The following quote comes from the famous research paper Statistical significance for genome wide studies by Storey & Tibshirani (2003): For example, a false positive rate of 5% means that on average 5% of the truly null features in the study will be called significant. A FDR (False Discovery rate) of 5% means that among all features called significant, 5% of these are truly null on average. Can somebody explain what that means using a simple numerical or visual example? I am having a hard time understanding what it means. I've found various posts on FDR or FPR alone, but haven't found any where a specific comparison was made. It would be especially good if someone expert in this area could illustrate situations where one is better than the other, or both are good or bad.
I'm going to explain these in a few different ways because it helped me understand it. Let's take a specific example. You are doing a test for a disease on a group of people. Now let's define some terms. For each of the following, I am referring to an individual who has been tested: True positive (TP) : Has the disease, identified as having the disease False positive (FP) : Does not have the disease, identified as having the disease True negative (TN) : Does not have the disease, identified as not having the disease False negative (FN) : Has the disease, identified as not having the disease Visually, this is typically shown using the confusion matrix : The false positive rate (FPR) is the number of people who do not have the disease but are identified as having the disease (all FPs), divided by the total number of people who do not have the disease (includes all FPs and TNs). $$ FPR = \frac{FP}{FP + TN} $$ The false discovery rate (FDR) is the number of people who do not have the disease but are identified as having the disease (all FPs), divided by the total number of people who are identified as having the disease (includes all FPs and TPs). $$ FDR = \frac{FP}{FP + TP} $$ So, the difference is in the denominator, i.e., what are you comparing the number of false positives to? The FPR is telling you the proportion of all the people who do not have the disease who will be identified as having the disease. The FDR is telling you the proportion of all the people identified as having the disease who do not have the disease. Both are therefore useful, distinct measures of failure. Depending on the situation and the proportions of TPs, FPs, TNs and FNs, you may care more about one than the other. Let's now put some numbers to this. You have measured 100 people for the disease and you get the following: True positives (TPs) : 12 False positives (FPs) : 4 True negatives (TNs) : 76 False negatives (FNs) : 8 To show this using the confusion matrix: Then, $$ FPR = \frac{FP}{FP + TN} = \frac{4}{4 + 76} = \frac{4}{80} = 0.05 = 5\% $$ $$ FDR = \frac{FP}{FP + TP} = \frac{4}{4 + 12} = \frac{4}{16} = 0.25 = 25\% $$ In other words, the FPR tells you that 5% of the people who did not have the disease were identified as having the disease. The FDR tells you that 25% of the people who were identified as having the disease actually did not have the disease. EDIT based on @amoeba's comment (also the numbers in the example above): Why is the distinction so important? In the paper you link to, Storey & Tibshirani are pointing out that there was a strong focus on the FPR (or type I error rate) in genomewide studies, and that this was leading people to make flawed inferences. This is because once you find $n$ significant results by fixing the FPR, you really, really need to consider how many of your significant results are incorrect. In the above example, 25% of the 'significant results' would have been wrong! [Side note: Wikipedia points out that though the FPR is mathematically equivalent to the type I error rate, it is considered conceptually distinct because one is typically set a priori while the other is typically used to measure the performance of a test afterwards. This is important but I will not discuss that here]. And for a bit more completeness: Obviously, FPR and FDR are not the only relevant metrics you can calculate with the four quantities in the confusion matrix. 
Of the many possible metrics that may be useful in different contexts, two relatively common ones that you are likely to encounter are: True Positive Rate (TPR), also known as sensitivity, is the proportion of people who have the disease who are identified as having the disease. $$ TPR = \frac{TP}{TP + FN} $$ True Negative Rate (TNR), also known as specificity, is the proportion of people who do not have the disease who are identified as not having the disease. $$ TNR = \frac{TN}{TN + FP} $$
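The arithmetic in the example above can be reproduced directly from the four confusion-matrix counts; a small Python sketch (the variable names are mine):

```python
TP, FP, TN, FN = 12, 4, 76, 8   # counts from the worked example above

fpr = FP / (FP + TN)            # 4 / 80 = 0.05 -> 5%
fdr = FP / (FP + TP)            # 4 / 16 = 0.25 -> 25%
tpr = TP / (TP + FN)            # sensitivity: 12 / 20 = 0.60
tnr = TN / (TN + FP)            # specificity: 76 / 80 = 0.95

print(fpr, fdr, tpr, tnr)
```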
{ "source": [ "https://stats.stackexchange.com/questions/336455", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/108743/" ] }
336,974
I've been doing a lot of research about Reinforcement Learning lately. I followed Sutton & Barto's Reinforcement Learning: An Introduction for most of this. I know what Markov Decision Processes are and how Dynamic Programming (DP), Monte Carlo and Temporal Difference (TD) learning can be used to solve them. The problem I'm having is that I don't see when Monte Carlo would be the better option over TD-learning. The main difference between them is that TD-learning uses bootstrapping to approximate the action-value function and Monte Carlo uses an average to accomplish this. I just can't really think of a scenario when this is the better way to go. My guess is that it might have something to do with performance but I can't find any sources that can prove this. Am I missing something or is TD-learning generally the better option?
The main problem with TD learning and DP is that their step updates are biased by the initial conditions of the learning parameters. The bootstrapping process typically updates a function or lookup Q(s,a) towards a successor value Q(s',a'), using whatever the current estimates are for the latter. Clearly at the very start of learning these estimates contain no information from any real rewards or state transitions. If learning works as intended, then the bias will reduce asymptotically over multiple iterations. However, the bias can cause significant problems, especially for off-policy methods (e.g. Q Learning) and when using function approximators. That combination is so likely to fail to converge that it is called the deadly triad in Sutton & Barto. Monte Carlo control methods do not suffer from this bias, as each update is made using a true sample of what Q(s,a) should be. However, Monte Carlo methods can suffer from high variance, which means more samples are required to achieve the same degree of learning compared to TD. In practice, TD learning appears to learn more efficiently if the problems with the deadly triad can be overcome. Recent results using experience replay and staged "frozen" copies of estimators provide work-arounds that address these problems - e.g. that is how the DQN learner for Atari games was built. There is also a middle ground between TD and Monte Carlo. It is possible to construct a generalised method that mixes trajectories of different lengths - from single-step TD to complete episode runs in Monte Carlo - and combines them. The most common variant of this is TD($\lambda$) learning, where $\lambda$ is a parameter from $0$ (effectively single-step TD learning) to $1$ (effectively Monte Carlo learning, but with a nice feature that it can be used in continuous problems). Typically, a value between $0$ and $1$ makes the most efficient learning agent - although like many hyperparameters, the best value to use depends on the problem. If you are using a value-based method (as opposed to a policy-based one), then TD learning is generally used more in practice, or a TD/MC combination method such as TD(λ) can be even better. In terms of "practical advantage" for MC? Monte Carlo learning is conceptually simple, robust and easy to implement, albeit often slower than TD. I would generally not use it for a learning controller engine (unless in a hurry to implement something for a simple environment), but I would seriously consider it for policy evaluation in order to compare multiple agents, for instance - that is due to it being an unbiased measure, which is important for testing.
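To make the contrast concrete, here is a minimal sketch of the two update rules for state-value estimation (the notation, the dict-based value table, and the step size alpha are mine, not from the original answer):

```python
# Monte Carlo: move V(s) toward the full observed return G (unbiased, high variance)
def mc_update(V, state, G, alpha=0.1):
    V[state] += alpha * (G - V[state])

# TD(0): move V(s) toward the bootstrapped target r + gamma * V(s')
# (lower variance, but biased by the current estimate V(s'))
def td0_update(V, state, reward, next_state, gamma=0.99, alpha=0.1):
    target = reward + gamma * V[next_state]
    V[state] += alpha * (target - V[state])
```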
{ "source": [ "https://stats.stackexchange.com/questions/336974", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/201613/" ] }
337,158
Consider a within-subject and within-item factorial design where the experimental treatment variable has two levels (conditions). Let m1 be the maximal model and m2 the no-random-correlations model. m1: y ~ condition + (condition|subject) + (condition|item) m2: y ~ condition + (1|subject) + (0 + condition|subject) + (1|item) + (0 + condition|item) Dale Barr states the following for this situation: Edit (4/20/2018): As Jake Westfall pointed out, the following statements seem to refer to the datasets that are shown in Fig. 1 and 2 on this website only. However, the keynote remains the same. In a deviation-coding representation (condition: -0.5 vs. 0.5) m2 allows distributions, where subject's random intercepts are uncorrelated with subject's random slopes. Only a maximal model m1 allows distributions, where the two are correlated. In the treatment-coding representation (condition: 0 vs. 1) these distributions, where subject's random intercepts are uncorrelated with subject's random slopes, cannot be fitted using the no-random-correlations model, since in each case there is a correlation between random slope and intercept in the treatment-coding representation. Why does treatment coding always result in a correlation between random slope and intercept?
Treatment coding doesn't always or necessarily result in intercept/slope correlation, but it tends to more often than not. It's easiest to see why this is the case using pictures, and considering a continuous rather than categorical predictor. Here's a picture of a normal-looking clustered dataset with approximately 0 correlation between the random intercepts and random slopes: But now look what happens when we shift the predictor X far to the right by adding 3 to each X value: It's the same dataset in a deep sense -- if we zoomed in on the data points it would look identical to the first plot, but with the X axis relabeled -- but simply by shifting X we've induced an almost perfect negative correlation between the random intercepts and random slopes. This happens because when we shift X, we redefine the intercepts of each group. Remember that the intercepts always refer to the Y-values where the group-specific regression lines cross X=0. But now the X=0 point is far away from the center of the data. So we're essentially extrapolating outside the range of the observed data in order to compute the intercepts. The result, as you can see, is that the greater the slope is, the lower the intercept is, and vice versa. When you use treatment coding, it's like doing a less extreme version of the X-shifting depicted in the bottom graph. This is because the treatment codes {0,1} are just a shifted version of the deviation codes {-0.5, 0.5}, where a shift of +0.5 has been added. Edit 2018-08-29: this is now illustrated more clearly and directly in the second figure of this more recent answer of mine to another question. Like I said earlier, this is not true by necessity. It's possible to have a dataset similar to the above, but where the slopes and intercepts are uncorrelated on the shifted scale (where the intercepts refer to points far away from the data) and correlated on the centered scale. But the group-specific regression lines in such datasets will tend to exhibit "fanning out" patterns that, in practice, are just not that common in the real world.
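A small simulation (my own, with arbitrary parameter values) reproduces the effect described above: if a group's deviation-coded intercept is its value at the midpoint, then its treatment-coded intercept is its value at the shifted zero point, a - 0.5*b, and a correlation with the slopes appears even when none existed before the shift.

```python
import numpy as np

rng = np.random.default_rng(1)
n_groups = 10_000

# group-specific intercepts (defined at the deviation-coded midpoint) and slopes,
# simulated as uncorrelated; the means and SDs are arbitrary choices
a = rng.normal(0.0, 1.0, n_groups)   # intercepts under deviation coding {-0.5, 0.5}
b = rng.normal(2.0, 1.0, n_groups)   # slopes

# Under treatment coding {0, 1} the predictor is shifted by +0.5, so each group's
# intercept becomes its value at the new zero point: a - 0.5 * b
a_treat = a - 0.5 * b

print(np.corrcoef(a, b)[0, 1])        # ~ 0: no intercept-slope correlation
print(np.corrcoef(a_treat, b)[0, 1])  # clearly negative: correlation induced by the shift
```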
{ "source": [ "https://stats.stackexchange.com/questions/337158", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/136579/" ] }
337,636
Imagine we have two time-series processes, which are stationary, producing: $x_t,y_t$. Is $z_t=\alpha x_t +\beta y_t$, $\forall \alpha, \beta \in \mathbb{R}$ also stationary? Any help would be appreciated. I would say yes, since it has an MA representation.
Perhaps surprisingly, this is not true. (Independence of the two time series will make it true, however.) I understand "stable" to mean stationary, because those words appear to be used interchangeably in millions of search hits, including at least one on our site . For a counterexample, let $X$ be a non-constant stationary time series for which every $X_t$ is independent of $X_s$, $s\ne t,$ and whose marginal distributions are symmetric around $0$. Define $$Y_t = (-1)^t X_t.$$ These plots show portions of the three time series discussed in this post. $X$ was simulated as a series of independent draws from a standard Normal distribution. To show that $Y$ is stationary, we need to demonstrate that the joint distribution of $(Y_{s+t_1}, Y_{s+t_2}, \ldots, Y_{s+t_n})$ for any $t_1\lt t_2 \lt \cdots \lt t_n$ does not depend on $s$. But this follows directly from the symmetry and independence of the $X_t$. These lagged scatterplots (for a sequence of 512 values of $Y$) illustrate the assertion that the joint bivariate distributions of $Y$ are as expected: independent and symmetric. (A "lagged scatterplot" displays the values of $Y_{t+s}$ against $Y_{t}$; values of $s=0,1,2$ are shown.) Nevertheless, choosing $\alpha=\beta=1/2$, we have $$\alpha X_t + \beta Y_t = X_t$$ for even $t$ and otherwise $$\alpha X_t + \beta Y_t = 0.$$ Since $X$ is non-constant, obviously these two expressions have different distributions for any $t$ and $t+1$, whence the series $(X+Y)/2$ is not stationary. The colors in the first figure highlight this non-stationarity in $(X+Y)/2$ by distinguishing the zero values from the rest.
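The counterexample is easy to simulate; a short numpy sketch (the seed and sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(512)
x = rng.standard_normal(t.size)      # iid N(0,1): stationary
y = (-1) ** t * x                    # also stationary (symmetric marginals)
z = (x + y) / 2                      # alpha = beta = 1/2

# z equals x at even t and is exactly 0 at odd t, so its marginal distribution
# depends on t: the combined process is not stationary
print(z[1::2].std(), z[0::2].std())  # 0.0 versus roughly 1.0
```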
{ "source": [ "https://stats.stackexchange.com/questions/337636", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/40252/" ] }
340,175
In a recent assignment, we were told to use PCA on the MNIST digits to reduce the dimensions from 64 (8 x 8 images) to 2. We then had to cluster the digits using a Gaussian Mixture Model. PCA using only 2 principal components does not yield distinct clusters and as a result the model is not able to produce useful groupings. However, using t-SNE with 2 components, the clusters are much better separated. The Gaussian Mixture Model produces more distinct clusters when applied to the t-SNE components. The difference in PCA with 2 components and t-SNE with 2 components can be seen in the following pair of images where the transformations have been applied to the MNIST dataset. I have read that t-SNE is only used for visualization of high dimensional data, such as in this answer , yet given the distinct clusters it produces, why is it not used as a dimensionality reduction technique that is then used for classification models or as a standalone clustering method?
The main reason that $t$-SNE is not used in classification models is that it does not learn a function from the original space to the new (lower-dimensional) one. As such, when we try to use our classifier on new / unseen data, we will not be able to map / pre-process these new data according to the previous $t$-SNE results. There is work on training a deep neural network to approximate $t$-SNE results (e.g., the "parametric" $t$-SNE paper) but this work has been superseded in part by the existence of (deep) autoencoders. Autoencoders are starting to be used as input / pre-processors to classifiers (especially DNNs) exactly because they get very good performance in training as well as generalising naturally to new data. $t$-SNE can potentially be used if we use non-distance-based clustering techniques like FMM (Finite Mixture Models) or DBSCAN (Density-based Models). As you correctly note, in such cases, the $t$-SNE output can be quite helpful. The issue in these use cases is that some people might try to read into the cluster placement and not only the cluster membership. As the global distances are lost, drawing conclusions from cluster placement can lead to bogus insights. Notice that just saying "hey, we found that all the 1s cluster together" does not offer great value if we cannot say what they are far from. If we just wanted to find the 1s we might as well have used classification to begin with (which brings us back to the use of autoencoders).
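In scikit-learn terms, the missing out-of-sample mapping is visible in the API itself: PCA exposes a transform method for new data, while TSNE (at least in current versions) only offers fit_transform on the data it was fitted to. A minimal sketch (the dataset choice and split point are mine):

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

X, _ = load_digits(return_X_y=True)

pca = PCA(n_components=2).fit(X[:1000])
new_points_2d = pca.transform(X[1000:])   # fine: PCA learned a map we can reuse

tsne = TSNE(n_components=2)
emb = tsne.fit_transform(X[:1000])        # embedding of the fitted points only
# tsne.transform(X[1000:])                # no such method: t-SNE learns no mapping
```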
{ "source": [ "https://stats.stackexchange.com/questions/340175", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/157316/" ] }
340,462
In reinforcement learning, I saw many notions with respect to control and prediction , like Monte Carlo prediction and Monte Carlo control. But what are we actually predicting and controlling?
The difference between prediction and control is to do with goals regarding the policy. The policy describes the way of acting depending on current state, and in the literature is often noted as $\pi(a|s)$, the probability of taking action $a$ when in state $s$. So, my question is for prediction, predict what? A prediction task in RL is where the policy is supplied, and the goal is to measure how well it performs. That is, to predict the expected total reward from any given state assuming the function $\pi(a|s)$ is fixed. for control, control what? A control task in RL is where the policy is not fixed, and the goal is to find the optimal policy. That is, to find the policy $\pi(a|s)$ that maximises the expected total reward from any given state. A control algorithm based on value functions (of which Monte Carlo Control is one example) usually works by also solving the prediction problem, i.e. it predicts the values of acting in different ways, and adjusts the policy to choose the best actions at each step. As a result, the output of the value-based algorithms is usually an approximately optimal policy and the expected future rewards for following that policy.
{ "source": [ "https://stats.stackexchange.com/questions/340462", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/132651/" ] }
341,535
When working with high dimensional data, it is almost useless to compare data points using euclidean distance - this is the curse of dimensionality. However, I have read that using different distance metrics, such as a cosine similarity, performs better with high dimensional data. Why is this? Is there some mathematical proof / intuition? My intuition is that it's because the cosine metric is only concerned with the angle between data points, and the fact that the plane between any two data points and the origin is 3 dimensional. But what if two data points have a very small angle but lie "far away" from each other (in an absolute difference sense) - then how would they still be considered close / similar?
Contrary to various unproven claims, cosine cannot be significantly better. It is easy to see that cosine is essentially the same as Euclidean on normalized data. The normalization takes away one degree of freedom. Thus, cosine on a 1000-dimensional space is about as "cursed" as Euclidean on a 999-dimensional space. What is usually different is the data where you would use one vs. the other. Euclidean is commonly used on dense, continuous variables. There every dimension matters, and a 20-dimensional space can be challenging. Cosine is mostly used on very sparse, discrete domains such as text. Here, most dimensions are 0 and do not matter at all. A 100,000-dimensional vector space may have just some 50 nonzero dimensions for a distance computation; and of these many will have a low weight (stopwords). So it is the typical use case of cosine that is not cursed, even though it theoretically is a very high dimensional space. There is a term for this: intrinsic dimensionality vs. representation dimensionality.
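The equivalence on normalized data follows from the identity $\|a-b\|^2 = 2(1-\cos(a,b))$ for unit vectors; a quick numeric check (the dimension and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.standard_normal((2, 1000))
a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)   # L2-normalize both vectors

cos = a @ b
euclid_sq = np.sum((a - b) ** 2)
print(euclid_sq, 2 * (1 - cos))   # equal up to floating-point error
```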
{ "source": [ "https://stats.stackexchange.com/questions/341535", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/178320/" ] }
341,557
Here is a simple statistics question I was given. I'm not really sure I understand it. X = the number of acquired points in an exam (multiple choice and a right answer is one point). Is X binomially distributed? The professor's answer was: Yes, because there are only right or wrong answers. My answer: No, because each question has a different "success probability" p. As I understand it, a binomial distribution is just a series of Bernoulli experiments, which each have a simple outcome (success or failure) with a given success probability p (and all are "identical" regarding p). E.g., flipping a (fair) coin 100 times gives 100 Bernoulli experiments, and all have p=0.5. But here the questions have different values of p, right?
I would agree with your answer. Usually this kind of data would nowadays be modeled with some kind of Item Response Theory model. For example, if you used the Rasch model, then the binary answer $X_{ni}$ would be modeled as $$ \Pr \{X_{ni}=1\} =\frac{e^{{\beta_n} - {\delta_i}}}{1 + e^{{\beta_n} - {\delta_i}}} $$ where $\beta_n$ can be thought of as the $n$-th person's ability and $\delta_i$ as the $i$-th question's difficulty. So the model enables you to catch the fact that different persons vary in abilities and questions vary in difficulty, and this is the simplest of the IRT models. Your professor's answer assumes that all questions have the same probability of "success" and are independent, since the binomial is the distribution of a sum of $n$ i.i.d. Bernoulli trials. It ignores the two kinds of dependencies described above. As noticed in the comments, if you looked at the distribution of answers of a particular person (so you don't have to care about between-person variability), or answers of different people on the same item (so there is no between-item variability), then the distribution would be Poisson-binomial, i.e. the distribution of the sum of $n$ non-i.i.d. Bernoulli trials. The distribution could be approximated with binomial, or Poisson, but that's all. Otherwise you're making the i.i.d. assumption. Even under a "null" assumption about guessing, this assumes that there are no guessing patterns, so people do not differ in how they guess and items do not differ in how they are guessed--so the guessing is purely random.
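A small simulation contrasts the i.i.d. assumption with item-varying success probabilities (the probabilities below are made up): the score of a single examinee over non-identical items is Poisson-binomial, and its variance is smaller than that of a binomial with the same mean.

```python
import numpy as np

rng = np.random.default_rng(0)
# made-up per-item success probabilities for one examinee
p_items = np.array([0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.05])
n_reps = 100_000

scores_pb = (rng.random((n_reps, p_items.size)) < p_items).sum(axis=1)   # Poisson-binomial
scores_bin = rng.binomial(p_items.size, p_items.mean(), n_reps)          # binomial with the same mean p

print(scores_pb.mean(), scores_bin.mean())  # means agree (about 4.55)
print(scores_pb.var(), scores_bin.var())    # Poisson-binomial variance is smaller
```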
{ "source": [ "https://stats.stackexchange.com/questions/341557", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/205121/" ] }
342,466
I've seen "residuals" defined variously as being either "predicted minus actual values" or "actual minus predicted values". For illustration purposes, to show that both formulas are widely used, compare the following Web searches: residual "predicted minus actual" residual "actual minus predicted" In practice, it almost never makes a difference, since the sign of the invidividual residuals don't usually matter (e.g. if they are squared or the absolute values are taken). However, my question is: is one of these two versions (prediction first vs. actual first) considered "standard"? I like to be consistent in my usage, so if there is a well-established conventional standard, I would prefer to follow it. However, if there is no standard, I am happy to accept that as an answer, if it can be convincingly demonstrated that there is no standard convention.
The residuals are always actual minus predicted. The models are: $$y=f(x;\beta)+\varepsilon$$ Hence, the residuals $\hat\varepsilon$, which are estimates of the errors $\varepsilon$: $$\hat\varepsilon=y-\hat y\\\hat y=f(x;\hat\beta)$$ I agree with @whuber that the sign doesn't really matter mathematically. It's just good to have a convention though. And the current convention is as in my answer. Since OP challenged my authority on this subject, I'm adding some references: (2008) Residual. In: The Concise Encyclopedia of Statistics. Springer, New York, NY, which gives the same definition. Fisher's "Statistical Methods for Research Workers" (1925) has the same definition too; see Section 26 in this 1934 version. Despite its unassuming title, this is an important work in its historical context.
{ "source": [ "https://stats.stackexchange.com/questions/342466", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/81392/" ] }
343,069
I saw this list here and couldn't believe there were so many ways to solve least squares. The "normal equations" on Wikipedia seemed to be a fairly straight forward way: $$ {\displaystyle {\begin{aligned}{\hat {\alpha }}&={\bar {y}}-{\hat {\beta }}\,{\bar {x}},\\{\hat {\beta }}&={\frac {\sum _{i=1}^{n}(x_{i}-{\bar {x}})(y_{i}-{\bar {y}})}{\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}}}\end{aligned}}} $$ So why not just use them? I assumed there must be a computational or precision issue given that in the first link above Mark L. Stone mentions that SVD or QR are popular methods in statistical software and that the normal equations are "TERRIBLE from reliability and numerical accuracy standpoint". However , in the following code, the normal equations are giving me accuracy to ~12 decimal places when compared to three popular python functions: numpy's polyfit ; scipy's linregress ; and scikit-learn's LinearRegression . What's more interesting is that the normal equation method is the fastest when n = 100000000. Computational times for me are: 2.5s for linregress; 12.9s for polyfit; 4.2s for LinearRegression; and 1.8s for the normal equation. Code: import numpy as np from sklearn.linear_model import LinearRegression from scipy.stats import linregress import timeit b0 = 0 b1 = 1 n = 100000000 x = np.linspace(-5, 5, n) np.random.seed(42) e = np.random.randn(n) y = b0 + b1*x + e # scipy start = timeit.default_timer() print(str.format('{0:.30f}', linregress(x, y)[0])) stop = timeit.default_timer() print(stop - start) # numpy start = timeit.default_timer() print(str.format('{0:.30f}', np.polyfit(x, y, 1)[0])) stop = timeit.default_timer() print(stop - start) # sklearn clf = LinearRegression() start = timeit.default_timer() clf.fit(x.reshape(-1, 1), y.reshape(-1, 1)) stop = timeit.default_timer() print(str.format('{0:.30f}', clf.coef_[0, 0])) print(stop - start) # normal equation start = timeit.default_timer() slope = np.sum((x-x.mean())*(y-y.mean()))/np.sum((x-x.mean())**2) stop = timeit.default_timer() print(str.format('{0:.30f}', slope)) print(stop - start)
For the problem $Ax \approx b$, forming the Normal equations squares the condition number of $A$ by forming $A^TA$. Roughly speaking, $\log_{10}(\operatorname{cond})$ is the number of digits you lose in your calculation if everything is done well. And this doesn't really have anything to do with forming the inverse of $A^TA$. No matter how $A^TAx = A^Tb$ is solved, you've already lost $\log_{10}(\operatorname{cond}(A^TA)) = 2\log_{10}(\operatorname{cond}(A))$ digits of accuracy. I.e., forming the Normal equations has doubled the number of digits of accuracy lost, right off the bat. If the condition number is small (one is the best possible), it doesn't matter much. If the condition number = $10^8$ and you use a stable method such as QR or SVD, you may have about 8 digits of accuracy in double precision. If you form the Normal equations, you've squared the condition number to $10^{16}$, and you have essentially no accuracy in your answer. Sometimes you get away with the Normal equations, and sometimes you don't.
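A quick numpy illustration of the squaring of the condition number (the matrix construction below is arbitrary, chosen only to make A ill-conditioned; it is not the question's benchmark problem):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
x = np.linspace(-5, 5, n)
# ill-conditioned design: two nearly collinear columns
A = np.column_stack([np.ones(n), x, x + 1e-7 * rng.standard_normal(n)])

print(np.linalg.cond(A))        # cond(A)
print(np.linalg.cond(A.T @ A))  # roughly cond(A)**2: the digits lost are doubled up front

b = A @ np.array([1.0, 2.0, 3.0])
coef_svd = np.linalg.lstsq(A, b, rcond=None)[0]   # SVD-based solve, numerically stable
coef_ne = np.linalg.solve(A.T @ A, A.T @ b)       # normal equations, may lose accuracy
print(coef_svd, coef_ne)
```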
{ "source": [ "https://stats.stackexchange.com/questions/343069", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/134691/" ] }
343,177
Background and Empirical Example I have two studies; I ran an experiment (Study 1) and then replicated it (Study 2). In Study 1, I found an interaction between two variables; in Study 2, this interaction was in the same direction but not significant. Here is the summary for Study 1's model: Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 5.75882 0.26368 21.840 < 2e-16 *** condSuppression -1.69598 0.34549 -4.909 1.94e-06 *** prej -0.01981 0.08474 -0.234 0.81542 condSuppression:prej 0.36342 0.11513 3.157 0.00185 ** And Study 2's model: Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 5.24493 0.24459 21.444 <2e-16 *** prej 0.13817 0.07984 1.731 0.0851 . condSuppression -0.59510 0.34168 -1.742 0.0831 . prej:condSuppression 0.13588 0.11889 1.143 0.2545 Instead of saying, "I guess I don't have anything, because I 'failed to replicate,'" what I did was combine the two data sets, created a dummy variable for what study the data came from, and then ran the interaction again after controlling for study dummy variable. This interaction was significant even after controlling for it, and I found that this two-way interaction between condition and dislike/prej was not qualified by a three-way interaction with the study dummy variable. Introducing Bayesian Analysis I had someone suggest that this is a great opportunity to use Bayesian analysis: In Study 2, I have information from Study 1 that I can use as prior information! In this way, Study 2 is doing a Bayesian updating from the frequentist, ordinary least squares results in Study 1. So, I go back and re-analyze the Study 2 model, now using informative priors on the coefficients: All the coefficients had a normal prior where the mean was the estimate in Study 1 and the standard deviation was the standard error in Study 1. This is a summary of the result: Estimates: mean sd 2.5% 25% 50% 75% 97.5% (Intercept) 5.63 0.17 5.30 5.52 5.63 5.74 5.96 condSuppression -1.20 0.20 -1.60 -1.34 -1.21 -1.07 -0.80 prej 0.02 0.05 -0.08 -0.01 0.02 0.05 0.11 condSuppression:prej 0.34 0.06 0.21 0.30 0.34 0.38 0.46 sigma 1.14 0.06 1.03 1.10 1.13 1.17 1.26 mean_PPD 5.49 0.11 5.27 5.41 5.49 5.56 5.72 log-posterior -316.40 1.63 -320.25 -317.25 -316.03 -315.23 -314.29 It looks like now we have pretty solid evidence for an interaction from the Study 2 analysis. This agrees with what I did when I simply stacked the data on top of one another and ran the model with study number as a dummy-variable. Counterfactual: What If I Ran Study 2 First? That got me thinking: What if I had run Study 2 first and then used the data from Study 1 to update my beliefs on Study 2? I did the same thing as above, but in reverse: I re-analyzed the Study 1 data using the frequentist, ordinary least squares coefficient estimates and standard deviations from Study 2 as prior means and standard deviations for my analysis of Study 1 data. The summary results were: Estimates: mean sd 2.5% 25% 50% 75% 97.5% (Intercept) 5.35 0.17 5.01 5.23 5.35 5.46 5.69 condSuppression -1.09 0.20 -1.47 -1.22 -1.09 -0.96 -0.69 prej 0.11 0.05 0.01 0.08 0.11 0.14 0.21 condSuppression:prej 0.17 0.06 0.05 0.13 0.17 0.21 0.28 sigma 1.10 0.06 0.99 1.06 1.09 1.13 1.21 mean_PPD 5.33 0.11 5.11 5.25 5.33 5.40 5.54 log-posterior -303.89 1.61 -307.96 -304.67 -303.53 -302.74 -301.83 Again, we see evidence for an interaction, however this might not have necessarily been the case. 
Note that the point estimates from the two Bayesian analyses aren't even inside one another's 95% credible intervals; the two credible intervals from the Bayesian analyses have more non-overlap than overlap.

What Is The Bayesian Justification For Time Precedence?

My question is thus: What is the justification that Bayesians have for respecting the chronology of how the data were collected and analyzed? I get results from Study 1 and use them as informative priors in Study 2 so that I use Study 2 to "update" my beliefs. But if we assume that the results I get are randomly taken from a distribution with a true population effect... then why do I privilege the results from Study 1? What is the justification for using Study 1 results as priors for Study 2 instead of taking Study 2 results as priors for Study 1? Does the order in which I collected the data and ran the analyses really matter? It does not seem like it should to me; what is the Bayesian justification for this? Why should I believe the point estimate is closer to .34 than it is to .17 just because I ran Study 1 first?

Responding to Kodiologist's Answer

Kodiologist remarked:

The second of these points to an important departure you have made from Bayesian convention. You didn't set a prior first and then fit both models in Bayesian fashion. You fit one model in a non-Bayesian fashion and then used that for priors for the other model. If you used the conventional approach, you wouldn't see the dependence on order that you saw here.

To address this, I fit the models for Study 1 and Study 2 where all regression coefficients had a prior of $\text{N}(0, 5)$. The cond variable was a dummy variable for experimental condition, coded 0 or 1; the prej variable, as well as the outcome, were both measured on 7-point scales ranging from 1 to 7. Thus, I think it is a fair choice of prior. Just by how the data are scaled, it would be very, very rare to see coefficients much larger than what that prior suggests. The mean estimates and the standard deviations of those estimates are about the same as in the OLS regression. Study 1:

Estimates:
                           mean     sd     2.5%      25%      50%      75%    97.5%
(Intercept)               5.756  0.270    5.236    5.573    5.751    5.940    6.289
condSuppression          -1.694  0.357   -2.403   -1.925   -1.688   -1.452   -0.986
prej                     -0.019  0.087   -0.191   -0.079   -0.017    0.040    0.150
condSuppression:prej      0.363  0.119    0.132    0.282    0.360    0.442    0.601
sigma                     1.091  0.057    0.987    1.054    1.088    1.126    1.213
mean_PPD                  5.332  0.108    5.121    5.259    5.332    5.406    5.542
log-posterior          -304.764  1.589 -308.532 -305.551 -304.463 -303.595 -302.625

And Study 2:

Estimates:
                           mean     sd     2.5%      25%      50%      75%    97.5%
(Intercept)               5.249  0.243    4.783    5.082    5.246    5.417    5.715
condSuppression          -0.599  0.342   -1.272   -0.823   -0.599   -0.374    0.098
prej                      0.137  0.079   -0.021    0.084    0.138    0.192    0.287
condSuppression:prej      0.135  0.120   -0.099    0.055    0.136    0.214    0.366
sigma                     1.132  0.056    1.034    1.092    1.128    1.169    1.253
mean_PPD                  5.470  0.114    5.248    5.392    5.471    5.548    5.687
log-posterior          -316.699  1.583 -320.626 -317.454 -316.342 -315.561 -314.651

Since these means and standard deviations are more or less the same as the OLS estimates, the order effect above still occurs. If I plug the posterior summary statistics from Study 1 into the priors when analyzing Study 2, I observe a different final posterior than when analyzing Study 2 first and then using those posterior summary statistics as priors for analyzing Study 1.
Even when I use the Bayesian means and standard deviations for the regression coefficients as priors instead of the frequentist estimates, I would still observe the same order effect. So the question remains: What is the Bayesian justification for privileging the study that came first?
Bayes' theorem says the posterior is equal to prior * likelihood after rescaling (so the probability sums to 1). Each observation has a likelihood which can be used to update the prior and create a new posterior:

posterior_1 = prior * likelihood_1
posterior_2 = posterior_1 * likelihood_2
...
posterior_n = posterior_{n-1} * likelihood_n

So that

posterior_n = prior * likelihood_1 * ... * likelihood_n

The commutativity of multiplication implies the updates can be done in any order. So if you start with a single prior, you can mix the observations from Study 1 and Study 2 in any order, apply Bayes' formula and arrive at the same final posterior.
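To make the commutativity concrete, here is a minimal Python sketch, assuming a conjugate normal model for a single mean with known observation variance (the data, prior values and function names are made up for illustration, not taken from the studies above). Starting from one prior, updating with Study 1 then Study 2 gives the same posterior as the reverse order.

import numpy as np

def normal_update(prior_mean, prior_var, data, obs_var):
    # Conjugate update for a normal mean with known observation variance
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / obs_var)
    post_mean = post_var * (prior_mean / prior_var + data.sum() / obs_var)
    return post_mean, post_var

rng = np.random.default_rng(0)
study1 = rng.normal(0.36, 1.0, size=80)   # pretend Study 1 observations
study2 = rng.normal(0.14, 1.0, size=80)   # pretend Study 2 observations
prior = (0.0, 25.0)                       # a single N(0, 5^2) prior

# Order A: prior -> Study 1 -> Study 2
m, v = normal_update(*prior, study1, 1.0)
m, v = normal_update(m, v, study2, 1.0)
print("Study 1 then Study 2:", m, v)

# Order B: prior -> Study 2 -> Study 1
m2, v2 = normal_update(*prior, study2, 1.0)
m2, v2 = normal_update(m2, v2, study1, 1.0)
print("Study 2 then Study 1:", m2, v2)    # identical up to floating-point error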
{ "source": [ "https://stats.stackexchange.com/questions/343177", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/130869/" ] }
343,183
Setting aside time-series models such as ARIMA: is it ever reasonable to regress a variable against time using the least squares method and build a model that way? Is this something ever done in practice, and if so, in what scenario?
{ "source": [ "https://stats.stackexchange.com/questions/343183", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/197885/" ] }
343,763
In my master's thesis, I am researching transfer learning for a specific use case: a traffic sign detector implemented as a Single Shot Detector with a VGG16 base network for classification. The research focuses on the problem of having a detector that is well trained and performs inference well on the traffic sign dataset it was trained on (I took the Belgium traffic sign detection dataset), but when the detector is used in another country (Germany, Austria, Italy, Spain, ...) the traffic signs look more or less different, which results in a certain unwanted loss. For an overview of this topic, I would recommend the Wikipedia article. ~~~ the following section is about my research questions ~~~ So, having a couple of examples of traffic signs in the new country, is it better to (1) fine-tune the network, (2) transfer-learn the network and freeze some of the convolution layers (as a comparison), or (3) learn the new country from scratch? And even for the very first detector (the one trained from scratch on the comprehensive Belgium dataset): does it have any advantage to load pretrained weights from published model zoos (for example VGG16/COCO) and then fine-tune/transfer-learn based on this? Now what am I asking here: I've implemented my detector not on my own, but based upon an original SSD port to Keras/TensorFlow (from here), and I have already trained it with different variations (Belgium from scratch, pretrained with MS COCO, transfer to Germany, convolution layers frozen, fine-tuned to Germany). After weeks of training I can say that Belgium with random weights from scratch converges fastest (after only 40 epochs/2 days my custom SSD loss function is down to a value of 3), while all other variations need much more time and more epochs, and the loss never falls below a value of 9. I also found pretrained weights for traffic sign classification with VGG16, which I thought should be the ideal base for transfer learning on this topic, but this detector was the worst performing so far (the loss stagnated at 11 even when the learning rate was changed, and after 100 epochs it overfitted). It seems that transfer learning or fine-tuning on these detectors doesn't have any advantage at all. It's likely that I am doing something wrong or that I misunderstand the purpose of transfer learning (I thought that it should speed up learning, as most layers aren't trainable and therefore no calculation is done for them). I don't know if this is the proper platform for discussion of this topic; perhaps you know a Slack or Gitter channel where this belongs. I just don't know if I am stuck, or if I am doing something horribly wrong.
Transfer learning is when a model developed for one task is reused to work on a second task. Fine-tuning is one approach to transfer learning in which you change the model output to fit the new task and train only the output layers. In transfer learning or domain adaptation, we train the model with a dataset; then we train the same model with another dataset that has a different distribution of classes (or even with classes other than those in the first training dataset). In fine-tuning, an approach to transfer learning, we have a dataset and we use, let's say, 90% of it in training; then we train the same model with the remaining 10%. Usually, we change the learning rate to a smaller one, so it does not have a significant impact on the already adjusted weights. You can also take a base model that works for a similar task and then freeze some of its layers to keep the old knowledge when performing the new training session with the new data. The output layer can also be different, with some of it frozen during training.

In my experience, learning from scratch leads to better results, but it is much more costly than the other approaches, especially regarding time and resource consumption. When using transfer learning you should freeze some layers, mainly the pre-trained ones, train only the added ones, and decrease the learning rate so the weights are adjusted without losing their meaning for the network. If you increase the learning rate you will normally face poor results due to the big steps in the gradient descent optimisation; this can lead to a state where the neural network cannot find the global minimum, only a local one. Using a pre-trained model from a similar task usually gives great results when we use fine-tuning. However, if you do not have enough data in the new dataset, or your hyperparameters are not the best ones, you can get unsatisfactory results: machine learning always depends on its dataset and the network's parameters. In that case, you should use only the "standard" transfer learning. So, we need to evaluate the trade-off between resource and time consumption and the accuracy we desire in order to choose the best approach.
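For reference, a minimal Keras-style sketch of the "freeze the pre-trained base, train only the new head, then fine-tune with a small learning rate" recipe described above. It assumes TensorFlow 2.x and ImageNet VGG16 weights rather than the asker's traffic-sign weights; the input shape, number of classes, layer counts and learning rates are placeholders only.

import tensorflow as tf

num_classes = 43  # placeholder: number of traffic-sign classes in the new country

# Pre-trained convolutional base (ImageNet weights here; any pre-trained base works)
base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False  # freeze the pre-trained layers for the first training phase

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])

# Phase 1: train only the new head
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)   # train_ds/val_ds are placeholders

# Phase 2 (fine-tuning): unfreeze the top of the base and use a much smaller learning rate
base.trainable = True
for layer in base.layers[:-4]:   # keep the earliest layers frozen
    layer.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)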
{ "source": [ "https://stats.stackexchange.com/questions/343763", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/153318/" ] }
344,006
I'm trying to work out how to perform some hypothesis testing on a linear regression (the null hypothesis being no correlation). Every guide and page on the subject I run into seems to be using a t-test. But I don't understand what a t-test for linear regression actually means. A t-test, unless I have a completely wrong understanding or mental model, is used to compare two populations. But the regressor and regressand are not samples of similar populations, and might not even be of the same unit, so it doesn't make sense to compare them. So, when using a t-test on a linear regression, what is it that we're actually doing?
You're probably thinking of the two sample $t$ test because that's often the first place the $t$ distribution comes up. But really all a $t$ test means is that the reference distribution for the test statistic is a $t$ distribution. If $Z \sim \mathcal N(0,1)$ and $S^2 \sim \chi^2_d$ with $Z$ and $S^2$ independent, then $$ \frac{Z}{\sqrt{S^2 / d}} \sim t_d $$ by definition. I'm writing this out to emphasize that the $t$ distribution is just a name that was given to the distribution of this ratio because it comes up a lot, and anything of this form will have a $t$ distribution. For the two sample t test, this ratio appears because under the null the difference in means is a zero-mean Gaussian and the variance estimate for independent Gaussians is an independent $\chi^2$ (the independence can be shown via Basu's theorem which uses the fact that the standard variance estimate in a Gaussian sample is ancillary to the population mean, while the sample mean is complete and sufficient for that same quantity).

With linear regression we basically get the same thing. In vector form, $\hat \beta \sim \mathcal N(\beta, \sigma^2 (X^T X)^{-1})$. Let $S^2_j = (X^T X)^{-1}_{jj}$ and assume the predictors $X$ are non-random. If we knew $\sigma^2$ we'd have $$ \frac{\hat \beta_j - 0}{\sigma S_j} \sim \mathcal N(0, 1) $$ under the null $H_0 : \beta_j = 0$, so we'd actually have a Z test. But once we estimate $\sigma^2$ we end up with a $\chi^2$ random variable that, under our normality assumptions, turns out to be independent of our statistic $\hat \beta_j$, and then we get a $t$ distribution.

Here are the details: assume $y \sim \mathcal N(X\beta, \sigma^2 I)$. Letting $H = X(X^TX)^{-1}X^T$ be the hat matrix, we have $$ \|e\|^2 = \|(I-H)y\|^2 = y^T(I-H)y. $$ $H$ is idempotent, so we have the really nice result that $$ y^T(I-H)y / \sigma^2 \sim \chi^2_{n-p}(\delta) $$ with non-centrality parameter $\delta = \beta^TX^T(I-H)X\beta = \beta^T(X^TX - X^T X)\beta = 0$, so actually this is a central $\chi^2$ with $n-p$ degrees of freedom (this is a special case of Cochran's theorem ). I'm using $p$ to denote the number of columns of $X$, so if one column of $X$ gives the intercept then we'd have $p-1$ non-intercept predictors. Some authors use $p$ to be the number of non-intercept predictors, so sometimes you might see something like $n-p-1$ in the degrees of freedom there, but it's all the same thing. The result of this is that $E(e^Te / \sigma^2) = n-p$, so $\hat \sigma^2 := \frac{1}{n-p} e^T e$ works great as an estimator of $\sigma^2$. This means that $$ \frac{\hat \beta_j}{\hat \sigma S_j}= \frac{\hat \beta_j}{S_j\sqrt{e^Te / (n-p)}} = \frac{\hat \beta_j}{\sigma S_j\sqrt{\frac{e^Te}{\sigma^2(n-p)}}} $$ is the ratio of a standard Gaussian to the square root of a chi-squared divided by its degrees of freedom.

To finish this, we need to show independence, and we can use the following result:

Result: for $Z \sim \mathcal N_k(\mu, \Sigma)$ and matrices $A$ and $B$ in $\mathbb R^{l\times k}$ and $\mathbb R^{m\times k}$ respectively, $AZ$ and $BZ$ are independent if and only if $A\Sigma B^T = 0$ (this is exercise 58(b) in chapter 1 of Jun Shao's Mathematical Statistics ).

We have $\hat \beta = (X^TX)^{-1}X^T y$ and $e = (I-H)y$ where $y \sim \mathcal N(X\beta, \sigma^2 I)$. This means $$ (X^TX)^{-1}X^T \cdot \sigma^2 I \cdot (I-H)^T = \sigma^2 \left((X^TX)^{-1}X^T - (X^TX)^{-1}X^TX(X^TX)^{-1}X^T\right) = 0 $$ so $\hat \beta \perp e$, and therefore $\hat \beta \perp e^T e$.
The upshot is we now know $$ \frac{\hat \beta_j}{\hat \sigma S_j} \sim t_{n-p} $$ as desired (under all of the above assumptions). Here's the proof of that result. Let $C = {A \choose B}$ be the $(l+m)\times k$ matrix formed by stacking $A$ on top of $B$. Then $$ CZ = {AZ \choose BZ} \sim \mathcal N \left({A\mu \choose B\mu}, C\Sigma C^T \right) $$ where $$ C\Sigma C^T = {A \choose B} \Sigma \left(\begin{array}{cc} A^T & B^T \end{array}\right) = \left(\begin{array}{cc}A\Sigma A^T & A\Sigma B^T \\ B\Sigma A^T & B\Sigma B^T\end{array}\right). $$ $CZ$ is a multivariate Gaussian and it is a well-known result that two components of a multivariate Gaussian are independent if and only if they are uncorrelated, so the condition $A\Sigma B^T = 0$ turns out to be exactly equivalent to the components $AZ$ and $BZ$ in $CZ$ being uncorrelated. $\square$
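A small simulation sketch (illustrative only; the sample size, design matrix and seed are arbitrary) that checks the result numerically: under the null $\beta_j = 0$ with Gaussian errors, the statistic $\hat\beta_j / (\hat\sigma S_j)$ should have quantiles matching the $t_{n-p}$ distribution.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, p = 30, 3                      # n observations, p columns (intercept + 2 predictors)
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta = np.array([1.0, 0.5, 0.0])  # the coefficient being tested (index 2) is truly zero
j = 2

t_stats = []
for _ in range(20000):
    y = X @ beta + rng.normal(size=n)
    XtX_inv = np.linalg.inv(X.T @ X)
    beta_hat = XtX_inv @ X.T @ y
    e = y - X @ beta_hat
    sigma2_hat = e @ e / (n - p)
    se_j = np.sqrt(sigma2_hat * XtX_inv[j, j])
    t_stats.append(beta_hat[j] / se_j)

# Compare empirical quantiles with the t_{n-p} reference distribution
qs = [0.05, 0.25, 0.5, 0.75, 0.95]
print(np.quantile(t_stats, qs))
print(stats.t.ppf(qs, df=n - p))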
{ "source": [ "https://stats.stackexchange.com/questions/344006", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/206659/" ] }
344,053
Predictor A is a continuous predictor. Predictor B is a dummy coded categorical predictor with three levels. I run a regression including PA, PB and PA*PB. The results indicate a significant effect of PB (level 3 vs. level 1) and an interaction between PA and PB (level 2 vs. level 1). I understand that so long as the interaction is in the model, the effect of PB (level 3 vs. level 1) is only a simple effect. Does it make sense to remove the interaction between PA and PB in order to interpret the main effect of PB (level 3 vs. level 1)? Or am I stuck only being able to talk about the simple effect of PB (3 vs 1) because another level of PB is involved in an interaction?
{ "source": [ "https://stats.stackexchange.com/questions/344053", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/67137/" ] }
344,189
I have read something like 6 articles on Markov chain Monte Carlo methods, and there are a couple of basic points I can't seem to wrap my head around.

1) How can you "draw samples from the posterior distribution" without first knowing the properties of said distribution?
2) Again, how can you determine which parameter estimate "fits your data better" without first knowing your posterior distribution?
3) If you already know the properties of your posterior distribution (as is indicated by 1) and 2)), then what's the point of using this method in the first place?

This just seems like circular reasoning to me.
If this was not a clear conflict of interest , I would suggest you invest more time on the topic of MCMC algorithm and read a whole book rather than a few (6?) articles that can only provide a partial perspective. How can you "draw samples from the posterior distribution" without first knowing the properties of said distribution? MCMC is based on the assumption that the product$$\pi(\theta)f(x^\text{obs}|\theta)$$can be numerically computed (hence is known) for a given $\theta$, where $x^\text{obs}$ denotes the observation, $\pi(\cdot)$ the prior, and $f(x^\text{obs}|\theta)$ the likelihood. This does not imply an in-depth knowledge about this function of $\theta$. Still, from a mathematical perspective the posterior density is completely and entirely determined by $$\pi(\theta|x^\text{obs})=\dfrac{\pi(\theta)f(x^\text{obs}|\theta)}{\int_ \Theta \pi(\theta)f(x^\text{obs}|\theta)\,\text{d}\theta}\tag{1}$$Thus, it is not particularly surprising that simulation methods can be found using solely the input of the product $$\pi(\theta)\times f(x^\text{obs}|\theta)$$ The amazing feature of Monte Carlo methods is that some methods like Markov chain Monte Carlo (MCMC) algorithms do not formally require anything further than this computation of the product, when compared with accept-reject algorithms for instance, which calls for an upper bound. A related software like Stan operates on this input and still delivers high end performances with tools like NUTS and HMC , including numerical differentiation. A side comment written later in the light of some of the other answers is that the normalising constant $$\mathfrak{Z}=\int_ \Theta \pi(\theta)f(x^\text{obs}|\theta)\,\text{d}\theta$$is not particularly useful for conducting Bayesian inference in that, were I to "know" its exact numerical value in addition to the function in the numerator of (1), $\mathfrak{Z}=3.17232\,10^{-23}$ say, I would not have made any progress towards finding Bayes estimates or credible regions. (The only exception when this constant matters is in conducting Bayesian model comparison .) When teaching about MCMC algorithms, my analogy is that in a videogame we have a complete map (the posterior) and a moving player that can only illuminate a portion of the map at once. Visualising the entire map and spotting the highest regions is possible with enough attempts (and a perfect remembrance of things past!). A local and primitive knowledge of the posterior density (up to a constant) is therefore sufficient to learn about the distribution. Again, how can you determine which parameter estimate "fits your data better" without first knowing your posterior distribution? Again, the distribution is known in a mathematical or numerical sense. The Bayes parameter estimates provided by MCMC, if needed, are based on the same principle as most simulation methods, the law of large numbers . More generally, Monte Carlo based (Bayesian) inference replaces the exact posterior distribution with an empirical version. Hence, once more, a numerical approach to the posterior, one value at a time, is sufficient to build a convergent representation of the associated estimator. The only restriction is the available computing time, i.e., the number of terms one can call in the law of large numbers approximation. If you already know the properties of your posterior distribution (as is indicated by 1) and 2)), then what's the point of using this method in the first place? 
It is the very paradox of (1) that it is a perfectly well-defined mathematical object even though most integrals associated with (1), including its denominator, may be out of reach of analytical and numerical methods. Exploiting the stochastic nature of the object by simulation methods ( Monte Carlo integration ) is a natural and manageable alternative that has proven immensely helpful.

Connected X validated questions:
Confusion related to MCMC technique
What are Monte Carlo simulations?
Is Markov chain based sampling the “best” for Monte Carlo sampling? Are there alternative schemes available?
MCMC; Can we be sure that we have a ''pure'' and ''large enough'' sample from the posterior? How can it work if we are not?
How would you explain Markov Chain Monte Carlo (MCMC) to a layperson?
How to do MC integration from Gibbs sampling of posterior?
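To make the "only the product prior × likelihood is needed" point concrete, here is a minimal random-walk Metropolis sketch for a one-parameter model; the model, data and tuning constants are made up for illustration. Note that the acceptance step only ever uses the unnormalised log-posterior, so the constant $\mathfrak{Z}$ never has to be computed.

import numpy as np

rng = np.random.default_rng(2)
x_obs = rng.normal(1.5, 1.0, size=50)      # made-up observations

def log_unnorm_posterior(theta):
    # log prior N(0, 10^2) + log likelihood N(theta, 1), each up to an additive constant
    log_prior = -0.5 * theta**2 / 100.0
    log_lik = -0.5 * np.sum((x_obs - theta) ** 2)
    return log_prior + log_lik

theta = 0.0
samples = []
for _ in range(20000):
    proposal = theta + rng.normal(scale=0.3)         # random-walk proposal
    log_alpha = log_unnorm_posterior(proposal) - log_unnorm_posterior(theta)
    if np.log(rng.uniform()) < log_alpha:            # accept/reject step
        theta = proposal
    samples.append(theta)

samples = np.array(samples[5000:])                   # drop burn-in
print(samples.mean(), samples.std())                 # Monte Carlo posterior mean and sd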
{ "source": [ "https://stats.stackexchange.com/questions/344189", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/91971/" ] }
344,258
I haven't found a satisfactory answer to this from Google. Of course, if the data I have is of the order of millions, then deep learning is the way to go. And I have read that when I do not have big data then maybe it is better to use other methods in machine learning. The reason given is over-fitting. By machine learning I mean: looking at the data, feature extraction, crafting new features from what is collected, removing heavily correlated variables, and so on: the whole machine learning nine yards. And I have been wondering: why is it that neural networks with one hidden layer are not a panacea for machine learning problems? They are universal estimators, and over-fitting can be managed with dropout, L2 regularization, L1 regularization and batch normalization. Training speed is not generally an issue if we have just 50,000 training examples. They are better at test time than, let us say, random forests. So why not: clean the data, impute missing values as you would generally do, center the data, standardize the data, throw it at an ensemble of neural networks with one hidden layer and apply regularization till you see no over-fitting, and then train them to the end. There are no issues with gradient explosion or vanishing gradients, since it is just a 2-layered network. If deep layers were needed, that would mean hierarchical features are to be learned, and then other machine learning algorithms are no good as well. For example, an SVM is a neural network with hinge loss only. An example where some other machine learning algorithm would outperform a carefully regularized 2-layered (maybe 3?) neural network would be appreciated. You can give me the link to the problem and I will train the best neural network that I can, and we can see if 2-layered or 3-layered neural networks fall short of any other benchmark machine learning algorithm.
Each machine learning algorithm has a different inductive bias, so it's not always appropriate to use neural networks. A linear trend will always be learned best by simple linear regression rather than a ensemble of nonlinear networks. If you take a look at the winners of past Kaggle competitions , excepting any challenges with image/video data, you will quickly find that neural networks are not the solution to everything. Some past solutions here. apply regularization till you see no over-fitting and then train them to the end There is no guarantee that you can apply enough regularization to prevent overfitting without completely destroying the capacity of the network to learn anything. In real life, it is rarely feasible to eliminate the train-test gap, and that's why papers still report train and test performance. they are universal estimators This is only true in the limit of having an unbounded number of units, which isn't realistic. you can give me the link to the problem and i would train the best neural network that i can and we can see if 2 layered or 3 layered neural networks falls short of any other benchmark machine learning algorithm An example problem which I expect a neural network would never be able to solve: Given an integer, classify as prime or not-prime. I believe this could be solved perfectly with a simple algorithm that iterates over all valid programs in ascending length and finds the shortest program which correctly identifies the prime numbers. Indeed, this 13 character regex string can match prime numbers, which wouldn't be computationally intractable to search. Can regularization take a model from one that overfits to the one that has its representational power severely hamstrung by regularization? Won't there always be that sweet spot in between? Yes, there is a sweet spot, but it is usually way before you stop overfitting. See this figure: If you flip the horizontal axis and relabel it as "amount of regularization", it's pretty accurate -- if you regularize until there is no overfitting at all, your error will be huge. The "sweet spot" occurs when there is a bit of overfitting, but not too much. How is a 'simple algorithm that iterates over all valid programs in ascending length and finds the shortest program which correctly identifies the prime numbers.' an algorithm that learns? It finds the parameters $\theta$ such that we have a hypothesis $H(\theta)$ which explains the data, just like backpropagation finds the parameters $\theta$ which minimize the loss (and by proxy, explains the data). Only in this case, the parameter is a string instead of many floating point values. so if i get you correctly you are making the argument that if the data is not substantial the deep network will never hit the validation accuracy of the best shallow network given the best hyperparameters for both? Yes. Here is an ugly but hopefully effective figure to illustrate my point. but that doesnt make sense. a deep network can just learn a 1-1 mapping above the shallow The question is not "can it", but "will it", and if you are training backpropagation, the answer is probably not. We discussed the fact that larger networks will always work better than smaller networks Without further qualification, that claim is just wrong.
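As a rough sketch of the first point (the data, network size and seed here are made up, and the exact numbers will vary from run to run): on a tiny, noisy linear trend, ordinary least squares typically generalises better than a small neural network, because its inductive bias matches the data-generating process.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(3)
X = rng.uniform(-3, 3, size=(60, 1))
y = 2.0 * X[:, 0] + 1.0 + rng.normal(scale=1.0, size=60)   # noisy linear trend

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

lin = LinearRegression().fit(X_tr, y_tr)
net = MLPRegressor(hidden_layer_sizes=(100,), max_iter=5000,
                   random_state=0).fit(X_tr, y_tr)

print("OLS test MSE:", mean_squared_error(y_te, lin.predict(X_te)))
print("MLP test MSE:", mean_squared_error(y_te, net.predict(X_te)))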
{ "source": [ "https://stats.stackexchange.com/questions/344258", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/155692/" ] }
344,498
For statistical and machine learning models, there are multiple levels of interpretability: 1) the algorithm as a whole, 2) parts of the algorithm in general, 3) parts of the algorithm on particular inputs; and these three levels split into two parts each, one for training and one for function evaluation. The last two levels are much closer to each other than to the first. I'm asking about #2, which usually leads to better understanding of #3. (If those are not what 'interpretability' means, then what should I be thinking?) As far as interpretability goes, logistic regression is one of the easiest to interpret. Why did this instance pass the threshold? Because that instance had this particular positive feature and it has a larger coefficient in the model. It's so obvious! A neural network is the classic example of a model that is difficult to interpret. What do all those coefficients mean? They all add up in such complicated, crazy ways that it is hard to say what any particular coefficient is really doing. But with all the deep neural nets coming out, it feels like things are becoming clearer. The DL models (for, say, vision) seem to capture things like edges or orientation in early layers, and in later layers it seems like some nodes are actually semantic (like the proverbial 'grandmother cell' ). For example: ( from 'Learning About Deep Learning' ) This is a graphic (of many out there) created by hand for a presentation, so I am very skeptical. But it is evidence that somebody thinks that is how it works. Maybe in the past there just weren't enough layers for us to find recognizable features; the models were successful, just not easy to analyze post hoc. But maybe the graphic is just wishful thinking. Maybe NNs are truly inscrutable. But the many graphics with their nodes labeled with pictures are also really compelling. Do DL nodes really correspond to features?
Interpretation of deep models is still challenging. Your post only mentions CNNs for computer vision applications, but (deep or shallow) feed-forward networks and recurrent networks remain challenging to understand. Even in the case of CNNs which have obvious "feature detector" structures, such as edges and orientation of pixel patches, it's not completely obvious how these lower-level features are aggregated upwards, or what, precisely, is going on when these vision features are aggregated in a fully-connected layer. Adversarial examples show how interpretation of the network is difficult. An adversarial example has some tiny modification made to it, but results in a dramatic shift in the decision made by the model. In the context of image classification, a tiny amount of noise added to an image can change an image of a lizard to have a highly confident classification as another animal, like a (species of) dog. This is related to interpretability in the sense that there is a strong, unpredictable relationship between the (small) amount of noise and the (large) shift in the classification decision. Thinking about how these networks operate, it makes some sense: computations at previous layers are propagated forward, so that a number of errors -- small, unimportant errors to a human -- are magnified and accumulate as more and more computations are performed using the "corrupted" inputs. On the other hand, the existence of adversarial examples shows that the interpretation of any node as a particular feature or class is difficult, since the fact that the node is activated might have little to do with the actual content of the original image, and that this relationship is not really predictable in terms of the original image. But in the example images below, no humans are deceived about the content of the images: you wouldn't confuse the flag pole for a dog. How can we interpret these decisions, either in aggregate (a small noise pattern "transmutes" a lizard into dog, or a flagpole into a dog) or in smaller pieces (that several feature detectors are more sensitive to the noise pattern than the actual image content)? HAAM is a promising new method to generate adversarial images using harmonic functions. ("Harmonic Adversarial Attack Method" Wen Heng, Shuchang Zhou, Tingting Jiang.) Images generated using this method can be used to emulate lighting/shadow effects and are generally even more challenging for humans to detect as having been altered. As an example, see this image, taken from " Universal adversarial perturbations ", by Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. I chose this image just because it was one of the first adversarial images I came across. This image establishes that a particular noise pattern has a strange effect on the image classification decision, specifically that you can make a small modification to an input image and make the classifier think the result is a dog. Note that the underlying, original image is still obvious: in all cases, a human would not be confused into thinking that any of the non-dog images are dogs. Here's a second example from a more canonical paper, " EXPLAINING AND HARNESSING ADVERSARIAL EXAMPLES " by Ian J. Goodfellow, Jonathon Shlens & Christian Szegedy. The added noise is completely indistinguishable in the resulting image, yet the result is very confidently classified as the wrong result, a gibbon instead of a panda. 
In this case, there is at least a passing similarity between the two classes, since gibbons and pandas are at least somewhat biologically and aesthetically similar in the broadest sense. This third example is taken from " Generalizable Adversarial Examples Detection Based on Bi-model Decision Mismatch " by João Monteiro, Zahid Akhtar and Tiago H. Falk. It establishes that the noise pattern can be indistinguishable to a human yet still confuse the classifier. For reference, a mudpuppy is a dark-colored animal with four limbs and a tail, so it does not really have much resemblance to a goldfish. I just found this paper today: Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, Rob Fergus, " Intriguing properties of neural networks ". The abstract includes this intriguing quotation: First, we find that there is no distinction between individual high level units and random linear combinations of high level units, according to various methods of unit analysis. It suggests that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks. So, rather than having 'feature detectors' at the higher levels, the nodes merely represent coordinates in a feature space which the network uses to model the data.
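For a sense of how little machinery such perturbations require, here is a minimal sketch of the fast gradient sign method (FGSM) in the spirit of the Goodfellow et al. paper cited above. It assumes TensorFlow 2.x; the model, input image, label and epsilon value are placeholders rather than anything taken from the papers.

import tensorflow as tf

def fgsm_perturb(model, image, true_label, eps=0.01):
    # One-step FGSM: move the input in the direction that increases the loss
    image = tf.convert_to_tensor(image[None, ...])      # add a batch dimension
    label = tf.convert_to_tensor([true_label])
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False)

    with tf.GradientTape() as tape:
        tape.watch(image)
        probs = model(image)
        loss = loss_fn(label, probs)

    grad = tape.gradient(loss, image)                    # gradient w.r.t. the pixels
    adversarial = image + eps * tf.sign(grad)            # tiny signed perturbation
    return tf.clip_by_value(adversarial, 0.0, 1.0)[0]

# Hypothetical usage: model = tf.keras.models.load_model("classifier.h5")
# x_adv = fgsm_perturb(model, x, y_true, eps=0.007)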
{ "source": [ "https://stats.stackexchange.com/questions/344498", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/3186/" ] }
344,507
In trying to compute a discrete probability of some event $E$, call it, $P(E)$, one typically takes $P(E) = n(E) / n(S)$, where $n(E)$ is the number in the event, and $n(S)$ is the number in the sample space. (I may be a bit loose with terminology here, but hopefully, everyone will understand what I'm trying to express!). Anyhow, my question is how does $P(E) = n(E) / n(S)$ generalize to the case of a conditional probability, something like $P(E|F)$? Thanks!
{ "source": [ "https://stats.stackexchange.com/questions/344507", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/109091/" ] }
344,794
Every IID sequence of random variables is considered to be exchangeable. I understand why it's necessary for the random variables to be identically distributed to assume exchangeability, but why the need for independence (or is there a need)? In the context of the definition, which loosely states that any permutation of the random variables has the same joint distribution, isn't it sufficient for the random variables to be identically distributed to be able to reorder them, or must both conditions be met?
I think the term "identically distributed" is mostly misleading when not used to discuss independent random variables. Consider the following example: $$\begin{pmatrix}X_1 \\ X_2 \\ X_3\end{pmatrix} \sim \mathrm{N}\left(\begin{pmatrix}0 \\ 0 \\ 0\end{pmatrix},\begin{pmatrix}1 &0 & 0 \\ 0&1&0.1 \\ 0&0.1&1\end{pmatrix} \right)$$ The components of the vector $(X_1, X_2, X_3)^T$ are neither independent nor exchangeable, but they are identically distributed: the marginal distributions are all standard normal, $X_i \sim \mathrm{N}(0,1)$, $i = 1,2,3$. Next example: $$\begin{pmatrix}Y_1 \\ Y_2 \\ Y_3\end{pmatrix} \sim \mathrm{N}\left(\begin{pmatrix}0 \\ 0 \\ 0\end{pmatrix},\begin{pmatrix}1 &0.1 & 0.1 \\ 0.1&1&0.1 \\ 0.1&0.1&1\end{pmatrix} \right)$$ The components are now not independent but exchangeable. The marginal distributions are again identical, standard normal: $Y_i \sim \mathrm{N}(0,1)$, $i = 1,2,3$. In the end we have the following implications: $$ \text{i.i.d. } \Rightarrow \text{ exchangeability } \Rightarrow \text{ marginals identical}.$$ The counterexamples above show that the converse implications are all wrong.
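A quick simulation sketch of the first example (the sample size and seed are arbitrary): the estimated marginal means and standard deviations are all close to 0 and 1, while the correlation matrix is not symmetric under relabelling of the components, which is what breaks exchangeability.

import numpy as np

rng = np.random.default_rng(4)
cov = np.array([[1.0, 0.0, 0.0],
                [0.0, 1.0, 0.1],
                [0.0, 0.1, 1.0]])
X = rng.multivariate_normal(mean=np.zeros(3), cov=cov, size=200_000)

print(X.mean(axis=0))                 # all close to 0: identical N(0,1) marginals
print(X.std(axis=0))                  # all close to 1
print(np.corrcoef(X, rowvar=False))   # corr(X2,X3) ~ 0.1, the others ~ 0: not exchangeable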
{ "source": [ "https://stats.stackexchange.com/questions/344794", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/198413/" ] }
345,068
I think this is kinda basic, but say I have a random variable $X$, is the probability $P(X \leq a)$ the same as $P(f(X) \leq f(a))$ for any real-valued continuous function $f$?
This holds only if $f$ is monotonically increasing. If $f$ is monotonically decreasing, then $P(f(X)\leq f(a)) = P(X \geq a)$. For instance, if $f(x) = -x$ and $X$ is an ordinary die roll, then $P(X \leq 5) = \frac56$ but $P(-X \leq -5) = P(X \geq 5) = \frac13$. If $f$ switches between increasing and decreasing, then it's even more complicated. Note there's also the trivial case of $f(x) \equiv 0$, in which $P(f(X) \leq f(a)) = P(0 \leq 0) = 1$ for every $a$, no matter what $P(X \leq a)$ is.
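The die example can be checked by brute-force enumeration; a tiny sketch (illustrative only):

from fractions import Fraction

outcomes = [1, 2, 3, 4, 5, 6]                       # a fair six-sided die
p = lambda event: Fraction(sum(event(x) for x in outcomes), len(outcomes))

a = 5
f = lambda x: -x                                    # monotonically decreasing f
print(p(lambda x: x <= a))                          # P(X <= 5)   = 5/6
print(p(lambda x: f(x) <= f(a)))                    # P(-X <= -5) = 1/3
print(p(lambda x: x >= a))                          # matches P(X >= 5)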
{ "source": [ "https://stats.stackexchange.com/questions/345068", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/190494/" ] }
345,737
An issue I've seen frequently brought up in the context of neural networks in general, and deep neural networks in particular, is that they're "data hungry": that is, they don't perform well unless we have a large data set with which to train the network. My understanding is that this is due to the fact that NNets, especially deep NNets, have a large number of degrees of freedom. So as a model, a NNet has a very large number of parameters, and if the number of parameters of the model is large relative to the number of training data points, there is an increased tendency to overfit. But why isn't this issue solved by regularization? As far as I know, NNets can use L1 and L2 regularization and also have their own regularization methods like dropout which can reduce the number of parameters in the network. Can we choose our regularization methods such that they enforce parsimony and limit the size of the network? To clarify my thinking: say we are using a large deep NNet to try to model our data, but the data set is small and could actually be modeled by a linear model. Then why don't the network weights converge in such a way that one neuron simulates the linear regression and all the others converge to zeros? Why doesn't regularization help with this?
The simple way to explain it is that regularization helps to not fit to the noise; it doesn't do much in terms of determining the shape of the signal. If you think of deep learning as a giant glorious function approximator, then you realize that it needs a lot of data to define the shape of a complex signal. If there were no noise, then increasing the complexity of the NN would produce a better approximation. There would not be any penalty to the size of the NN; bigger would have been better in every case. Consider a Taylor approximation: more terms is always better for a non-polynomial function (ignoring numerical precision issues). This breaks down in the presence of noise, because you start fitting to the noise. So, here comes regularization to help: it may reduce fitting to the noise, thus allowing us to build a bigger NN to fit nonlinear problems.

The following discussion is not essential to my answer, but I added it in part to answer some comments and motivate the main body of the answer above. Basically, the rest of my answer is like the french fries that come with a burger meal; you can skip it.

(Ir)relevant Case: Polynomial regression

Let's look at a toy example of polynomial regression. It is also a pretty good approximator for many functions. We'll look at the $\sin(x)$ function in the $x\in(-3,3)$ region. As you can see from its Taylor series below, a 7th-order expansion is already a pretty good fit, so we can expect that a polynomial of order 7+ should be a very good fit too. Next, we're going to fit polynomials of progressively higher order to a small, very noisy data set with 7 observations. We can observe what we've been told about polynomials by many people in the know: they're unstable, and start to oscillate wildly as the order of the polynomial increases. However, the problem is not the polynomials themselves. The problem is the noise. When we fit polynomials to noisy data, part of the fit is to the noise, not to the signal. Here are the same exact polynomials fit to the same data set but with the noise completely removed. The fits are great! Notice a visually perfect fit for order 6. This shouldn't be surprising, since 7 observations is all we need to uniquely identify an order-6 polynomial, and we saw from the Taylor approximation plot above that order 6 is already a very good approximation to $\sin(x)$ in our data range. Also notice that higher order polynomials do not fit as well as order 6, because there are not enough observations to define them. So, let's look at what happens with 100 observations. On the chart below you can see how a larger data set allows us to fit higher order polynomials, thus accomplishing a better fit! Great, but the problem is that we usually deal with noisy data. Look at what happens if you fit the same polynomials to 100 observations of very noisy data (see the chart below). We're back to square one: higher order polynomials produce horrible oscillating fits. So, increasing the data set didn't help that much in increasing the complexity of the model to better explain the data. This is, again, because the complex model is fitting better not only to the shape of the signal, but to the shape of the noise too. Finally, let's try some lame regularization on this problem. The chart below shows regularization (with different penalties) applied to an order-9 polynomial regression. Compare this to the order (power) 9 polynomial fit above: at an appropriate level of regularization it is possible to fit higher order polynomials to noisy data.
Just in case it wasn't clear: I'm not suggesting using polynomial regression this way. Polynomials are good for local fits, so a piece-wise polynomial can be a good choice. Fitting the entire domain with them is often a bad idea, because they are sensitive to noise, as should indeed be evident from the plots above. Whether the noise is numerical or from some other source is not that important in this context. The noise is noise, and polynomials will react to it passionately.
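A compact sketch of the experiment described above (the noise level, penalties and seed are made up): fit a degree-9 polynomial to a few noisy sin(x) points with and without an L2 penalty, and compare each fitted curve's error against the clean signal.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(5)
x = np.linspace(-3, 3, 7).reshape(-1, 1)          # 7 noisy observations
y = np.sin(x).ravel() + rng.normal(scale=0.5, size=7)

x_grid = np.linspace(-3, 3, 200).reshape(-1, 1)
truth = np.sin(x_grid).ravel()

for name, reg in [("no penalty", LinearRegression()),
                  ("ridge 1e-3", Ridge(alpha=1e-3)),
                  ("ridge 1e-1", Ridge(alpha=1e-1))]:
    model = make_pipeline(PolynomialFeatures(degree=9), reg)
    model.fit(x, y)
    err = np.mean((model.predict(x_grid) - truth) ** 2)
    print(f"{name}: mean squared error vs clean sin(x) = {err:.3f}")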
{ "source": [ "https://stats.stackexchange.com/questions/345737", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/89649/" ] }
346,327
I know from previous studies that $Var(A+B) = Var(A) + Var(B) + 2 Cov (A,B)$ However, I don't understand why that is. I can see that the effect will be to 'push up' the variance when A and B covary highly. It makes sense that when you create a composite from two highly correlated variables you will tend to be adding the high observations from A with the high observations from B, and the low observations from A with the low observations from B. This will tend to create extreme high and low values in the composite variable, increasing the variance of the composite. But why does it work to multiply the covariance by exactly 2?
Simple answer: The variance involves a square: $$Var(X) = E[(X - E[X])^2]$$ So, your question boils down to the factor 2 in the square identity: $$(a+b)^2 = a^2 + b^2 + 2ab$$ This can be understood visually as a decomposition of the area of a square of side $(a+b)$ into the area of the smaller squares of sides $a$ and $b$, in addition to two rectangles of sides $a$ and $b$. More involved answer: If you want a mathematically more involved answer, the covariance is a bilinear form, meaning that it is linear in both its first and second arguments. This leads to: $$\begin{aligned} Var(A+B) &= Cov(A+B, A+B) \\ &= Cov(A, A+B) + Cov(B, A+B) \\ &= Cov(A,A) + Cov(A,B) + Cov(B,A) + Cov(B,B) \\ &= Var(A) + 2 Cov(A,B) + Var(B) \end{aligned}$$ In the last line, I used the fact that the covariance is symmetrical: $$Cov(A,B) = Cov(B,A)$$ To sum up: It is two because you have to account for both $cov(A,B)$ and $cov(B,A)$.
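A quick numerical check of the identity on simulated data (the numbers are arbitrary); using population-style (1/N) variance and covariance makes the two sides agree exactly up to floating-point error.

import numpy as np

rng = np.random.default_rng(6)
A = rng.normal(size=100_000)
B = 0.6 * A + rng.normal(scale=0.8, size=100_000)   # B correlated with A

lhs = np.var(A + B)
rhs = np.var(A) + np.var(B) + 2 * np.cov(A, B, bias=True)[0, 1]
print(lhs, rhs)   # equal up to floating-point error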
{ "source": [ "https://stats.stackexchange.com/questions/346327", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/9162/" ] }
347,431
The question might sound a bit strange because I am new to statistical inference and neural networks. In classification problems using neural networks, we say that we want to learn a function $f^*$ that maps the space of the inputs $x$ to the space of the outputs $y$: $$f^*(x; \theta) = y$$ Are we fitting the parameters ($\theta$) to model a non-linear function, or to model a probability density function? I don't really know how to phrase the question any better. I have read both things several times (probability density function, or just a function plain and simple), hence my confusion.
Strictly speaking, neural networks are fitting a non-linear function. They can be interpreted as fitting a probability density function if suitable activation functions are chosen and certain conditions are respected (Values must be positive and $\leq$ 1, etc...). But that is a question of how you choose to interpret their output, not of what they are actually doing. Under the hood, they are still non-linear function estimators, which you are choosing to apply to the specific problem of PDF estimation.
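As a tiny illustration of the interpretation point (purely illustrative, arbitrary numbers): with a softmax output layer, the network's raw non-linear outputs are merely squashed into non-negative values that sum to one, which is what licenses reading them as class probabilities.

import numpy as np

def softmax(logits):
    z = logits - logits.max()          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.array([2.3, -1.0, 0.4])    # arbitrary raw network outputs
probs = softmax(logits)
print(probs, probs.sum())              # non-negative values that sum to 1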
{ "source": [ "https://stats.stackexchange.com/questions/347431", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/202471/" ] }
347,530
Whenever regularization is used, it is often added onto the cost function, as in the following cost function. $$ J(\theta)=\frac 1 2(y-\theta X^T)(y-\theta X^T)^T+\alpha\|\theta\|_2^2 $$ This makes intuitive sense to me, since minimizing the cost function means minimizing the error (the left term) and minimizing the magnitudes of the coefficients (the right term) at the same time (or at least balancing the two minimizations). My question is: why is this regularization term $\alpha\|\theta\|_2^2$ added onto the original cost function and not multiplied, or something else which keeps the spirit of the motivation behind the idea of regularization? Is it because simply adding the term on is sufficiently simple and enables us to solve this analytically, or is there some deeper reason?
It has quite a nice intuition in the Bayesian framework. Consider that the regularized cost function $J$ has a similar role as the probability of a parameter configuration $\theta$ given the observations $X, y$. Applying the Bayes theorem, we get: $$P(\theta|X,y) = \frac{P(X,y|\theta)P(\theta)}{P(X,y)}.$$ Taking the log of the expression gives us: $$\log P(\theta|X,y) = \log P(X,y|\theta) + \log P(\theta) - \log P(X,y).$$ Now, let's say $J(\theta)$ is the negative 1 log-posterior, $-\log P(\theta|X,y)$. Since the last term does not depend on $\theta$, we can omit it without changing the minimum. You are left with two terms: 1) the likelihood term $\log P(X,y|\theta)$ depending on $X$ and $y$, and 2) the prior term $ \log P(\theta)$ depending on $\theta$ only. These two terms correspond exactly to the data term and the regularization term in your formula. You can go even further and show that the loss function which you posted corresponds exactly to the following model: $$P(X,y|\theta) = \mathcal{N}(y|\theta X, \sigma_1^2),$$ $$P(\theta) = \mathcal{N}(\theta | 0, \sigma_2^2),$$ where parameters $\theta$ come from a zero-mean Gaussian distribution and the observations $y$ have zero-mean Gaussian noise. For more details see this answer . 1 Negative since you want to maximize the probability but minimize the cost.
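A quick numerical check of that correspondence for the linear-Gaussian case (dimensions, variances and seed are arbitrary): with the 1/2 in front of the data term, the penalty weight implied by the two variances is α = σ₁²/(2σ₂²), and the minimiser of the penalised cost equals the posterior mean (which is also the mode) of the Gaussian model above.

import numpy as np

rng = np.random.default_rng(7)
n, p = 50, 4
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + rng.normal(size=n)

sigma1, sigma2 = 1.3, 0.7               # noise sd and prior sd (arbitrary)
alpha = sigma1**2 / (2 * sigma2**2)     # penalty weight implied by the two variances

# Minimiser of 0.5 * ||y - X theta||^2 + alpha * ||theta||^2
theta_penalised = np.linalg.solve(X.T @ X + 2 * alpha * np.eye(p), X.T @ y)

# Posterior mean (= mode) of: y ~ N(X theta, sigma1^2 I), theta ~ N(0, sigma2^2 I)
precision = X.T @ X / sigma1**2 + np.eye(p) / sigma2**2
theta_map = np.linalg.solve(precision, X.T @ y / sigma1**2)

print(np.allclose(theta_penalised, theta_map))   # True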
{ "source": [ "https://stats.stackexchange.com/questions/347530", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/208638/" ] }
347,727
The following excerpt is from the entry, What are the differences between one-tailed and two-tailed tests? , on UCLA's statistics help site. ... consider the consequences of missing an effect in the other direction. Imagine you have developed a new drug that you believe is an improvement over an existing drug. You wish to maximize your ability to detect the improvement, so you opt for a one-tailed test. In doing so, you fail to test for the possibility that the new drug is less effective than the existing drug. After learning the absolute basics of hypothesis testing and getting to the part about one- vs. two-tailed tests... I understand the basic math and the increased detection ability of one-tailed tests, etc... But I just can't wrap my head around one thing: What's the point? I'm really failing to understand why you should split your alpha between the two extremes when your sample result can only be in one or the other, or neither. Take the example scenario from the quoted text above. How could you possibly "fail to test" for a result in the opposite direction? You have your sample mean. You have your population mean. Simple arithmetic tells you which is higher. What is there to test, or fail to test, in the opposite direction? What's stopping you just starting from scratch with the opposite hypothesis if you clearly see that the sample mean is way off in the other direction? Another quote from the same page: Choosing a one-tailed test after running a two-tailed test that failed to reject the null hypothesis is not appropriate, no matter how "close" to significant the two-tailed test was. I assume this also applies to switching the polarity of your one-tailed test. But how is this "doctored" result any less valid than if you had simply chosen the correct one-tailed test in the first place? Clearly I am missing a big part of the picture here. It all just seems too arbitrary. Which it is, I guess, in the sense that what denotes "statistically significant" - 95%, 99%, 99.9%... - is arbitrary to begin with.
Think of the data as the tip of the iceberg – all you can see above the water is the tip of the iceberg but in reality you are interested in learning something about the entire iceberg. Statisticians, data scientists and others working with data are careful to not let what they see above the water line influence and bias their assessment of what's hidden below the water line. For this reason, in a hypothesis testing situation, they tend to formulate their null and alternative hypotheses before they see the tip of the iceberg, based on their expectations (or lack thereof) of what might happen if they could view the iceberg in its entirety. Looking at the data to formulate your hypotheses is a poor practice and should be avoided – it's like putting the cart before the horse. Recall that the data come from a single sample selected (hopefully using a random selection mechanism) from the target population/universe of interest. The sample has its own idiosyncrasies, which may or may not be reflective of the underlying population. Why would you want your hypotheses to reflect a narrow slice of the population instead of the entire population? Another way to think about this is that, every time you select a sample from your target population (using a random selection mechanism), the sample will yield different data. If you use the data (which you shouldn't!!!) to guide your specification of the null and alternative hypotheses, your hypotheses will be all over the map, essentially driven by the idiosyncratic features of each sample. Of course, in practice we only draw one sample, but it would be a very disquieting thought to know that if someone else performed the same study with a different sample of the same size, they would have to change their hypotheses to reflect the realities of their sample. One of my graduate school professors used to have a very wise saying: "We don't care about the sample, except that it tells us something about the population" . We want to formulate our hypotheses to learn something about the target population, not about the one sample we happened to select from that population.
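The cost of formulating the hypothesis after peeking at the data can be made concrete with a small simulation (my own sketch in Python, not part of the original answer). Under a true null, picking the direction of a one-tailed test to match the observed sample mean roughly doubles the type I error rate, which is exactly why the direction must be fixed before the data are seen.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, n, reps = 0.05, 30, 20000
rejections = 0
for _ in range(reps):
    x = rng.normal(loc=0.0, scale=1.0, size=n)       # H0 is true
    t = x.mean() / (x.std(ddof=1) / np.sqrt(n))
    # "peek" at the sign of the effect, then run the matching one-tailed test
    p = 1 - stats.t.cdf(t, df=n - 1) if t > 0 else stats.t.cdf(t, df=n - 1)
    rejections += (p < alpha)
print(rejections / reps)   # close to 0.10, not the nominal 0.05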
{ "source": [ "https://stats.stackexchange.com/questions/347727", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/208720/" ] }
348,494
I have read the most popular books in statistical learning 1- The elements of statistical learning. 2- An introduction to statistical learning . Both mention that ridge regression has two formulas that are equivalent. Is there an understandable mathematical proof of this result? I also went through Cross Validated , but I can not find a definite proof there. Furthermore, will LASSO enjoy the same type of proof?
The classic Ridge Regression ( Tikhonov Regularization ) is given by: $$ \arg \min_{x} \frac{1}{2} {\left\| x - y \right\|}_{2}^{2} + \lambda {\left\| x \right\|}_{2}^{2} $$ The claim above is that the following problem is equivalent: $$\begin{align*} \arg \min_{x} \quad & \frac{1}{2} {\left\| x - y \right\|}_{2}^{2} \\ \text{subject to} \quad & {\left\| x \right\|}_{2}^{2} \leq t \end{align*}$$ Let's define $ \hat{x} $ as the optimal solution of the first problem and $ \tilde{x} $ as the optimal solution of the second problem. The claim of equivalence means that $ \forall t, \: \exists \lambda \geq 0 : \hat{x} = \tilde{x} $ . Namely you can always find a pair of $ t $ and $ \lambda \geq 0 $ such that the solution of the two problems is the same. How could we find a pair? Well, by solving the problems and looking at the properties of the solution. Both problems are convex and smooth, which should make things simpler. The solution for the first problem is given at the point where the gradient vanishes, which means: $$ \hat{x} - y + 2 \lambda \hat{x} = 0 $$ The KKT conditions of the second problem state: $$ \tilde{x} - y + 2 \mu \tilde{x} = 0 $$ and $$ \mu \left( {\left\| \tilde{x} \right\|}_{2}^{2} - t \right) = 0 $$ The last equation suggests that either $ \mu = 0 $ or $ {\left\| \tilde{x} \right\|}_{2}^{2} = t $ . Notice that the two stationarity equations have the same form; namely, if $ \hat{x} = \tilde{x} $ and $ \mu = \lambda $ both equations hold. So it means that in case $ {\left\| y \right\|}_{2}^{2} \leq t $ one must set $ \mu = 0 $ , which means that for $ t $ large enough, in order for both to be equivalent, one must set $ \lambda = 0 $ . In the other case one should find $ \mu $ where: $$ {y}^{T} \left( I + 2 \mu I \right)^{-1} \left( I + 2 \mu I \right)^{-1} y = t $$ This is basically when $ {\left\| \tilde{x} \right\|}_{2}^{2} = t $ . Once you find that $ \mu $ , the solutions coincide. Regarding the $ {L}_{1} $ (LASSO) case, well, it works with the same idea. The only difference is that we don't have a closed-form solution, hence deriving the connection is trickier. Have a look at my answer at StackExchange Cross Validated Q291962 and StackExchange Signal Processing Q21730 - Significance of $ \lambda $ in Basis Pursuit . Remark What's actually happening? In both problems, $ x $ tries to be as close as possible to $ y $ . In the first case, $ x = y $ makes the first term (the $ {L}_{2} $ distance) vanish, and in the second case it makes the objective function vanish. The difference is that in the first case one must balance the $ {L}_{2} $ norm of $ x $ . As $ \lambda $ gets higher, the balance means you should make $ x $ smaller. In the second case there is a wall: you bring $ x $ closer and closer to $ y $ until you hit the wall, which is the constraint on its norm (set by $ t $ ). If the wall is far enough away (a high value of $ t $ , where "far enough" depends on the norm of $ y $ ), then it has no effect, just as $ \lambda $ matters only once its value, relative to the norm of $ y $ , starts to be meaningful. The exact connection is given by the Lagrangian stated above. Resources I found this paper today (03/04/2019): Approximation Hardness for A Class of Sparse Optimization Problems .
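A quick numerical sketch of the equivalence (my own illustration, using the same simple "x close to y" form as above, with no design matrix): solve the penalized problem for some $\lambda$, set $t$ equal to the squared norm of that solution, and check that the constrained solution coincides.

import numpy as np

y = np.array([3.0, -1.0, 2.0])
lam = 0.8

x_pen = y / (1 + 2 * lam)              # closed-form penalized solution
t = np.sum(x_pen**2)                   # matching constraint level

# Constrained solution: project y onto the ball {x : ||x||^2 <= t}
norm_y = np.linalg.norm(y)
x_con = y if norm_y**2 <= t else y * np.sqrt(t) / norm_y

print(np.allclose(x_pen, x_con))       # True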
{ "source": [ "https://stats.stackexchange.com/questions/348494", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/120788/" ] }
348,937
Some sources say likelihood function is not conditional probability, some say it is. This is very confusing to me. According to most sources I have seen, the likelihood of a distribution with parameter $\theta$, should be a product of probability mass functions given $n$ samples of $x_i$: $$L(\theta) = L(x_1,x_2,...,x_n;\theta) = \prod_{i=1}^n p(x_i;\theta)$$ For example in Logistic Regression, we use an optimization algorithm to maximize the likelihood function (Maximum Likelihood Estimation) to obtain the optimal parameters and therefore the final LR model. Given the $n$ training samples, which we assume to be independent from each other, we want to maximize the product of probabilities (or the joint probability mass functions). This seems quite obvious to me. According to Relation between: Likelihood, conditional probability and failure rate , "likelihood is not a probability and it is not a conditional probability". It also mentioned, "likelihood is a conditional probability only in Bayesian understanding of likelihood, i.e., if you assume that $\theta$ is a random variable." I read about the different perspectives of treating a learning problem between frequentist and Bayesian. According to a source, for Bayesian inference, we have a priori $P(\theta)$, likelihood $P(X|\theta)$, and we want to obtain the posterior $P(\theta|X)$, using Bayesian theorem: $$P(\theta|X)=\dfrac{P(X|\theta) \times P(\theta)}{P(X)}$$ I'm not familiar with Bayesian Inference. How come $P(X|\theta)$ which is the distribution of the observed data conditional on its parameters, is also termed the likelihood? In Wikipedia , it says sometimes it is written $L(\theta|X)=p(X|\theta)$. What does this mean? is there a difference between Frequentist and Bayesian's definitions on likelihood?? Thanks. EDIT: There are different ways of interpreting Bayes' theorem - Bayesian interpretation and Frequentist interpretation (See: Bayes' theorem - Wikipedia ).
There is no difference in the definition - in both cases, the likelihood function is any function of the parameter that is proportional to the sampling density. Strictly speaking we do not require that the likelihood be equal to the sampling density; it need only be proportional, which allows removal of multiplicative parts that do not depend on the parameters. Whereas the sampling density is interpreted as a function of the data, conditional on a specified value of the parameter, the likelihood function is interpreted as a function of the parameter for a fixed data vector. So in the standard case of IID data you have: $$L_\mathbf{x}(\theta) \propto \prod_{i=1}^n p(x_i|\theta).$$ In Bayesian statistics, we usually express Bayes' theorem in its simplest form as: $$\pi (\theta|\mathbf{x}) \propto \pi(\theta) \cdot L_\mathbf{x}(\theta).$$ This expression for Bayes' theorem stresses that both of its multiplicative elements are functions of the parameter, which is the object of interest in the posterior density. (This proportionality result fully defines the rule, since the posterior is a density, and so there is a unique multiplying constant that makes it integrate to one.) As you point out in your update, Bayesian and frequentist philosophy have different interpretive structures. Within the frequentist paradigm the parameter is generally treated as a "fixed constant" and so it is not ascribed a probability measure. Frequentists therefore reject the ascription of a prior or posterior distribution to the parameter (for more discussion on these philosophic and interpretive differences, see e.g., O'Neill 2009 ).
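A small Python sketch (added for illustration, with made-up data) of the "same function, two readings" point: the same normal density $p(x|\theta)$ is evaluated at fixed data over a grid of parameter values, which is the likelihood reading, and its maximizer lands at the sample mean as expected.

import numpy as np
from scipy import stats

x_obs = np.array([1.2, 0.7, 2.1, 1.5])          # fixed data vector

def log_likelihood(theta, x=x_obs):
    # product of densities -> sum of log densities, viewed as a function of theta
    return np.sum(stats.norm.logpdf(x, loc=theta, scale=1.0))

thetas = np.linspace(-1, 3, 201)
ll = np.array([log_likelihood(t) for t in thetas])
print(thetas[np.argmax(ll)], x_obs.mean())       # grid maximizer ~ sample mean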
{ "source": [ "https://stats.stackexchange.com/questions/348937", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/14144/" ] }
348,943
I am trying to write a bespoke learning curve function. I was wondering how it is usually implemented. When the size of the training set is increased - is it normally increased by adding new samples to the already existing set incrementally? Or is the training subset selected randomly each time? To give an example: Suppose the train-set size ratios are [0.2, 0.3, 0.4 ...], then when we go from 0.2 to 0.3, do we add an extra 0.1 on top of what existed before (0.2) incrementally? Or do we just get another random sample from the full set?
{ "source": [ "https://stats.stackexchange.com/questions/348943", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/132059/" ] }
348,952
In a sample size of 100, we identified the existence of two attributes A and B . Our goal is to assess whether there is any association between these two attributes. The data look like the following (total sample size = 100):

                    A
              Present   Absent
B   Present     x1        x2
    Absent      x3        x4

Since only the total sample size was fixed here, we conducted "Boschloo's exact test with a multinomial model". Attribute A can be divided into two parts, pathogenic A and non-pathogenic A . Now, with the same sample of 100 , we test whether there is any association between attribute B and pathogenic A . Since here the margin of attribute B is fixed, we conducted "Boschloo's exact test with a binomial model". Again, we assessed whether there is any association between attribute B and non-pathogenic A . Here we also used "Boschloo's exact test with a binomial model" as a test procedure. My question: In the same study, we are conducting 3 different inferences with the same sample of 100 . Is it valid to perform several tests to draw conclusions for several inferences with the same sample (data)?
{ "source": [ "https://stats.stackexchange.com/questions/348952", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/81411/" ] }
348,972
Wikipedia says - In probability theory, the central limit theorem (CLT) establishes that, in most situations , when independent random variables are added, their properly normalized sum tends toward a normal distribution (informally a "bell curve") even if the original variables themselves are not normally distributed... When it says "in most situations", in which situations does the central limit theorem not work?
To understand this, you need to first state a version of the Central Limit Theorem. Here's the "typical" statement of the central limit theorem: Lindeberg–Lévy CLT. Suppose ${X_1, X_2, \dots}$ is a sequence of i.i.d. random variables with $E[X_i] = \mu$ and $Var[X_i] = \sigma^2 < \infty$. Let $S_{n}:={\frac {X_{1}+\cdots +X_{n}}{n}}$. Then as $n$ approaches infinity, the random variables $\sqrt{n}(S_n − \mu)$ converge in distribution to a normal $N(0,\sigma^2)$ i.e. $${\displaystyle {\sqrt {n}}\left(\left({\frac {1}{n}}\sum _{i=1}^{n}X_{i}\right)-\mu \right)\ {\xrightarrow {d}}\ N\left(0,\sigma ^{2}\right).}$$ So, how does this differ from the informal description, and what are the gaps? There are several differences between your informal description and this description, some of which have been discussed in other answers, but not completely. So, we can turn this into three specific questions: What happens if the variables are not identically distributed? What if the variables have infinite variance, or infinite mean? How important is independence? Taking these one at a time: Not identically distributed. The best general results are the Lindeberg and Lyapunov versions of the central limit theorem. Basically, as long as the standard deviations don't grow too wildly, you can get a decent central limit theorem out of it. Lyapunov CLT. Suppose ${X_1, X_2, \dots}$ is a sequence of independent random variables, each with finite expected value $\mu_i$ and variance $\sigma_i^2$. Define: $s_{n}^{2}=\sum _{i=1}^{n}\sigma _{i}^{2}$ If for some $\delta > 0$, Lyapunov’s condition ${\displaystyle \lim _{n\to \infty }{\frac {1}{s_{n}^{2+\delta }}}\sum_{i=1}^{n}\operatorname {E} \left[|X_{i}-\mu _{i}|^{2+\delta }\right]=0}$ is satisfied, then the normalized sum converges in distribution to a standard normal random variable as $n$ goes to infinity: ${{\frac {1}{s_{n}}}\sum _{i=1}^{n}\left(X_{i}-\mu_{i}\right)\ {\xrightarrow {d}}\ N(0,1).}$ Infinite Variance Theorems similar to the central limit theorem exist for variables with infinite variance, but the conditions are significantly more narrow than for the usual central limit theorem. Essentially the tail of the probability distribution must be asymptotic to $|x|^{-\alpha-1}$ for $0 < \alpha < 2$. In this case, appropriately scaled sums converge to a Lévy alpha-stable distribution. Importance of Independence There are many different central limit theorems for non-independent sequences of $X_i$. They are all highly contextual. As Batman points out, there's one for martingales. This question is an ongoing area of research, with many, many different variations depending upon the specific context of interest. This Question on Math Exchange is another post related to this question.
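A small simulation sketch (my own, not part of the original answer) of the infinite-variance case: standardized means of Uniform(0,1) draws concentrate as the classical CLT predicts, while means of standard Cauchy draws (undefined mean, infinite variance) stay as heavy-tailed as a single Cauchy observation no matter how large $n$ gets.

import numpy as np

rng = np.random.default_rng(2)
n, reps = 1000, 5000

unif_means = rng.uniform(size=(reps, n)).mean(axis=1)
cauchy_means = rng.standard_cauchy(size=(reps, n)).mean(axis=1)

# Quick diagnostic: the spread of uniform means shrinks like 1/sqrt(n),
# while the Cauchy means keep their heavy tails.
print(np.std(unif_means))                       # ~ 0.009 (= sqrt(1/12)/sqrt(1000))
print(np.percentile(np.abs(cauchy_means), 99))  # still in the tens, heavy tails remain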
{ "source": [ "https://stats.stackexchange.com/questions/348972", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/210042/" ] }
349,155
Particularly in the context of kaggle competitions I have noticed that model's performance is all about feature selection / engineering. While I can fully understand why that is in the case when dealing with the more conventional / old-school ML algorithms, I don't see why this would be the case when using deep neural networks. Citing the Deep Learning book: Deep learning solves this central problem in representation learning by introducing representations that are expressed in terms of other, simpler representations. Deep learning enables the computer to build complex concepts out of simpler concepts. Hence I always thought that if "information is in the data", a sufficiently deep, well-parameterised neural network would pick up the right features given sufficient training time.
What if the "sufficiently deep" network is intractably huge, either making model training too expensive (AWS fees add up!) or because you need to deploy the network in a resource-constrained environment? How can you know, a priori that the network is well-parameterized? It can take a lot of experimentation to find a network that works well. What if the data you're working with is not "friendly" to standard analysis methods, such as a binary string comprising thousands or millions of bits, where each sequence has a different length? What if you're interested in user-level data, but you're forced to work with a database that only collects transaction-level data? Suppose your data are the form of integers such as $12, 32, 486, 7$, and your task is to predict the sum of the digits, so the target in this example is $3, 5, 18, 7$. It's dirt simple to parse each digit into an array and then sum the array ("feature engineering") but challenging otherwise. We would like to live in a world where data analysis is "turnkey," but these kinds of solutions usually only exist in special instances. Lots of work went into developing deep CNNs for image classification - prior work had a step that transformed each image into a fixed-length vector. Feature engineering lets the practitioner directly transform knowledge about the problem into a fixed-length vector amenable to feed-forward networks. Feature selection can solve the problem of including so many irrelevant features that any signal is lost, as well as dramatically reducing the number of parameters to the model.
{ "source": [ "https://stats.stackexchange.com/questions/349155", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/148881/" ] }
349,850
I am creating a graph to show trends in death rates (per 1000 ppl.) in different countries, and the story that should come from the plot is that Germany (light blue line) is the only one whose trend is increasing after 1932. This is my first (basic) try. In my opinion, this graph is already showing what we want it to tell, but it is not super intuitive. Do you have any suggestion to make that distinction among trends clearer? I was thinking of plotting growth rates, but I tried that and it is not much better. The data are the following:

year   de    fr    be    nl    den   ch    aut   cz    pl
1927   10.9  16.5  13    10.2  11.6  12.4  15    16    17.3
1928   11.2  16.4  12.8  9.6   11    12    14.5  15.1  16.4
1929   11.4  17.9  14.4  10.7  11.2  12.5  14.6  15.5  16.7
1930   10.4  15.6  12.8  9.1   10.8  11.6  13.5  14.2  15.6
1931   10.4  16.2  12.7  9.6   11.4  12.1  14    14.4  15.5
1932   10.2  15.8  12.7  9     11    12.2  13.9  14.1  15
1933   10.8  15.8  12.7  8.8   10.6  11.4  13.2  13.7  14.2
1934   10.6  15.1  11.7  8.4   10.4  11.3  12.7  13.2  14.4
1935   11.4  15.7  12.3  8.7   11.1  12.1  13.7  13.5  14
1936   11.7  15.3  12.2  8.7   11    11.4  13.2  13.3  14.2
1937   11.5  15    12.5  8.8   10.8  11.3  13.3  13.3  14
Sometimes less is more. With less detail about the year-to-year variations and the country distinctions you can provide more information about the trends. Since the other countries are moving mostly together you can get by without separate colors. In using a smoother you're requiring the reader to trust that you haven't smoothed over any interesting variation. Update after getting a couple requests for code: I made this in JMP's interactive Graph Builder. The JMP script is:

Graph Builder(
  Size( 528, 456 ),
  Show Control Panel( 0 ),
  Show Legend( 0 ),
  // variable role assignments:
  Variables( X( :year ), Y( :Deaths ), Overlay( :Country ) ),
  // spline smoother:
  Elements( Smoother( X, Y, Legend( 3 ) ) ),
  // customizations:
  SendToReport(
    // x scale, leaving room for annotations
    Dispatch( {}, "year", ScaleBox,
      {Min( 1926.5 ), Max( 1937.9 ), Inc( 2 ), Minor Ticks( 1 )} ),
    // customize colors and DE line width
    Dispatch( {}, "400", ScaleBox,
      {Legend Model( 3,
        Properties( 0, {Line Color( "gray" )}, Item ID( "aut", 1 ) ),
        Properties( 1, {Line Color( "gray" )}, Item ID( "be", 1 ) ),
        Properties( 2, {Line Color( "gray" )}, Item ID( "ch", 1 ) ),
        Properties( 3, {Line Color( "gray" )}, Item ID( "cz", 1 ) ),
        Properties( 4, {Line Color( "gray" )}, Item ID( "den", 1 ) ),
        Properties( 5, {Line Color( "gray" )}, Item ID( "fr", 1 ) ),
        Properties( 6, {Line Color( "gray" )}, Item ID( "nl", 1 ) ),
        Properties( 7, {Line Color( "gray" )}, Item ID( "pl", 1 ) ),
        Properties( 8, {Line Color( "dark red" ), Line Width( 3 )}, Item ID( "de", 1 ) )
      )}),
    // add line annotations (omitted)
));
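For readers without JMP, a rough matplotlib equivalent of the same "grey background, one highlighted series" idea is sketched below (my own code, not the answerer's; it uses the data posted in the question, without the spline smoothing or annotations).

import matplotlib.pyplot as plt
import pandas as pd

df = pd.DataFrame({
    "year": range(1927, 1938),
    "de":  [10.9, 11.2, 11.4, 10.4, 10.4, 10.2, 10.8, 10.6, 11.4, 11.7, 11.5],
    "fr":  [16.5, 16.4, 17.9, 15.6, 16.2, 15.8, 15.8, 15.1, 15.7, 15.3, 15.0],
    "be":  [13.0, 12.8, 14.4, 12.8, 12.7, 12.7, 12.7, 11.7, 12.3, 12.2, 12.5],
    "nl":  [10.2,  9.6, 10.7,  9.1,  9.6,  9.0,  8.8,  8.4,  8.7,  8.7,  8.8],
    "den": [11.6, 11.0, 11.2, 10.8, 11.4, 11.0, 10.6, 10.4, 11.1, 11.0, 10.8],
    "ch":  [12.4, 12.0, 12.5, 11.6, 12.1, 12.2, 11.4, 11.3, 12.1, 11.4, 11.3],
    "aut": [15.0, 14.5, 14.6, 13.5, 14.0, 13.9, 13.2, 12.7, 13.7, 13.2, 13.3],
    "cz":  [16.0, 15.1, 15.5, 14.2, 14.4, 14.1, 13.7, 13.2, 13.5, 13.3, 13.3],
    "pl":  [17.3, 16.4, 16.7, 15.6, 15.5, 15.0, 14.2, 14.4, 14.0, 14.2, 14.0],
})

fig, ax = plt.subplots()
for country in df.columns.drop(["year", "de"]):
    ax.plot(df["year"], df[country], color="lightgray")    # de-emphasized context lines
ax.plot(df["year"], df["de"], color="darkred", linewidth=3, label="Germany")
ax.set_xlabel("year"); ax.set_ylabel("deaths per 1000"); ax.legend()
plt.show()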
{ "source": [ "https://stats.stackexchange.com/questions/349850", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/77446/" ] }
350,211
When having real-valued entries (e.g. floats between 0 and 1 as a normalized representation for greyscale values from 0 to 256) in our label vector, I always thought that we use MSE (R2-loss) if we want to measure the distance/error between input and output, or in general between input and label of the network. On the other hand, I also always thought that binary cross-entropy is only used when we try to predict probabilities and the ground truth label entries are actual probabilities. Now when working with the mnist dataset loaded via tensorflow like so:

from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

Each entry is a float32 and ranges between 0 and 1. The tensorflow tutorial for autoencoder uses R2-loss/MSE-loss for measuring the reconstruction loss. Whereas the tensorflow tutorial for variational autoencoder uses binary cross-entropy for measuring the reconstruction loss. Can someone please tell me WHY, based on the same dataset with same values (they are all numerical values which in effect represent pixel values), they use R2-loss/MSE-loss for the autoencoder and Binary-Cross-Entropy loss for the variational autoencoder. I think it is needless to say that both loss functions are applied on sigmoid outputs.
I don't believe there's some kind of deep, meaningful rationale at play here - it's a showcase example running on MNIST, it's pretty error-tolerant. Optimizing for MSE means your generated output intensities are symmetrically close to the input intensities. A higher-than-training intensity is penalized by the same amount as an equally valued lower intensity. Cross-entropy loss is asymmetrical. If your true intensity is high, e.g. 0.8, generating a pixel with the intensity of 0.9 is penalized more than generating a pixel with intensity of 0.7. Conversely if it's low, e.g. 0.3, predicting an intensity of 0.4 is penalized less than a predicted intensity of 0.2. You might have guessed by now - cross-entropy loss is biased towards 0.5 whenever the ground truth is not binary. For a ground truth of 0.5, the per-pixel zero-normalized loss is equal to 2*MSE. This is quite obviously wrong! The end result is that you're training the network to always generate images that are blurrier than the inputs. You're actively penalizing any result that would enhance the output sharpness more than those that make it worse! MSE is not immune to this behavior either, but at least it's just unbiased and not biased in the completely wrong direction . However, before you run off to write a loss function with the opposite bias - just keep in mind pushing outputs away from 0.5 will in turn mean the decoded images will have very hard, pixelated edges. That is - or at least I very strongly suspect is - why adversarial methods yield better results - the adversarial component is essentially a trainable, 'smart' loss function for the (possibly variational) autoencoder.
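The asymmetry is easy to check numerically (my own sketch, not part of the original answer): for a non-binary ground truth of 0.8, over- and under-shooting by the same amount get different per-pixel binary cross-entropy penalties, while the squared error treats them identically.

import numpy as np

def bce(target, pred):
    return -(target * np.log(pred) + (1 - target) * np.log(1 - pred))

t = 0.8
for p in (0.7, 0.8, 0.9):
    print(p, round(float(bce(t, p)), 4), round((t - p) ** 2, 4))
# bce(0.8, 0.9) > bce(0.8, 0.7): predicting 0.9 is penalized more than 0.7,
# exactly as described above, whereas the squared errors for the two are equal.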
{ "source": [ "https://stats.stackexchange.com/questions/350211", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/210764/" ] }
350,220
I often see people create new features based on existing features on a machine learning problem. For example, here : https://triangleinequality.wordpress.com/2013/09/08/basic-feature-engineering-with-the-titanic-data/ people have considered the size of a person’s family as a new feature, based on the number of brothers, sisters, and parents, which were existing features. But what is the point of this? I don't understand why the creation of correlated new features is useful. Isn't it the job of the algorithm to do that on its own?
The simplest example used to illustrate this is the XOR problem. Imagine that you are given data consisting of $x$ and $y$ coordinates and the binary class to predict. You could expect your machine learning algorithm to find out the correct decision boundary by itself, but if you generated the additional feature $z=xy$ , then the problem becomes trivial, as $z>0$ gives you a nearly perfect decision criterion for classification, and you used just simple arithmetic! So while in many cases you could expect the algorithm to find the solution, alternatively, by feature engineering you could simplify the problem. Simple problems are easier and faster to solve, and need less complicated algorithms. Simple algorithms are often more robust, the results are often more interpretable, they are more scalable (less computational resources, time to train, etc.), and portable. You can find more examples and explanations in the wonderful talk by Vincent D. Warmerdam, given at the PyData conference in London . Moreover, don't believe everything the machine learning marketers tell you. In most cases, the algorithms won't "learn by themselves". You usually have limited time, resources, computational power, and the data usually has a limited size and is noisy; neither of these helps. Taking this to the extreme, you could provide your data as photos of handwritten notes of the experiment result and pass them to a complicated neural network. It would first learn to recognize the data in the pictures, then learn to understand it, and make predictions. To do so, you would need a powerful computer and lots of time for training and tuning the model, and you would need huge amounts of data because of using a complicated neural network. Providing the data in a computer-readable format (as tables of numbers) simplifies the problem tremendously, since you don't need all the character recognition. You can think of feature engineering as a next step, where you transform the data in such a way as to create meaningful features so that your algorithm has even less to figure out on its own. To give an analogy, it is like you wanted to read a book in a foreign language, so that you needed to learn the language first, versus reading it translated into a language that you understand. In the Titanic data example, your algorithm would need to figure out that summing family members makes sense, to get the "family size" feature (yes, I'm personifying it here). This is an obvious feature for a human, but it is not obvious if you see the data as just some columns of numbers. If you don't know what columns are meaningful when considered together with other columns, the algorithm could figure it out by trying each possible combination of such columns. Sure, we have clever ways of doing this, but still, it is much easier if the information is given to the algorithm right away.
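A tiny Python sketch of the XOR example (my own construction, since the original figure is not reproduced here): with the raw $(x, y)$ coordinates no single linear boundary separates the checkerboard classes, but the engineered feature $z = xy$ turns the class into a simple threshold on $z$.

import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, size=500)
y = rng.uniform(-1, 1, size=500)
label = ((x > 0) == (y > 0)).astype(int)   # checkerboard: 1 when the signs agree

z = x * y                                   # the engineered feature
pred = (z > 0).astype(int)                  # the "z > 0" rule from the answer
print((pred == label).mean())               # 1.0 on this noiseless toy data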
{ "source": [ "https://stats.stackexchange.com/questions/350220", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/210865/" ] }
350,228
I am running a logistic regression on 6 independent variables, and running chi-square tests shows a high degree of association between 4 of the independent variables. Most of the topics suggest first running univariate regressions and then adding variables one at a time. I haven't found any good guide for how to run this. Please help!
{ "source": [ "https://stats.stackexchange.com/questions/350228", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/210262/" ] }
350,923
This is an interview question for a quantitative analyst position, reported here . Suppose we are drawing from a uniform $[0,1]$ distribution and the draws are iid, what is the expected length of a monotonically increasing run of draws? I.e., we stop drawing if the current draw is smaller than or equal to the previous draw. I've gotten the first few: $$ \Pr(\text{length} = 1) = \int_0^1 \int_0^{x_1} \mathrm{d}x_2\, \mathrm{d}x_1 = 1/2 $$ $$ \Pr(\text{length} = 2) = \int_0^1 \int_{x_1}^1 \int_0^{x_2} \mathrm{d}x_3 \, \mathrm{d}x_2 \, \mathrm{d}x_1 = 1/3 $$ $$ \Pr(\text{length} = 3) = \int_0^1 \int_{x_1}^1 \int_{x_2}^1 \int_0^{x_3} \mathrm{d}x_4\, \mathrm{d}x_3\, \mathrm{d}x_2\, \mathrm{d}x_1 = 1/8 $$ but I find calculating these nested integrals increasingly difficult and I'm not getting the "trick" to generalize to $\Pr(\text{length} = n)$ . I know the final answer is structured $$ \mathbb E(\text{length}) = \sum_{n=1}^{\infty}n\Pr(\text{length} = n) $$ Any ideas on how to answer this question?
Here are some general hints on solving this question: You have a sequence of continuous IID random variables which means they are exchangeable . What does this imply about the probability of getting a particular order for the first $n$ values? Based on this, what is the probability of getting an increasing order for the first $n$ values? It is possible to figure this out without integrating over the distribution of the underlying random variables. If you do this well, you will be able to derive an answer without any assumption of a uniform distribution - i.e., you get an answer that applies for any exchangeable sequences of continuous random variables. Here is the full solution ( don't look if you are supposed to figure this out yourself ): Let $U_1, U_2, U_3, \cdots \sim \text{IID Continuous Dist}$ be your sequence of independent continuous random variables, and let $N \equiv \max \{ n \in \mathbb{N} | U_1 < U_2 < \cdots < U_n \}$ be the number of increasing elements at the start of the sequence. Because these are continuous exchangeable random variables, they are almost surely unequal to each other, and any ordering is equally likely, so we have: $$\mathbb{P}(N \geqslant n) = \mathbb{P}(U_1 < U_2 < \cdots < U_n) = \frac{1}{n!}.$$ (Note that this result holds for any IID sequence of continuous random variables; they don't have to have a uniform distribution.) So the random variable $N$ has probability mass function $$p_N(n) = \mathbb{P}(N=n) = \frac{1}{n!} - \frac{1}{(n+1)!} = \frac{n}{(n+1)!}.$$ You will notice that this result accords with the values you have calculated using integration over the underlying values. (This part isn't needed for the solution; it is included for completeness.) Using a well-known rule for the expected value of a non-negative random variable , we have: $$\mathbb{E}(N) = \sum_{n=1}^\infty \mathbb{P}(N \geqslant n) = \sum_{n=1}^\infty \frac{1}{n!} = e - 1 = 1.718282.$$ Note again that there is nothing in our working that used the underlying uniform distribution. Hence, this is a general result that applies to any exchangeable sequence of continuous random variables. Some further insights: From the above working we see that this distributional result and resulting expected value do not depend on the underlying distribution, so long as it is a continuous distribution. This is really not surprising once we consider the fact that every continuous scalar random variable can be obtained via a monotonic transformation of a uniform random variable (with the transformation being its quantile function). Since monotonic transformations preserve rank-order, looking at the probabilities of orderings of arbitrary IID continuous random variables is the same as looking at the probabilities of orderings of IID uniform random variables.
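A quick Monte Carlo check of this result (my own sketch): simulate the increasing run length for IID Uniform(0,1) draws and compare the average to $e - 1$.

import math
import random

random.seed(4)

def run_length():
    # count how many draws at the start of the sequence are strictly increasing
    n, prev = 0, -1.0
    while True:
        u = random.random()
        if u <= prev:
            return n
        n, prev = n + 1, u

samples = [run_length() for _ in range(200_000)]
print(sum(samples) / len(samples), math.e - 1)   # both approximately 1.718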
{ "source": [ "https://stats.stackexchange.com/questions/350923", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/181565/" ] }
351,549
Context The Multivariate Gaussian appears frequently in Machine Learning and the following results are used in many ML books and courses without the derivations. Given data in form of a matrix $\mathbf{X} $ of dimensions $ m \times p$, if we assume that the data follows a $p$-variate Gaussian distribution with parameters mean $\mu$ ( $p \times 1 $) and covariance matrix $\Sigma$ ($p \times p$) the Maximum Likelihood Estimators are given by: $\hat \mu = \frac{1}{m} \sum_{i=1}^m \mathbf{ x^{(i)} } = \mathbf{\bar{x}}$ $\hat \Sigma = \frac{1}{m} \sum_{i=1}^m \mathbf{(x^{(i)} - \hat \mu) (x^{(i)} -\hat \mu)}^T $ I understand that knowledge of the multivariate Gaussian is a pre-requisite for many ML courses, but it would be helpful to have the full derivation in a self contained answer once and for all as I feel many self-learners are bouncing around the stats.stackexchange and math.stackexchange websites looking for answers. Question What is the full derivation of the Maximum Likelihood Estimators for the multivariate Gaussian Examples: These lecture notes (page 11) on Linear Discriminant Analysis, or these ones make use of the results and assume previous knowledge. There are also a few posts which are partly answered or closed: Maximum likelihood estimator for multivariate normal distribution Need help to understand Maximum Likelihood Estimation for multivariate normal distribution?
Deriving the Maximum Likelihood Estimators Assume that we have $m$ random vectors, each of size $p$ : $\mathbf{X^{(1)}, X^{(2)}, \dotsc, X^{(m)}}$ where each random vectors can be interpreted as an observation (data point) across $p$ variables. If each $\mathbf{X}^{(i)}$ are i.i.d. as multivariate Gaussian vectors: $$ \mathbf{X^{(i)}} \sim \mathcal{N}_p(\mu, \Sigma) $$ Where the parameters $\mu, \Sigma$ are unknown. To obtain their estimate we can use the method of maximum likelihood and maximize the log likelihood function. Note that by the independence of the random vectors, the joint density of the data $\mathbf{ \{X^{(i)}}, i = 1,2, \dotsc ,m\}$ is the product of the individual densities, that is $\prod_{i=1}^m f_{\mathbf{X^{(i)}}}(\mathbf{x^{(i)} ; \mu , \Sigma })$ . Taking the logarithm gives the log-likelihood function \begin{aligned} l(\mathbf{ \mu, \Sigma | x^{(i)} }) & = \log \prod_{i=1}^m f_{\mathbf{X^{(i)}}}(\mathbf{x^{(i)} | \mu , \Sigma }) \\ & = \log \ \prod_{i=1}^m \frac{1}{(2 \pi)^{p/2} |\Sigma|^{1/2}} \exp \left( - \frac{1}{2} \mathbf{(x^{(i)} - \mu)^T \Sigma^{-1} (x^{(i)} - \mu) } \right) \\ & = \sum_{i=1}^m \left( - \frac{p}{2} \log (2 \pi) - \frac{1}{2} \log |\Sigma| - \frac{1}{2} \mathbf{(x^{(i)} - \mu)^T \Sigma^{-1} (x^{(i)} - \mu) } \right) \end{aligned} \begin{aligned} l(\mu, \Sigma ; ) & = - \frac{mp}{2} \log (2 \pi) - \frac{m}{2} \log |\Sigma| - \frac{1}{2} \sum_{i=1}^m \mathbf{(x^{(i)} - \mu)^T \Sigma^{-1} (x^{(i)} - \mu) } \end{aligned} Deriving $\hat \mu$ To take the derivative with respect to $\mu$ and equate to zero we will make use of the following matrix calculus identity: $\mathbf{ \frac{\partial w^T A w}{\partial w} = 2Aw}$ if $\mathbf{w}$ does not depend on $\mathbf{A}$ and $\mathbf{A}$ is symmetric. \begin{aligned} \frac{\partial }{\partial \mu} l(\mathbf{ \mu, \Sigma | x^{(i)} }) & = \sum_{i=1}^m \mathbf{ \Sigma^{-1} ( x^{(i)} - \mu ) } = 0 \\ & \text{Since $\Sigma$ is positive definite} \\ 0 & = m \mu - \sum_{i=1}^m \mathbf{ x^{(i)} } \\ \hat \mu &= \frac{1}{m} \sum_{i=1}^m \mathbf{ x^{(i)} } = \mathbf{\bar{x}} \end{aligned} Which is often called the sample mean vector. Deriving $\hat \Sigma$ Deriving the MLE for the covariance matrix requires more work and the use of the following linear algebra and calculus properties: The trace is invariant under cyclic permutations of matrix products: $\mathrm{tr}\left[ABC\right] = \mathrm{tr}\left[CAB\right] = \mathrm{tr}\left[BCA\right]$ Since $x^TAx$ is scalar, we can take its trace and obtain the same value: $x^TAx = \mathrm{tr}\left[x^TAx\right] = \mathrm{tr}\left[xx^TA\right]$ $\frac{\partial}{\partial A} \mathrm{tr}\left[AB\right] = B^T$ $\frac{\partial}{\partial A} \log |A| = (A^{-1})^T = (A^T)^{-1}$ The determinant of the inverse of an invertible matrix is the inverse of the determinant: $|A| = \frac{1}{|A^{-1}|}$ Combining these properties allows us to calculate $$ \frac{\partial}{\partial A} x^TAx =\frac{\partial}{\partial A} \mathrm{tr}\left[xx^TA\right] = [xx^T]^T = \left(x^{T}\right)^Tx^T = xx^T $$ Which is the outer product of the vector $x$ with itself. We can now re-write the log-likelihood function and compute the derivative w.r.t. 
$\Sigma^{-1}$ (note $C$ is constant) \begin{aligned} l(\mathbf{ \mu, \Sigma | x^{(i)} }) & = \text{C} - \frac{m}{2} \log |\Sigma| - \frac{1}{2} \sum_{i=1}^m \mathbf{(x^{(i)} - \mu)^T \Sigma^{-1} (x^{(i)} - \mu) } \\ & = \text{C} + \frac{m}{2} \log |\Sigma^{-1}| - \frac{1}{2} \sum_{i=1}^m \mathrm{tr}\left[ \mathbf{(x^{(i)} - \mu) (x^{(i)} - \mu)^T \Sigma^{-1} } \right] \\ \frac{\partial }{\partial \Sigma^{-1}} l(\mathbf{ \mu, \Sigma | x^{(i)} }) & = \frac{m}{2} \Sigma - \frac{1}{2} \sum_{i=1}^m \mathbf{(x^{(i)} - \mu) (x^{(i)} - \mu)}^T \ \ \text{Since $\Sigma^T = \Sigma$} \end{aligned} Equating to zero and solving for $\Sigma$ \begin{aligned} 0 &= m \Sigma - \sum_{i=1}^m \mathbf{(x^{(i)} - \mu) (x^{(i)} - \mu)}^T \\ \hat \Sigma & = \frac{1}{m} \sum_{i=1}^m \mathbf{(x^{(i)} - \hat \mu) (x^{(i)} -\hat \mu)}^T \end{aligned} Sources https://people.eecs.berkeley.edu/~jordan/courses/260-spring10/other-readings/chapter13.pdf http://ttic.uchicago.edu/~shubhendu/Slides/Estimation.pdf
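A quick numerical check of these closed-form estimators (my own sketch, with arbitrary parameter values): the formulas should match the sample mean and the $1/m$-normalized sample covariance computed directly from simulated data.

import numpy as np

rng = np.random.default_rng(5)
m, p = 2000, 3
mu = np.array([1.0, -2.0, 0.5])
Sigma = np.array([[1.0, 0.3, 0.0],
                  [0.3, 2.0, 0.5],
                  [0.0, 0.5, 1.5]])
X = rng.multivariate_normal(mu, Sigma, size=m)        # rows are observations

mu_hat = X.mean(axis=0)                               # (1/m) sum of x_i
centered = X - mu_hat
Sigma_hat = (centered.T @ centered) / m               # (1/m) sum of outer products

# Same quantity via numpy's covariance with no Bessel correction (divide by m)
print(np.allclose(Sigma_hat, np.cov(X, rowvar=False, bias=True)))   # True
print(mu_hat.round(2), "\n", Sigma_hat.round(2))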
{ "source": [ "https://stats.stackexchange.com/questions/351549", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/192854/" ] }
351,781
I am trying to plot presence/absence (1/0) of a sample species against various environmental variables. I have put presence/absence on the y-axis and the environmental variable (in this case barometric pressure) on the x axis, however the resulting plot looks terrible. Is there a better way to do this? I thought of plotting presence/absence against the frequency of the environmental variable, would this be possible?
If I understood the question correctly - you might want to use a "conditional density plot". Such a plot provides a smoothed overview of how a categorical variable changes across various levels of a continuous numerical variable. Example For a real-world example here is the distribution of Sepal Width across 3 different species in the iris dataset: cdplot(Species ~ Sepal.Width, data=iris) Interpretation These plots represent smoothed proportions of each category within various levels of the continuous variable. In order to interpret them you should look across the x-axis and see how the different proportions for each category (represented by different colors) change with the different values of the numerical variable. For example consider the picture above: it is quite easy to see that when sepal width reaches 3.5 or above you are most likely dealing with the setosa type of flower. At sepal width 2.0 the versicolor dominates. And at 3.0 there are about 20% setosa, 35% versicolor and 45% virginica (judging by eye according to the scales on the y-axis on the right.) For another discussion about interpretation of such plots consider reading answers in this question: Interpretation of conditional density plots Your case Of course in your case you would have 2 categories on the y-axis. So the final picture would look closer to this example:

set.seed(14)
presence <- factor(rbinom(20, 1, 0.5))
presence
 [1] 0 1 1 1 1 1 1 0 0 0 1 0 0 1 1 1 0 1 1 1
Levels: 0 1
pressure <- runif(20, 1000, 1035)
pressure
 [1] 1012.282 1014.687 1021.619 1024.159 1026.247 1021.663 1013.469 1018.317 1024.054 1002.747 1028.396 1004.806 1033.906 1022.898 1033.127 1004.378 1019.386 1016.432 1030.160 1021.567
cdplot(presence ~ pressure)

Interpretation stays the same, except you will be dealing with a binary categorical variable. In this particular case the plot would suggest that the presence (1, light grey area) is increasing with increasing values of pressure (x-axis).
{ "source": [ "https://stats.stackexchange.com/questions/351781", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/211869/" ] }
352,036
I'm training a neural network but the training loss doesn't decrease. How can I fix this? I'm not asking about overfitting or regularization. I'm asking about how to solve the problem where my network's performance doesn't improve on the training set . This question is intentionally general so that other questions about how to train a neural network can be closed as a duplicate of this one, with the attitude that "if you give a man a fish you feed him for a day, but if you teach a man to fish, you can feed him for the rest of his life." See this Meta thread for a discussion: What's the best way to answer "my neural network doesn't work, please fix" questions? If your neural network does not generalize well, see: What should I do when my neural network doesn't generalize well?
Verify that your code is bug free There's a saying among writers that "All writing is re-writing" -- that is, the greater part of writing is revising. For programmers (or at least data scientists) the expression could be re-phrased as "All coding is debugging." Any time you're writing code, you need to verify that it works as intended. The best method I've ever found for verifying correctness is to break your code into small segments, and verify that each segment works. This can be done by comparing the segment output to what you know to be the correct answer. This is called unit testing . Writing good unit tests is a key piece of becoming a good statistician/data scientist/machine learning expert/neural network practitioner. There is simply no substitute. You have to check that your code is free of bugs before you can tune network performance! Otherwise, you might as well be re-arranging deck chairs on the RMS Titanic . There are two features of neural networks that make verification even more important than for other types of machine learning or statistical models. Neural networks are not "off-the-shelf" algorithms in the way that random forest or logistic regression are. Even for simple, feed-forward networks, the onus is largely on the user to make numerous decisions about how the network is configured, connected, initialized and optimized. This means writing code, and writing code means debugging. Even when a neural network code executes without raising an exception, the network can still have bugs! These bugs might even be the insidious kind for which the network will train, but get stuck at a sub-optimal solution, or the resulting network does not have the desired architecture. ( This is an example of the difference between a syntactic and semantic error .) This Medium post, " How to unit test machine learning code ," by Chase Roberts discusses unit-testing for machine learning models in more detail. I borrowed this example of buggy code from the article:

def make_convnet(input_image):
    net = slim.conv2d(input_image, 32, [11, 11], scope="conv1_11x11")
    net = slim.conv2d(input_image, 64, [5, 5], scope="conv2_5x5")
    net = slim.max_pool2d(net, [4, 4], stride=4, scope='pool1')
    net = slim.conv2d(input_image, 64, [5, 5], scope="conv3_5x5")
    net = slim.conv2d(input_image, 128, [3, 3], scope="conv4_3x3")
    net = slim.max_pool2d(net, [2, 2], scope='pool2')
    net = slim.conv2d(input_image, 128, [3, 3], scope="conv5_3x3")
    net = slim.max_pool2d(net, [2, 2], scope='pool3')
    net = slim.conv2d(input_image, 32, [1, 1], scope="conv6_1x1")
    return net

Do you see the error? Many of the different operations are not actually used because previous results are over-written with new variables. Using this block of code in a network will still train and the weights will update and the loss might even decrease -- but the code definitely isn't doing what was intended. (The author is also inconsistent about using single- or double-quotes but that's purely stylistic.) The most common programming errors pertaining to neural networks are Variables are created but never used (usually because of copy-paste errors); Expressions for gradient updates are incorrect; Weight updates are not applied; Loss functions are not measured on the correct scale (for example, cross-entropy loss can be expressed in terms of probability or logits) The loss is not appropriate for the task (for example, using categorical cross-entropy loss for a regression task). Dropout is used during testing, instead of only being used for training.
Make sure you're minimizing the loss function $L(x)$ , instead of minimizing $-L(x)$ . Make sure your loss is computed correctly . Unit testing is not just limited to the neural network itself. You need to test all of the steps that produce or transform data and feed into the network. Some common mistakes here are NA or NaN or Inf values in your data creating NA or NaN or Inf values in the output, and therefore in the loss function. Shuffling the labels independently from the samples (for instance, creating train/test splits for the labels and samples separately); Accidentally assigning the training data as the testing data; When using a train/test split, the model references the original, non-split data instead of the training partition or the testing partition. Forgetting to scale the testing data; Scaling the testing data using the statistics of the test partition instead of the train partition; Forgetting to un-scale the predictions (e.g. pixel values are in [0,1] instead of [0, 255]). Here's an example of a question where the problem appears to be one of model configuration or hyperparameter choice, but actually the problem was a subtle bug in how gradients were computed. Is this drop in training accuracy due to a statistical or programming error? For the love of all that is good, scale your data The scale of the data can make an enormous difference on training. Sometimes, networks simply won't reduce the loss if the data isn't scaled. Other networks will decrease the loss, but only very slowly. Scaling the inputs (and certain times, the targets) can dramatically improve the network's training. Prior to presenting data to a neural network, standardizing the data to have 0 mean and unit variance, or to lie in a small interval like $[-0.5, 0.5]$ can improve training. This amounts to pre-conditioning, and removes the effect that a choice in units has on network weights. For example, length in millimeters and length in kilometers both represent the same concept, but are on different scales. The exact details of how to standardize the data depend on what your data look like. Data normalization and standardization in neural networks Why does $[0,1]$ scaling dramatically increase training time for feed forward ANN (1 hidden layer)? Batch or Layer normalization can improve network training. Both seek to improve the network by keeping a running mean and standard deviation for neurons' activations as the network trains. It is not well-understood why this helps training, and remains an active area of research. " Understanding Batch Normalization " by Johan Bjorck, Carla Gomes, Bart Selman " Towards a Theoretical Understanding of Batch Normalization " by Jonas Kohler, Hadi Daneshmand, Aurelien Lucchi, Ming Zhou, Klaus Neymeyr, Thomas Hofmann " How Does Batch Normalization Help Optimization? (No, It Is Not About Internal Covariate Shift) " by Shibani Santurkar, Dimitris Tsipras, Andrew Ilyas, Aleksander Madry Crawl Before You Walk; Walk Before You Run Wide and deep neural networks, and neural networks with exotic wiring, are the Hot Thing right now in machine learning. But these networks didn't spring fully-formed into existence; their designers built up to them from smaller units. First, build a small network with a single hidden layer and verify that it works correctly. Then incrementally add additional model complexity, and verify that each of those works as well. Too few neurons in a layer can restrict the representation that the network learns, causing under-fitting. 
Too many neurons can cause over-fitting because the network will "memorize" the training data. Even if you can prove that there is, mathematically, only a small number of neurons necessary to model a problem, it is often the case that having "a few more" neurons makes it easier for the optimizer to find a "good" configuration. (But I don't think anyone fully understands why this is the case.) I provide an example of this in the context of the XOR problem here: Aren't my iterations needed to train NN for XOR with MSE < 0.001 too high? . Choosing the number of hidden layers lets the network learn an abstraction from the raw data. Deep learning is all the rage these days, and networks with a large number of layers have shown impressive results. But adding too many hidden layers can risk overfitting or make it very hard to optimize the network. Choosing a clever network wiring can do a lot of the work for you. Is your data source amenable to specialized network architectures? Convolutional neural networks can achieve impressive results on "structured" data sources, such as image or audio data. Recurrent neural networks can do well on sequential data types, such as natural language or time series data. Residual connections can improve deep feed-forward networks. Neural Network Training Is Like Lock Picking To achieve state of the art, or even merely good, results, you have to have all of the parts configured to work well together . Setting up a neural network configuration that actually learns is a lot like picking a lock: all of the pieces have to be lined up just right. Just as it is not sufficient to have a single tumbler in the right place, neither is it sufficient to have only the architecture, or only the optimizer, set up correctly. Tuning configuration choices is not really as simple as saying that one kind of configuration choice (e.g. learning rate) is more or less important than another (e.g. number of units), since all of these choices interact with all of the other choices: one choice can do well in combination with another choice made elsewhere . This is a non-exhaustive list of the configuration options which are not also regularization options or numerical optimization options. All of these topics are active areas of research. The network initialization is often overlooked as a source of neural network bugs. Initialization over too-large an interval can set initial weights too large, meaning that single neurons have an outsize influence over the network behavior. The key difference between a neural network and a regression model is that a neural network is a composition of many nonlinear functions, called activation functions . (See: What is the essential difference between neural network and linear regression ) Classical neural network results focused on sigmoidal activation functions (logistic or $\tanh$ functions). A recent result has found that ReLU (or similar) units tend to work better because they have steeper gradients, so updates can be applied quickly. (See: Why do we use ReLU in neural networks and how do we use it? ) One caution about ReLUs is the "dead neuron" phenomenon, which can stymie learning; leaky relus and similar variants avoid this problem. See Why can't a single ReLU learn a ReLU? My ReLU network fails to launch There are a number of other options. See: Comprehensive list of activation functions in neural networks with pros/cons Residual connections are a neat development that can make it easier to train neural networks.
"Deep Residual Learning for Image Recognition" Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun In: CVPR. (2016). Additionally, changing the order of operations within the residual block can further improve the resulting network. " Identity Mappings in Deep Residual Networks " by Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Non-convex optimization is hard The objective function of a neural network is only convex when there are no hidden units, all activations are linear, and the design matrix is full-rank -- because this configuration is identically an ordinary regression problem. In all other cases, the optimization problem is non-convex, and non-convex optimization is hard. The challenges of training neural networks are well-known (see: Why is it hard to train deep neural networks? ). Additionally, neural networks have a very large number of parameters, which restricts us to solely first-order methods (see: Why is Newton's method not widely used in machine learning? ). This is a very active area of research. Setting the learning rate too large will cause the optimization to diverge, because you will leap from one side of the "canyon" to the other. Setting this too small will prevent you from making any real progress, and possibly allow the noise inherent in SGD to overwhelm your gradient estimates. See: How can change in cost function be positive? Gradient clipping re-scales the norm of the gradient if it's above some threshold. I used to think that this was a set-and-forget parameter, typically at 1.0, but I found that I could make an LSTM language model dramatically better by setting it to 0.25. I don't know why that is. Learning rate scheduling can decrease the learning rate over the course of training. In my experience, trying to use scheduling is a lot like regex : it replaces one problem ("How do I get learning to continue after a certain epoch?") with two problems ("How do I get learning to continue after a certain epoch?" and "How do I choose a good schedule?"). Other people insist that scheduling is essential. I'll let you decide. Choosing a good minibatch size can influence the learning process indirectly, since a larger mini-batch will tend to have a smaller variance ( law-of-large-numbers ) than a smaller mini-batch. You want the mini-batch to be large enough to be informative about the direction of the gradient, but small enough that SGD can regularize your network. There are a number of variants on stochastic gradient descent which use momentum, adaptive learning rates, Nesterov updates and so on to improve upon vanilla SGD. Designing a better optimizer is very much an active area of research. Some examples: No change in accuracy using Adam Optimizer when SGD works fine How does the Adam method of stochastic gradient descent work? Why does momentum escape from a saddle point in this famous image? When it first came out, the Adam optimizer generated a lot of interest. But some recent research has found that SGD with momentum can out-perform adaptive gradient methods for neural networks. " The Marginal Value of Adaptive Gradient Methods in Machine Learning " by Ashia C. Wilson, Rebecca Roelofs, Mitchell Stern, Nathan Srebro, Benjamin Recht But on the other hand, this very recent paper proposes a new adaptive learning-rate optimizer which supposedly closes the gap between adaptive-rate methods and SGD with momentum. 
" Closing the Generalization Gap of Adaptive Gradient Methods in Training Deep Neural Networks " by Jinghui Chen, Quanquan Gu Adaptive gradient methods, which adopt historical gradient information to automatically adjust the learning rate, have been observed to generalize worse than stochastic gradient descent (SGD) with momentum in training deep neural networks. This leaves how to close the generalization gap of adaptive gradient methods an open problem. In this work, we show that adaptive gradient methods such as Adam, Amsgrad, are sometimes "over adapted". We design a new algorithm, called Partially adaptive momentum estimation method (Padam), which unifies the Adam/Amsgrad with SGD to achieve the best from both worlds. Experiments on standard benchmarks show that Padam can maintain fast convergence rate as Adam/Amsgrad while generalizing as well as SGD in training deep neural networks. These results would suggest practitioners pick up adaptive gradient methods once again for faster training of deep neural networks. Specifically for triplet-loss models, there are a number of tricks which can improve training time and generalization. See: In training a triplet network, I first have a solid drop in loss, but eventually the loss slowly but consistently increases. What could cause this? Regularization Choosing and tuning network regularization is a key part of building a model that generalizes well (that is, a model that is not overfit to the training data). However, at the time that your network is struggling to decrease the loss on the training data -- when the network is not learning -- regularization can obscure what the problem is. When my network doesn't learn, I turn off all regularization and verify that the non-regularized network works correctly. Then I add each regularization piece back, and verify that each of those works along the way. This tactic can pinpoint where some regularization might be poorly set. Some examples are $L^2$ regularization (aka weight decay) or $L^1$ regularization is set too large, so the weights can't move. Two parts of regularization are in conflict. For example, it's widely observed that layer normalization and dropout are difficult to use together. Since either on its own is very useful, understanding how to use both is an active area of research. " Understanding the Disharmony between Dropout and Batch Normalization by Variance Shift " by Xiang Li, Shuo Chen, Xiaolin Hu, Jian Yang " Adjusting for Dropout Variance in Batch Normalization and Weight Initialization " by Dan Hendrycks, Kevin Gimpel. " Self-Normalizing Neural Networks " by Günter Klambauer, Thomas Unterthiner, Andreas Mayr and Sepp Hochreiter Keep a Logbook of Experiments When I set up a neural network, I don't hard-code any parameter settings. Instead, I do that in a configuration file (e.g., JSON) that is read and used to populate network configuration details at runtime. I keep all of these configuration files. If I make any parameter modification, I make a new configuration file. Finally, I append as comments all of the per-epoch losses for training and validation. The reason that I'm so obsessive about retaining old results is that this makes it very easy to go back and review previous experiments. It also hedges against mistakenly repeating the same dead-end experiment. Psychologically, it also lets you look back and observe "Well, the project might not be where I want it to be today, but I am making progress compared to where I was $k$ weeks ago." 
As an example, I wanted to learn about LSTM language models, so I decided to make a Twitter bot that writes new tweets in response to other Twitter users. I worked on this in my free time, between grad school and my job. It took about a year, and I iterated over about 150 different models before getting to a model that did what I wanted: generate new English-language text that (sort of) makes sense. (One key sticking point, and part of the reason that it took so many attempts, is that it was not sufficient to simply get a low out-of-sample loss, since early low-loss models had managed to memorize the training data, so it was just reproducing germane blocks of text verbatim in reply to prompts -- it took some tweaking to make the model more spontaneous and still have low loss.)
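To make the scaling advice above concrete, here is a minimal R sketch (the objects train_x and test_x are hypothetical predictor matrices) that standardizes using the training partition's statistics only, which avoids the "scaled the test set with its own statistics" pitfall listed earlier:

train_means <- colMeans(train_x)                  # statistics from the training partition only
train_sds   <- apply(train_x, 2, sd)
train_x_std <- scale(train_x, center = train_means, scale = train_sds)
test_x_std  <- scale(test_x,  center = train_means, scale = train_sds)   # reuse the training statistics here

The same idea applies to the targets if those are scaled, and to un-scaling any predictions afterwards.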
{ "source": [ "https://stats.stackexchange.com/questions/352036", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/22311/" ] }
352,688
When building a predictive model using machine learning techniques, what is the point of doing an exploratory data analysis (EDA)? Is it okay to jump straight to feature generation and building your model(s)? How are descriptive statistics used in EDA important?
Not long ago, I had an interview task for a data science position. I was given a data set and asked to build a predictive model to predict a certain binary variable given the others, with a time limit of a few hours. I went through each of the variables in turn, graphing them, calculating summary statistics etc. I also calculated correlations between the numerical variables. Among the things I found were: One categorical variable almost perfectly matched the target. Two or three variables had over half of their values missing. A couple of variables had extreme outliers. Two of the numerical variables were perfectly correlated. etc. My point is that these were things which had been put in deliberately to see whether people would notice them before trying to build a model. The company put them in because they are the sort of thing which can happen in real life, and drastically affect model performance. So yes, EDA is important when doing machine learning!
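A minimal R sketch of the kind of checks described above (df and target are hypothetical names for the data frame and the outcome column):

summary(df)                                  # ranges, obvious outliers, skew
colMeans(is.na(df))                          # fraction of missing values per variable
num <- df[sapply(df, is.numeric)]
cor(num, use = "pairwise.complete.obs")      # perfectly correlated pairs show up as +/- 1
table(df$some_factor, df$target)             # a categorical that almost matches the target

A few minutes spent on output like this is usually enough to catch the sort of traps mentioned above before any model is fit.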
{ "source": [ "https://stats.stackexchange.com/questions/352688", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/43414/" ] }
352,750
I have already trained a neural network on my data. In the future, I will receive some more data. How can I incorporate this data into my model without rebuilding it from scratch?
In keras, you can save your model using model.save and then load it again using keras.models.load_model . If you call .fit again on the model that you've loaded, it will continue training from the save point and will not restart from scratch. Each time you call .fit , keras will continue training on the model. .fit does not reset model weights. I would like to point out one issue that might arise from training your model this way though, and that issue is catastrophic forgetting . If you feed your model examples that differ significantly from previous training examples, it might be prone to catastrophic forgetting. This is basically when the neural network learns your new examples well and forgets all the previous examples because you are no longer feeding those examples to it. It arises because as optimizers get more efficient, the neural network will get more efficient at fitting new data quickly - and the best way to fit the new data quickly might be to forget old data. If your future data is very similar to your current data, then this won't be a problem. But imagine you trained a named entity recognition system to recognize organizations. If in the future you feed it a bunch of data that teaches it how to recognize people names, it might catastrophically forget how to recognize organizations.
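A minimal sketch of the same idea using the R interface to keras (model is the already-trained model; x_new and y_new are hypothetical arrays of new data):

library(keras)
save_model_hdf5(model, "model.h5")          # persist the trained model, optimizer state included
model <- load_model_hdf5("model.h5")        # restore it later
model %>% fit(x_new, y_new, epochs = 5)     # fit() continues training; it does not reset the weights

If the new data differ a lot from the old, mixing some of the old examples into x_new/y_new is a simple way to reduce the catastrophic forgetting described above.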
{ "source": [ "https://stats.stackexchange.com/questions/352750", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/178353/" ] }
353,857
I'll propose this question by means of an example. Suppose I have a data set, such as the boston housing price data set, in which I have continuous and categorical variables. Here, we have a "quality" variable, from 1 to 10, and the sale price. I can separate the data into "low", "medium" and "high" quality houses by (arbitrarily) creating cutoffs for the quality. Then, using these groupings, I can plot histograms of the sale price against each other. Like so: Here, "low" is $\leq 3$, and "high" is $>7$ on the "quality" score. We now have a distribution of the sale prices for each of the three groups. It is clear that there is a difference in the center of location for the medium and high quality houses. Now, having done all this, I think "Hm. There appears to be a difference in center of location! Why don't I do a t-test on the means?". Then, I get a p-value that appears to correctly reject the null hypothesis that there is no difference in means. Now, suppose that I had nothing in mind for testing this hypothesis until I plotted the data. Is this data dredging? Is it still data dredging if I thought: "Hm, I bet the higher quality houses cost more, since I am a human that has lived in a house before. I'm going to plot the data. Ah ha! Looks different! Time to t-test!" Naturally, it is not data-dredging if the data set were collected with the intention of testing this hypothesis from the get-go. But often one has to work with data sets given to us, and are told to "look for patterns". How does someone avoid data dredging with this vague task in mind? Create hold out sets for testing data? Does visualization "count" as snooping for an opportunity to test a hypothesis suggested by the data?
Briefly disagreeing with/giving a counterpoint to @ingolifs's answer: yes, visualizing your data is essential. But visualizing before deciding on the analysis leads you into Gelman and Loken's garden of forking paths . This is not the same as data-dredging or p-hacking, partly through intent (the GoFP is typically well-meaning) and partly because you may not run more than one analysis. But it is a form of snooping: because your analysis is data-dependent, it can lead you to false or overconfident conclusions. You should in some way determine what your intended analysis is (e.g. "high quality houses should be higher in price") and write it down (or even officially preregister it) before looking at your data (it's OK to look at your predictor variables in advance, just not the response variable(s), but if you really have no a priori ideas then you don't even know which variables might be predictors and which might be responses); if your data suggest some different or additional analyses, then your write-up can state both what you meant to do initially and what (and why) you ended up doing it. If you are really doing pure exploration (i.e., you have no a priori hypotheses, you just want to see what's in the data): your thoughts about holding out a sample for confirmation are good. In my world (I don't work with huge data sets) the loss of resolution due to having a lower sample size would be agonizing you need to be a bit careful in selecting your holdout sample if your data are structured in any way (geographically, time series, etc. etc.). Subsampling as though the data are iid leads to overconfidence (see Wenger and Olden Methods in Ecology and Evolution 2012), so you might want to pick out geographic units to hold out (see DJ Harris Methods in Ecology and Evolution 2015 for an example) you can admit that you're being purely exploratory. Ideally you would eschew p-values entirely in this case, but at least telling your audience that you are wandering in the GoFP lets them know that they can take the p-values with enormous grains of salt. My favorite reference for "safe statistical practices" is Harrell's Regression Modeling Strategies (Springer); he lays out best practices for inference vs. prediction vs. exploration, in a rigorous but practical way.
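A minimal R sketch of setting aside a confirmation sample before any exploration (dat is a hypothetical data frame; for structured data you would sample whole units, e.g. sites or time blocks, instead of rows, per the caveat above):

set.seed(1)
idx     <- sample(nrow(dat), size = floor(0.7 * nrow(dat)))
explore <- dat[idx, ]     # plot and dredge this part as much as you like
confirm <- dat[-idx, ]    # touch this part only once, for the pre-specified test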
{ "source": [ "https://stats.stackexchange.com/questions/353857", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/205400/" ] }
354,098
I'm using a binomial logistic regression to identify if exposure to has_x or has_y impacts the likelihood that a user will click on something. My model is the following: fit = glm(formula = has_clicked ~ has_x + has_y, data=df, family = binomial()) This the output from my model: Call: glm(formula = has_clicked ~ has_x + has_y, family = binomial(), data = active_domains) Deviance Residuals: Min 1Q Median 3Q Max -0.9869 -0.9719 -0.9500 1.3979 1.4233 Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) -0.504737 0.008847 -57.050 < 2e-16 *** has_xTRUE -0.056986 0.010201 -5.586 2.32e-08 *** has_yTRUE 0.038579 0.010202 3.781 0.000156 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 (Dispersion parameter for binomial family taken to be 1) Null deviance: 217119 on 164182 degrees of freedom Residual deviance: 217074 on 164180 degrees of freedom AIC: 217080 Number of Fisher Scoring iterations: 4 As each coefficient is significant, using this model I'm able to tell what the value of any of these combinations is using the following approach: predict(fit, data.frame(has_x = T, has_y=T), type = "response") I don't understand how I can report on the Std. Error of the prediction. Do I just need to use $1.96*SE$? Or do I need to convert the $SE$ using an approach described here ? If I want to understand the standard-error for both variables how would I consider that? Unlike this question , I am interested in understanding what the upper and lower bounds of the error are in a percentage. For example, of my prediction shows a value of 37% for True,True can I calculate that this is $+/- 0.3%$ for a $95\% CI$? (0.3% chosen to illustrate my point)
Your question may come from the fact that you are dealing with Odds Ratios and Probabilities which is confusing at first. Since the logistic model is a non linear transformation of $\beta^Tx$ computing the confidence intervals is not as straightforward. Background Recall that for the Logistic regression model Probability of $(Y = 1)$ : $p = \frac{e^{\alpha + \beta_1x_1 + \beta_2 x_2}}{1 + e^{ \alpha + \beta_1x_1 + \beta_2 x_2}}$ Odds of $(Y = 1)$ : $ \left( \frac{p}{1-p}\right) = e^{\alpha + \beta_1x_1 + \beta_2 x_2}$ Log Odds of $(Y = 1)$ : $ \log \left( \frac{p}{1-p}\right) = \alpha + \beta_1x_1 + \beta_2 x_2$ Consider the case where you have a one unit increase in variable $x_1$ , i.e. $x_1 + 1$ , then the new odds are $$ \text{Odds}(Y = 1) = e^{\alpha + \beta_1(x_1 + 1) + \beta_2x_2} = e^{\alpha + \beta_1 x_1 + \beta_1 + \beta_2x_2 } $$ Odds Ratio (OR) are therefore $$ \frac{\text{Odds}(x_1 + 1)}{\text{Odds}(x_1)} = \frac{e^{\alpha + \beta_1(x_1 + 1) + \beta_2x_2} }{e^{\alpha + \beta_1 x_1 + \beta_2x_2}} = e^{\beta_1} $$ Log Odds Ratio = $\beta_1$ Relative risk or (probability ratio) = $\frac{ \frac{e^{\alpha + \beta_1x_1 + \beta_1 + \beta_2 x_2}}{1 + e^{ \alpha + \beta_1x_1 + \beta_1 + \beta_2 x_2}}}{ \frac{e^{\alpha + \beta_1x_1 + \beta_2 x_2}}{1 + e^{ \alpha + \beta_1x_1 + \beta_2 x_2}}}$ Interpreting coefficients How would you interpret the coefficient value $\beta_j$ ? Assuming that everything else remains fixed: For every unit increase in $x_j$ the log-odds ratio increases by $\beta_j$ . For every unit increase in $x_j$ the odds ratio increases by $e^{\beta_j}$ . For every increase of $x_j$ from $k$ to $k + \Delta$ the odds ratio increases by $e^{\beta_j \Delta}$ If the coefficient is negative, then an increase in $x_j$ leads to a decrease in the odds ratio. Confidence intervals for a single parameter $\beta_j$ Do I just need to use $1.96∗SE$ ? Or do I need to convert the SE using an approach described here? Since the parameter $\beta_j$ is estimated using Maxiumum Likelihood Estimation, MLE theory tells us that it is asymptotically normal and hence we can use the large sample Wald confidence interval to get the usual $$ \beta_j \pm z^* SE(\beta_j)$$ Which gives a confidence interval on the log-odds ratio. Using the invariance property of the MLE allows us to exponentiate to get $$ e^{\beta_j \pm z^* SE(\beta_j)}$$ which is a confidence interval on the odds ratio. Note that these intervals are for a single parameter only. If I want to understand the standard-error for both variables how would I consider that? If you include several parameters you can use the Bonferroni procedure, otherwise for all parameters you can use the confidence interval for probability estimates Bonferroni procedure for several parameters If $g$ parameters are to be estimated with family confidence coefficient of approximately $1 - \alpha$ , the joint Bonferroni confidence limits are $$ \beta_g \pm z_{(1 - \frac{\alpha}{2g})}SE(\beta_g)$$ Confidence intervals for probability estimates The logistic model outputs an estimation of the probability of observing a one and we aim to construct a frequentist interval around the true probability $p$ such that $Pr(p_{L} \leq p \leq p_{U}) = .95$ One approach called endpoint transformation does the following: Compute the upper and lower bounds of the confidence interval for the linear combination $x^T\beta$ (using the Wald CI) Apply a monotonic transformation to the endpoints $F(x^T\beta)$ to obtain the probabilities. 
Since $Pr(x^T\beta) = F(x^T\beta)$ is a monotonic transformation of $x^T\beta$ $$ [Pr(x^T\beta)_L \leq Pr(x^T\beta) \leq Pr(x^T\beta)_U] = [F(x^T\beta)_L \leq F(x^T\beta) \leq F(x^T\beta)_U] $$ Concretely this means computing $\beta^Tx \pm z^* SE(\beta^Tx)$ and then applying the inverse logit (logistic) transform to the result to get the lower and upper bounds: $$\left[\frac{e^{x^T\beta - z^* SE(x^T\beta)}}{1 + e^{x^T\beta - z^* SE(x^T\beta)}}, \frac{e^{x^T\beta + z^* SE(x^T\beta)}}{1 + e^{x^T\beta + z^* SE(x^T\beta)}}\right] $$ The estimated approximate variance of $x^T\beta$ can be calculated using the covariance matrix of the regression coefficients via $$ Var(x^T\beta) = x^T \Sigma x$$ The advantage of this method is that the bounds cannot be outside the range $(0,1)$. There are several other approaches as well, using the delta method, bootstrapping etc., which each have their own assumptions, advantages and limits. Sources and info My favorite book on this topic is "Applied Linear Statistical Models" by Kutner, Neter, Li, Chapter 14 Otherwise here are a few online sources: Plotting confidence intervals for the predicted probabilities from a logistic regression https://stackoverflow.com/questions/47414842/confidence-interval-of-probability-prediction-from-logistic-regression-statsmode Edit October 2021 - New links https://fdocuments.net/reader/full/5logreg-beamer-online https://jslsoc.sitehost.iu.edu/stata/ci_computations/xulong-prvalue-23aug2005.pdf
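In R, a minimal sketch of the endpoint-transformation interval for the model in the question (the fit object from glm above), evaluated at has_x = TRUE, has_y = TRUE:

newdat <- data.frame(has_x = TRUE, has_y = TRUE)
pr     <- predict(fit, newdat, type = "link", se.fit = TRUE)   # linear predictor x'beta and its SE
ci     <- plogis(pr$fit + c(-1, 1) * 1.96 * pr$se.fit)         # transform the endpoints to probabilities
c(estimate = plogis(pr$fit), lower = ci[1], upper = ci[2])

plogis() is the inverse logit, so the bounds stay inside (0, 1) as noted above.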
{ "source": [ "https://stats.stackexchange.com/questions/354098", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2635/" ] }
354,373
I tried some usual google search etc. but most of the answers I find are either somewhat ambiguous or language/library specific such as Python or C++ stdlib.h etc. I am looking for a language agnostic, mathematical answer, not the specifics of a library. As an example, many say that the seed is a starting point of random number generator and the same seed always produces the same random number. What does it mean? Does it mean the output number is a deterministic function of a specific seed, and the randomness comes from the value of the seed? But if that is the case, then by supplying the seed, are not we, the programmers, creating the randomness instead of letting the machine do it? Also, what does a starting point mean in this context? Is this a non-rigorous way of saying an element $x\in\mathfrak{X}$ of the domain of a map $f:\mathfrak{X}\rightarrow\mathfrak{Y}$? Or am I getting something wrong?
Most pseudo-random number generators (PRNGs) are built on algorithms involving some kind of recursive method starting from a base value that is determined by an input called the "seed". The default PRNG in most statistical software (R, Python, Stata, etc.) is the Mersenne Twister algorithm MT19937, which is set out in Matsumoto and Nishimura (1998) . This is a complicated algorithm, so it would be best to read the paper on it if you want to know how it works in detail. In this particular algorithm, there is a recurrence relation of degree $n$, and your input seed is an initial set of vectors $\mathbf{x}_0, \mathbf{x}_1, ..., \mathbf{x}_{n-1}$. The algorithm uses a linear recurrence relation that generates: $$\mathbf{x}_{n+k} = f(\mathbf{x}_k, \mathbf{x}_{k+1}, \mathbf{x}_{k+m}, r, \mathbf{A}),$$ where $1 \leqslant m \leqslant n$ and $r$ and $\mathbf{A}$ are objects that can be specified as parameters in the algorithm. Since the seed gives the initial set of vectors (and given other fixed parameters for the algorithm), the series of pseudo-random numbers generated by the algorithm is fixed. If you change the seed then you change the initial vectors, which changes the pseudo-random numbers generated by the algorithm. This is, of course, the function of the seed. Now, it is important to note that this is just one example, using the MT19937 algorithm. There are many PRNGs that can be used in statistical software, and they each involve different recursive methods, and so the seed means a different thing (in technical terms) in each of them. You can find a library of PRNGs for R in this documentation , which lists the available algorithms and the papers that describe these algorithms. The purpose of the seed is to allow the user to "lock" the pseudo-random number generator, to allow replicable analysis. Some analysts like to set the seed using a true random-number generator (TRNG) which uses hardware inputs to generate an initial seed number, and then report this as a locked number. If the seed is set and reported by the original user then an auditor can repeat the analysis and obtain the same sequence of pseudo-random numbers as the original user. If the seed is not set then the algorithm will usually use some kind of default seed (e.g., from the system clock), and it will generally not be possible to replicate the randomisation.
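A minimal R illustration of the practical point: setting the same seed reproduces the same pseudo-random sequence, while a different (or unset) seed does not.

set.seed(12345)
runif(3)          # three uniform draws
set.seed(12345)
runif(3)          # exactly the same three draws again
set.seed(54321)
runif(3)          # a different sequence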
{ "source": [ "https://stats.stackexchange.com/questions/354373", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/196283/" ] }
354,678
This is a question I found on Glassdoor : How does one generate 7 integers with equal probability using a coin that has a $\mathbb{Pr}(\text{Head}) = p\in(0,1)$? Basically, you have a coin that may or may not be fair, and this is the only random-number generating process you have, so come up with random number generator that outputs integers from 1 to 7 where the probability of getting each of these integers is 1/7. Efficiency of the data-generates process matters.
Flip the coin twice. If it lands HH or TT , ignore it and flip it twice again. Now, the coin has equal probability of coming up HT or TH . If it comes up HT , call this H1 . If it comes up TH , call this T1 . Keep obtaining H1 or T1 until you have three in a row. These three results give you a number based on the table below: H1 H1 H1 -> 1 H1 H1 T1 -> 2 H1 T1 H1 -> 3 H1 T1 T1 -> 4 T1 H1 H1 -> 5 T1 H1 T1 -> 6 T1 T1 H1 -> 7 T1 T1 T1 -> [Throw out all results so far and repeat] I argue that this would work perfectly fine, although you would have a lot of wasted throws in the process!
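A minimal R simulation sketch of the same idea (it discards the triple that maps to 8 rather than the triple shown as discarded in the table above, but the principle is identical); p is the probability of heads:

p <- 0.3
fair_bit <- function() {                  # von Neumann trick: HT and TH are equally likely
  repeat {
    flips <- runif(2) < p
    if (xor(flips[1], flips[2])) return(as.integer(flips[1]))
  }
}
draw_1_to_7 <- function() {               # three fair bits give 1..8; reject 8 and retry
  repeat {
    x <- 4 * fair_bit() + 2 * fair_bit() + fair_bit() + 1
    if (x <= 7) return(x)
  }
}
table(replicate(7e4, draw_1_to_7()))      # counts should be roughly equal across 1..7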
{ "source": [ "https://stats.stackexchange.com/questions/354678", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/181565/" ] }
354,781
Consider the following statements w.r.t the Titanic: Assumption 1: Only men and women were on the ship Assumption 2: There were a large number of men as well as women Statement 1: 90 percent of all women survived Statement 2: 90 percent of all those who survived, were women The first indicates that saving women was probably of high priority (irrespective of whether saving men was) When is the second statistic useful? Can we say that one of them is almost always more useful than the other?
As they stand, neither one of Statement 1 or 2 is very useful. If 90% of passengers were women and 90% of people survived at random, then both statements would be true. The statements need to be considered in the context of the overall composition of the passengers. And the overall chance of surviving. Suppose we had as many men as women, 100 each. Here are a few possible matrices of men (M) against women (W) and surviving (S) against dead (D):

  |  M |  W
------------
S | 90 | 90
------------
D | 10 | 10

90% of women survived. As did 90% of men. Statement 1 is true, Statement 2 is false, since half of survivors were women. This is consistent with many survivors, but no difference between genders .

  |  M |  W
------------
S | 10 | 90
------------
D | 90 | 10

90% of women survived, but only 10% of men. 90% of the survivors were women. Both statements are true. This is consistent with a difference between genders : women were more likely to survive than men.

  |  M |  W
------------
S |  1 |  9
------------
D | 99 | 91

9% of women survived, but only 1% of men. 90% of the survivors were women. Statement 1 is false, Statement 2 is true. This is again consistent with a difference between genders : women were more likely to survive than men.
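A minimal R sketch for the second table above, showing that the two statements are different conditional proportions of the same 2x2 table:

tab <- matrix(c(10, 90, 90, 10), nrow = 2, byrow = TRUE,
              dimnames = list(c("S", "D"), c("M", "W")))
prop.table(tab, margin = 2)["S", "W"]   # Statement 1: share of women who survived (0.9)
prop.table(tab, margin = 1)["S", "W"]   # Statement 2: share of survivors who are women (0.9)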
{ "source": [ "https://stats.stackexchange.com/questions/354781", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/213755/" ] }
355,390
I'd hope the title is self explanatory. In Kaggle, most winners use stacking with sometimes hundreds of base models, to squeeze a few extra % of MSE, accuracy... In general, in your experience, how important is fancy modelling such as stacking vs simply collecting more data and more features for the data?
By way of background, I have been doing forecasting store $\times$ SKU time series for retail sales for 12 years now. Tens of thousands of time series across hundreds or thousands of stores. I like saying that we have been doing Big Data since before the term became popular. I have consistently found that the single most important thing is to understand your data . If you don't understand major drivers like Easter or promotions, you are doomed. Often enough, this comes down to understanding the specific business well enough to ask the correct questions and telling known unknowns from unknown unknowns . Once you understand your data, you need to work to get clean data. I have supervised quite a number of juniors and interns, and the one thing they had never experienced in all their statistics and data science classes was how much sheer crap there can be in the data you have. Then you need to either go back to the source and try to get it to bring forth good data, or try to clean it, or even just throw some stuff away. Changing a running system to yield better data can be surprisingly hard. Once you understand your data and actually have somewhat-clean data, you can start fiddling with it. Unfortunately, by this time, I have often found myself out of time and resources. I personally am a big fan of model combination ("stacking"), at least in an abstract sense , less so of fancy feature engineering, which often crosses the line into overfitting territory - and even if your fancier model performs slightly better on average, one often finds that the really bad predictions get worse with a more complex model. This is a dealbreaker in my line of business. A single really bad forecast can pretty completely destroy the trust in the entire system, so robustness is extremely high in my list of priorities. Your mileage may vary. In my experience, yes, model combination can improve accuracy. However, the really big gains are made with the first two steps: understanding your data, and cleaning it (or getting clean data in the first place).
{ "source": [ "https://stats.stackexchange.com/questions/355390", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/184997/" ] }
355,538
From the Forecasting: Principles and Practice textbook by Rob J Hyndman and George Athanasopoulos , specifically the section on accuracy measurement : A forecast method that minimizes the MAE will lead to forecasts of the median, while minimizing the RMSE will lead to forecasts of the mean Can someone give an intuitive explanation of why minimizing the MAE leads to the forecasting the median and not the mean? And what does this means in practice? I have asked a customer: "what is more important for you to make mean forecasts more accurate or to avoid very inaccurate forecasts?". He said that to made mean forecasts more accurate have higher priority. So, in this case, should I use MAE or RMSE? Before I read this citation I believed that MAE will be better for such condition. And now I doubt.
It's useful to take a step back and forget about the forecasting aspect for a minute. Let's consider just any distribution $F$ and assume we wish to summarize it using a single number. You learn very early in your statistics classes that using the expectation of $F$ as a single number summary will minimize the expected squared error. The question now is: why does using the median of $F$ minimize the expected absolute error? For this, I often recommend "Visualizing the Median as the Minimum-Deviation Location" by Hanley et al. (2001, The American Statistician ) . They did set up a little applet along with their paper, which unfortunately probably doesn't work with modern browsers any more, but we can follow the logic in the paper. Suppose you stand in front of a bank of elevators. They may be arranged equally spaced, or some distances between elevator doors may be larger than others (e.g., some elevators may be out of order). In front of which elevator should you stand to have the minimal expected walk when one of the elevators does arrive? Note that this expected walk plays the role of the expected absolute error! Suppose you have three elevators A, B and C. If you wait in front of A, you may need to walk from A to B (if B arrives), or from A to C (if C arrives) - passing B! If you wait in front of B, you need to walk from B to A (if A arrives) or from B to C (if C arrives). If you wait in front of C, you need to walk from C to A (if A arrives) - passing B - or from C to B (if B arrives). Note that from the first and last waiting position, there is a distance - AB in the first, BC in the last position - that you need to walk in multiple cases of elevators arriving. Therefore, your best bet is to stand right in front of the middle elevator - regardless of how the three elevators are arranged. Here is Figure 1 from Hanley et al.: This generalizes easily to more than three elevators. Or to elevators with different chances of arriving first. Or indeed to countably infinitely many elevators. So we can apply this logic to all discrete distributions and then pass to the limit to arrive at continuous distributions. To double back to forecasting, you need to consider that underlying your point forecast for a particular future time bucket, there is a (usually implicit) density forecast or predictive distribution, which we summarize using a single number point forecast. The above argument shows why the median of your predictive density $\hat{F}$ is the point forecast that minimizes the expected absolute error or MAE. (To be more precise, any median may do, since it may not be uniquely defined - in the elevator example, this corresponds to having an even number of elevators.) And of course the median may be quite different than the expectation if $\hat{F}$ is asymmetric. One important example is with low-volume count-data , especially intermittent-time-series . Indeed, if you have a 50% or higher chance of zero sales, e.g., if sales are Poisson distributed with parameter $\lambda\leq \ln 2$, then you will minimize your expected absolute error by forecasting a flat zero - which is rather unintuitive, even for highly intermittent time series. I wrote a little paper on this ( Kolassa, 2016, International Journal of Forecasting ). Thus, if you suspect that your predictive distribution is (or should be) asymmetric, as in the two cases above, then if you wish to get unbiased expectation forecasts, use the rmse . 
If the distribution can be assumed symmetric (typically for high-volume series), then the median and the mean coincide, and using the mae will also guide you to unbiased forecasts - and the MAE is easier to understand. Similarly, minimizing the mape can lead to biased forecasts, even for symmetric distributions. This earlier answer of mine contains a simulated example with an asymmetrically distributed strictly positive (lognormally distributed) series can meaningfully be point forecasted using three different point forecasts, depending on whether we want to minimize the MSE, the MAE or the MAPE.
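A minimal numerical check in R with a skewed (lognormal) sample: the point forecast that minimizes the mean absolute error sits at the median, while the one that minimizes the mean squared error sits at the mean.

set.seed(1)
y   <- rlnorm(1e5)
mae <- function(m) mean(abs(y - m))
mse <- function(m) mean((y - m)^2)
optimize(mae, range(y))$minimum   # close to median(y)
optimize(mse, range(y))$minimum   # close to mean(y)
c(median = median(y), mean = mean(y))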
{ "source": [ "https://stats.stackexchange.com/questions/355538", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/74943/" ] }
355,781
In the MIT OpenCourseWare notes for 18.05 Introduction to Probability and Statistics, Spring 2014 (currently available here ), it states: The bootstrap percentile method is appealing due to its simplicity. However it depends on the bootstrap distribution of $\bar{x}^{*}$ based on a particular sample being a good approximation to the true distribution of $\bar{x}$. Rice says of the percentile method, "Although this direct equation of quantiles of the bootstrap sampling distribution with confidence limits may seem initially appealing, it’s rationale is somewhat obscure."[2] In short, don’t use the bootstrap percentile method . Use the empirical bootstrap instead (we have explained both in the hopes that you won’t confuse the empirical bootstrap for the percentile bootstrap). [2] John Rice, Mathematical Statistics and Data Analysis , 2nd edition, p. 272 After a bit of searching online, this is the only quote I've found which outright states that the percentile bootstrap should not be used. What I recall reading from the text Principles and Theory for Data Mining and Machine Learning by Clarke et al. is that the main justification for bootstrapping is the fact that $$\dfrac{1}{n}\sum_{i=1}^{n}\hat{F}_n(x) \overset{p}{\to} F(x)$$ where $\hat{F}_n$ is the empirical CDF. (I don't recall details beyond this.) Is it true that the percentile bootstrap method should not be used? If so, what alternatives are there for when $F$ isn't necessarily known (i.e., not enough information is available to do a parametric bootstrap)? Update Because clarification has been requested, the "empirical bootstrap" from these MIT notes refers to the following procedure: they compute $\delta_1 = (\hat{\theta}^{*}-\hat{\theta})_{\alpha/2}$ and $\delta_2 = (\hat{\theta}^{*}-\hat{\theta})_{1-\alpha/2}$ with $\hat{\theta}^{*}$ the bootstrapped estimates of $\theta$ and $\hat{\theta}$ the full-sample estimate of $\theta$, and the resulting estimated confidence interval would be $[\hat{\theta}-\delta_2, \hat{\theta} - \delta_1]$. In essence, the main idea is this: empirical bootstrapping estimates an amount proportional to the difference between the point estimate and the actual parameter, i.e., $\hat{\theta}-\theta$, and uses this difference to come up with the lower and upper CI bounds. The "percentile bootstrap" refers to the following: use $[\hat{\theta}^*_{\alpha/2}, \hat{\theta}^*_{1-\alpha/2}]$ as the confidence interval for $\theta$. In this situation, we use bootstrapping to compute estimates of the parameter of interest and take the percentiles of these estimates for the confidence interval.
There are some difficulties that are common to all nonparametric bootstrapping estimates of confidence intervals (CI), some that are more of an issue with both the "empirical" (called "basic" in the boot.ci() function of the R boot package and in Ref. 1 ) and the "percentile" CI estimates (as described in Ref. 2 ), and some that can be exacerbated with percentile CIs. TL;DR : In some cases percentile bootstrap CI estimates might work adequately, but if certain assumptions don't hold then the percentile CI might be the worst choice, with the empirical/basic bootstrap the next worst. Other bootstrap CI estimates can be more reliable, with better coverage. All can be problematic. Looking at diagnostic plots, as always, helps avoid potential errors incurred by just accepting the output of a software routine. Bootstrap setup Generally following the terminology and arguments of Ref. 1 , we have a sample of data $y_1, ..., y_n$ drawn from independent and identically distributed random variables $Y_i$ sharing a cumulative distribution function $F$. The empirical distribution function (EDF) constructed from the data sample is $\hat F$. We are interested in a characteristic $\theta$ of the population, estimated by a statistic $T$ whose value in the sample is $t$. We would like to know how well $T$ estimates $\theta$, for example, the distribution of $(T - \theta)$. Nonparametric bootstrap uses sampling from the EDF $\hat F$ to mimic sampling from $F$, taking $R$ samples each of size $n$ with replacement from the $y_i$. Values calculated from the bootstrap samples are denoted with "*". For example, the statistic $T$ calculated on bootstrap sample j provides a value $T_j^*$. Empirical/basic versus percentile bootstrap CIs The empirical/basic bootstrap uses the distribution of $(T^*-t)$ among the $R$ bootstrap samples from $\hat F$ to estimate the distribution of $(T-\theta)$ within the population described by $F$ itself. Its CI estimates are thus based on the distribution of $(T^*-t)$, where $t$ is the value of the statistic in the original sample. This approach is based on the fundamental principle of bootstrapping ( Ref. 3 ): The population is to the sample as the sample is to the bootstrap samples. The percentile bootstrap instead uses quantiles of the $T_j^*$ values themselves to determine the CI. These estimates can be quite different if there is skew or bias in the distribution of $(T-\theta)$. Say that there is an observed bias $B$ such that: $$\bar T^*=t+B,$$ where $\bar T^*$ is the mean of the $T_j^*$. For concreteness, say that the 5th and 95th percentiles of the $T_j^*$ are expressed as $\bar T^*-\delta_1$ and $\bar T^*+\delta_2$, where $\bar T^*$ is the mean over the bootstrap samples and $\delta_1,\delta_2$ are each positive and potentially different to allow for skew. The 5th and 95th CI percentile-based estimates would directly be given respectively by: $$\bar T^*-\delta_1=t+B-\delta_1; \bar T^*+\delta_2=t+B+\delta_2.$$ The 5th and 95th percentile CI estimates by the empirical/basic bootstrap method would be respectively ( Ref. 1 , eq. 5.6, page 194): $$2t-(\bar T^*+\delta_2) = t-B-\delta_2; 2t-(\bar T^*-\delta_1) = t-B+\delta_1.$$ So percentile-based CIs both get the bias wrong and flip the directions of the potentially asymmetric positions of the confidence limits around a doubly-biased center . The percentile CIs from bootstrapping in such a case do not represent the distribution of $(T-\theta)$. 
This behavior is nicely illustrated on this page , for bootstrapping a statistic so negatively biased that the original sample estimate is below the 95% CIs based on the empirical/basic method (which directly includes appropriate bias correction). The 95% CIs based on the percentile method, arranged around a doubly-negatively biased center, are actually both below even the negatively biased point estimate from the original sample! Should the percentile bootstrap never be used? That might be an overstatement or an understatement, depending on your perspective. If you can document minimal bias and skew, for example by visualizing the distribution of $(T^*-t)$ with histograms or density plots, the percentile bootstrap should provide essentially the same CI as the empirical/basic CI. These are probably both better than the simple normal approximation to the CI. Neither approach, however, provides the accuracy in coverage that can be provided by other bootstrap approaches. Efron from the beginning recognized potential limitations of percentile CIs but said: "Mostly we will be content to let the varying degrees of success of the examples speak for themselves." ( Ref. 2 , page 3) Subsequent work, summarized for example by DiCiccio and Efron ( Ref. 4 ), developed methods that "improve by an order of magnitude upon the accuracy of the standard intervals" provided by the empirical/basic or percentile methods. Thus one might argue that neither the empirical/basic nor the percentile methods should be used, if you care about accuracy of the intervals. In extreme cases, for example sampling directly from a lognormal distribution without transformation, no bootstrapped CI estimates might be reliable, as Frank Harrell has noted . What limits the reliability of these and other bootstrapped CIs? Several issues can tend to make bootstrapped CIs unreliable. Some apply to all approaches, others can be alleviated by approaches other than the empirical/basic or percentile methods. The first, general, issue is how well the empirical distribution $\hat F$ represents the population distribution $F$. If it doesn't, then no bootstrapping method will be reliable. In particular, bootstrapping to determine anything close to extreme values of a distribution can be unreliable. This issue is discussed elsewhere on this site, for example here and here . The few, discrete, values available in the tails of $\hat F$ for any particular sample might not represent the tails of a continuous $F$ very well. An extreme but illustrative case is trying to use bootstrapping to estimate the maximum order statistic of a random sample from a uniform $\;\mathcal{U}[0,\theta]$ distribution, as explained nicely here . Note that bootstrapped 95% or 99% CI are themselves at tails of a distribution and thus could suffer from such a problem, particularly with small sample sizes. Second, there is no assurance that sampling of any quantity from $\hat F$ will have the same distribution as sampling it from $F$. Yet that assumption underlies the fundamental principle of bootstrapping. Quantities with that desirable property are called pivotal . As AdamO explains : This means that if the underlying parameter changes, the shape of the distribution is only shifted by a constant, and the scale does not necessarily change. This is a strong assumption! For example, if there is bias it's important to know that sampling from $F$ around $\theta$ is the same as sampling from $\hat F$ around $t$. 
And this is a particular problem in nonparametric sampling; as Ref. 1 puts it on page 33: In nonparametric problems the situation is more complicated. It is now unlikely (but not strictly impossible) that any quantity can be exactly pivotal. So the best that's typically possible is an approximation. This problem, however, can often be addressed adequately. It's possible to estimate how closely a sampled quantity is to pivotal, for example with pivot plots as recommended by Canty et al . These can display how distributions of bootstrapped estimates $(T^*-t)$ vary with $t$, or how well a transformation $h$ provides a quantity $(h(T^*)-h(t))$ that is pivotal. Methods for improved bootstrapped CIs can try to find a transformation $h$ such that $(h(T^*)-h(t))$ is closer to pivotal for estimating CIs in the transformed scale, then transform back to the original scale. The boot.ci() function provides studentized bootstrap CIs (called "bootstrap- t " by DiCiccio and Efron ) and $BC_a$ CIs (bias corrected and accelerated, where the "acceleration" deals with skew) that are "second-order accurate" in that the difference between the desired and achieved coverage $\alpha$ (e.g., 95% CI) is on the order of $n^{-1}$, versus only first-order accurate (order of $n^{-0.5}$) for the empirical/basic and percentile methods ( Ref 1 , pp. 212-3; Ref. 4 ). These methods, however, require keeping track of the variances within each of the bootstrapped samples, not just the individual values of the $T_j^*$ used by those simpler methods. In extreme cases, one might need to resort to bootstrapping within the bootstrapped samples themselves to provide adequate adjustment of confidence intervals. This "Double Bootstrap" is described in Section 5.6 of Ref. 1 , with other chapters in that book suggesting ways to minimize its extreme computational demands. Davison, A. C. and Hinkley, D. V. Bootstrap Methods and their Application, Cambridge University Press, 1997 . Efron, B. Bootstrap Methods: Another look at the jacknife, Ann. Statist. 7: 1-26, 1979 . Fox, J. and Weisberg, S. Bootstrapping regression models in R. An Appendix to An R Companion to Applied Regression, Second Edition (Sage, 2011). Revision as of 10 October 2017 . DiCiccio, T. J. and Efron, B. Bootstrap confidence intervals. Stat. Sci. 11: 189-228, 1996 . Canty, A. J., Davison, A. C., Hinkley, D. V., and Ventura, V. Bootstrap diagnostics and remedies. Can. J. Stat. 34: 5-27, 2006 .
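For completeness, a minimal sketch with the R boot package, comparing the basic ("empirical"), percentile and BCa intervals for a skewed statistic (the mean of a lognormal sample); with real data you would also inspect the bootstrap distribution, as recommended above:

library(boot)
set.seed(1)
y  <- rlnorm(200)
bt <- boot(y, statistic = function(d, i) mean(d[i]), R = 2000)
boot.ci(bt, type = c("basic", "perc", "bca"))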
{ "source": [ "https://stats.stackexchange.com/questions/355781", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/46427/" ] }
357,255
Two random variables A and B are statistically independent. That means that in the DAG of the process: $(A {\perp\!\!\!\perp} B)$ and of course $P(A|B)=P(A)$. But does that also mean that there's no front-door from B to A? Because then we should get $P(A|do(B))=P(A)$. So if that's the case, does statistical independence automatically mean lack of causation?
So if that's the case, does statistical independence automatically mean lack of causation? No, and here's a simple counter example with a multivariate normal,

set.seed(100)
n <- 1e6
a <- 0.2
b <- 0.1
c <- 0.5
z <- rnorm(n)
x <- a*z + sqrt(1-a^2)*rnorm(n)
y <- b*x - c*z + sqrt(1- b^2 - c^2 +2*a*b*c)*rnorm(n)
cor(x, y)

With corresponding graph,

Here we have that $x$ and $y$ are marginally independent (in the multivariate normal case, zero correlation implies independence). This happens because the backdoor path via $z$ exactly cancels out the direct path from $x$ to $y$, that is, $cov(x,y) = b - a*c = 0.1 - 0.1 = 0$. Thus $E[Y|X =x] =E[Y] =0$. Yet, $x$ directly causes $y$, and we have that $E[Y|do(X= x)] = bx$, which is different from $E[Y]=0$. Associations, interventions and counterfactuals I think it's important to make some clarifications here regarding associations, interventions and counterfactuals. Causal models entail statements about the behavior of the system: (i) under passive observations, (ii) under interventions, as well as (iii) counterfactuals. And independence on one level does not necessarily translate to the other. As the example above shows, we can have no association between $X$ and $Y$, that is, $P(Y|X) = P(Y)$, and it can still be the case that manipulations on $X$ change the distribution of $Y$, that is, $P(Y|do(x)) \neq P(Y)$. Now, we can go one step further. We can have causal models where intervening on $X$ does not change the population distribution of $Y$, but that does not mean lack of counterfactual causation! That is, even though $P(Y|do(x)) = P(Y)$, for every individual their outcome $Y$ would have been different had you changed their $X$. This is precisely the case described by user20160, as well as in my previous answer here. These three levels make a hierarchy of causal inference tasks , in terms of the information needed to answer queries on each of them.
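A small follow-up check on the simulation above: the marginal association of x with y is essentially zero, while adjusting for the confounder z (which blocks the backdoor path) recovers the structural effect b = 0.1.

coef(lm(y ~ x))        # slope on x is approximately 0: no marginal association
coef(lm(y ~ x + z))    # slope on x is approximately 0.1: the direct causal effect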
{ "source": [ "https://stats.stackexchange.com/questions/357255", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/42514/" ] }
357,358
I have a question about the treatment of data in a meta-analysis. It is quite common in this particular field for researchers to essentially use data twice for analysis: once as a normal bivariate correlation between variable A and B and then again in a multiple-regression context where they model the relationship between A and B as a quadratic function, while also including a couple of other predictors. Theoretically it makes more sense to treat the relationship as non-linear, but some researchers treat the relationship as linear, while others treat it as non-linear, while some even just report both, as mentioned above. What is the most appropriate way of dealing with this situation? Can I just aggregate the unstandardized coefficients, regardless of how the researchers treated the relationship?
{ "source": [ "https://stats.stackexchange.com/questions/357358", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/49037/" ] }
357,401
I have trained a caret model using bootstrapping and the default metric (accuracy, since I'm doing logistic regression). Now I'd like to know other performance parameters for the trained model: sensitivity, specificity, ROC etc. How do I do that?
{ "source": [ "https://stats.stackexchange.com/questions/357401", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/169343/" ] }
357,466
TL;DR See title. Motivation I am hoping for a canonical answer along the lines of "(1) No, (2) Not applicable, because (1)", which we can use to close many wrong questions about unbalanced datasets and oversampling. I would be quite as happy to be proven wrong in my preconceptions. Fabulous Bounties await the intrepid answerer. My argument I am baffled by the many questions we get in the unbalanced-classes tag. Unbalanced classes seem to be self-evidently bad . And oversampling the minority class(es) is quite as self-evidently seen as helping to address the self-evident problems. Many questions that carry both tags proceed to ask how to perform oversampling in some specific situation. I understand neither what problem unbalanced classes pose, nor how oversampling is supposed to address these problems. In my opinion, unbalanced data do not pose a problem at all. One should model class membership probabilities, and these may be small. As long as they are correct, there is no problem. One should, of course, not use accuracy as a KPI to be maximized in a classification problem. Or calculate classification thresholds . Instead, one should assess the quality of the entire predictive distribution using proper scoring-rules . Tetlock's Superforecasting serves as a wonderful and very readable introduction to predicting unbalanced classes, even if this is nowhere explicitly mentioned in the book. Related The discussion in the comments has brought up a number of related threads. What problem does oversampling, undersampling, and SMOTE solve? IMO, this question does not have a satisfactory answer. (Per my suspicion, this may be because there is no problem .) When is unbalanced data really a problem in Machine Learning? The consensus appears to be "it isn't". I'll probably vote to close this question as a duplicate of that one. IcannotFixThis' answer , seems to presume (1) that the KPI we attempt to maximize is accuracy, and (2) that accuracy is an appropriate KPI for classification model evaluation. It isn't. This may be one key to the entire discussion. AdamO's answer focuses on the low precision of estimates from unbalanced factors. This is of course a valid concern and probably the answer to my titular question. But oversampling does not help here, any more than we can get more precise estimates in any run-of-the-mill regression by simply duplicating each observation ten times. What is the root cause of the class imbalance problem? Some of the comments here echo my suspicion that there is no problem . The single answer again implicitly presumes that we use accuracy as a KPI, which I find unsatisfactory . Are there Imbalanced learning problems where re-balancing/re-weighting demonstrably improves accuracy ? is related, but presupposes accuracy as an evaluation measure. (Which I argue is not a good choice.) Summary The threads above can apparently be summarized as follows. Rare classes (both in the outcome and in predictors) are a problem, because parameter estimates and predictions have high variance/low precision. This cannot be addressed through oversampling. (In the sense that it is always better to get more data that is representative of the population, and selective sampling will induce bias per my and others' simulations.) Rare classes are a "problem" if we assess our model by accuracy. But accuracy is not a good measure for assessing classification models . 
(I did think about including accuracy in my simulations, but then I would have needed to set a classification threshold, which is a closely related wrong question, and the question is long enough as it is.)
An example
Let's simulate for an illustration. Specifically, we will simulate ten predictors, only a single one of which actually has an impact on a rare outcome. We will look at two algorithms that can be used for probabilistic classification: logistic regression and random forests. In each case, we will apply the model either to the full dataset, or to an oversampled balanced one, which contains all the instances of the rare class and the same number of samples from the majority class (so the oversampled dataset is smaller than the full dataset). For the logistic regression, we will assess whether each model actually recovers the original coefficients used to generate the data. In addition, for both methods, we will calculate probabilistic class membership predictions and assess these on holdout data generated using the same data generating process as the original training data. Whether the predictions actually match the outcomes will be assessed using the Brier score, one of the most common proper scoring rules. We will run 100 simulations. (Cranking this up only makes the beanplots more cramped and makes the simulation run longer than one cup of coffee.) Each simulation contains $n=10,000$ samples. The predictors form a $10,000\times 10$ matrix with entries uniformly distributed in $[0,1]$. Only the first predictor actually has an impact; the true DGP is $$ \text{logit}(p_i) = -7+5x_{i1}. $$ This makes for simulated incidences for the minority TRUE class between 2 and 3%: Let's run the simulations. Feeding the full dataset into a logistic regression, we (unsurprisingly) get unbiased parameter estimates (the true parameter values are indicated by the red diamonds): However, if we feed the oversampled dataset to the logistic regression, the intercept parameter is heavily biased: Let's compare the Brier scores between models fitted to the "raw" and the oversampled datasets, for both the logistic regression and the Random Forest. Remember that smaller is better: In each case, the predictive distributions derived from the full dataset are much better than those derived from an oversampled one. I conclude that unbalanced classes are not a problem, and that oversampling does not alleviate this non-problem, but gratuitously introduces bias and worse predictions. Where is my error?
A caveat
I'll happily concede that oversampling has one application: if we are dealing with a rare outcome, and assessing the outcome is easy or cheap, but assessing the predictors is hard or expensive. A prime example would be genome-wide association studies (GWAS) of rare diseases. Testing whether one suffers from a particular disease can be far easier than genotyping their blood. (I have been involved with a few GWAS of PTSD.) If budgets are limited, it may make sense to screen based on the outcome and ensure that there are "enough" of the rarer cases in the sample. However, then one needs to balance the monetary savings against the losses illustrated above - and my point is that the questions on unbalanced datasets at CV do not mention such a tradeoff, but treat unbalanced classes as a self-evident evil, completely apart from any costs of sample collection.
R code

library(randomForest)
library(beanplot)

nn_train <- nn_test <- 1e4
n_sims <- 1e2
true_coefficients <- c(-7, 5, rep(0, 9))

incidence_train <- rep(NA, n_sims)
model_logistic_coefficients <- model_logistic_oversampled_coefficients <-
  matrix(NA, nrow=n_sims, ncol=length(true_coefficients))
brier_score_logistic <- brier_score_logistic_oversampled <-
  brier_score_randomForest <- brier_score_randomForest_oversampled <- rep(NA, n_sims)

pb <- winProgressBar(max=n_sims)
for ( ii in 1:n_sims ) {
    setWinProgressBar(pb, ii, paste(ii, "of", n_sims))
    set.seed(ii)
    while ( TRUE ) {  # make sure we even have the minority class
        predictors_train <- matrix(
          runif(nn_train*(length(true_coefficients) - 1)), nrow=nn_train)
        logit_train <- cbind(1, predictors_train)%*%true_coefficients
        probability_train <- 1/(1+exp(-logit_train))
        outcome_train <- factor(runif(nn_train) <= probability_train)
        if ( sum(incidence_train[ii] <- sum(outcome_train==TRUE))>0 ) break
    }
    dataset_train <- data.frame(outcome=outcome_train, predictors_train)
    index <- c(which(outcome_train==TRUE),
               sample(which(outcome_train==FALSE), sum(outcome_train==TRUE)))

    model_logistic <- glm(outcome~., dataset_train, family="binomial")
    model_logistic_oversampled <- glm(outcome~., dataset_train[index, ], family="binomial")
    model_logistic_coefficients[ii, ] <- coefficients(model_logistic)
    model_logistic_oversampled_coefficients[ii, ] <- coefficients(model_logistic_oversampled)

    model_randomForest <- randomForest(outcome~., dataset_train)
    model_randomForest_oversampled <- randomForest(outcome~., dataset_train, subset=index)

    predictors_test <- matrix(runif(nn_test * (length(true_coefficients) - 1)), nrow=nn_test)
    logit_test <- cbind(1, predictors_test)%*%true_coefficients
    probability_test <- 1/(1+exp(-logit_test))
    outcome_test <- factor(runif(nn_test)<=probability_test)
    dataset_test <- data.frame(outcome=outcome_test, predictors_test)

    prediction_logistic <- predict(model_logistic, dataset_test, type="response")
    brier_score_logistic[ii] <- mean((prediction_logistic - (outcome_test==TRUE))^2)
    prediction_logistic_oversampled <- predict(model_logistic_oversampled, dataset_test, type="response")
    brier_score_logistic_oversampled[ii] <- mean((prediction_logistic_oversampled - (outcome_test==TRUE))^2)
    prediction_randomForest <- predict(model_randomForest, dataset_test, type="prob")
    brier_score_randomForest[ii] <- mean((prediction_randomForest[,2]-(outcome_test==TRUE))^2)
    prediction_randomForest_oversampled <- predict(model_randomForest_oversampled, dataset_test, type="prob")
    brier_score_randomForest_oversampled[ii] <- mean((prediction_randomForest_oversampled[, 2] - (outcome_test==TRUE))^2)
}
close(pb)

hist(incidence_train, breaks=seq(min(incidence_train)-.5, max(incidence_train) + .5),
     col="lightgray",
     main=paste("Minority class incidence out of", nn_train, "training samples"), xlab="")

ylim <- range(c(model_logistic_coefficients, model_logistic_oversampled_coefficients))
beanplot(data.frame(model_logistic_coefficients),
         what=c(0,1,0,0), col="lightgray", xaxt="n", ylim=ylim,
         main="Logistic regression: estimated coefficients")
axis(1, at=seq_along(true_coefficients),
     c("Intercept", paste("Predictor", 1:(length(true_coefficients) - 1))), las=3)
points(true_coefficients, pch=23, bg="red")

beanplot(data.frame(model_logistic_oversampled_coefficients),
         what=c(0, 1, 0, 0), col="lightgray", xaxt="n", ylim=ylim,
         main="Logistic regression (oversampled): estimated coefficients")
axis(1, at=seq_along(true_coefficients),
     c("Intercept", paste("Predictor", 1:(length(true_coefficients) - 1))), las=3)
points(true_coefficients, pch=23, bg="red")

beanplot(data.frame(Raw=brier_score_logistic, Oversampled=brier_score_logistic_oversampled),
         what=c(0,1,0,0), col="lightgray", main="Logistic regression: Brier scores")
beanplot(data.frame(Raw=brier_score_randomForest, Oversampled=brier_score_randomForest_oversampled),
         what=c(0,1,0,0), col="lightgray", main="Random Forest: Brier scores")
I'd like to start by seconding a statement in the question: ... my point is that the questions on unbalanced datasets at CV do not mention such a tradeoff, but treat unbalanced classes as a self-evident evil, completely apart from any costs of sample collection. I also have the same concern; my questions here and here are intended to invite counter-evidence that it is a "self-evident evil", and the lack of answers (even with a bounty) suggests it isn't. A lot of blog posts and academic papers don't make this clear either. Classifiers can have a problem with imbalanced datasets, but only where the dataset is very small, so my answer is concerned with exceptional cases, and does not justify resampling the dataset in general. There is a class imbalance problem, but it is not caused by the imbalance per se, but because there are too few examples of the minority class to adequately describe its statistical distribution. As mentioned in the question, this means that the parameter estimates can have high variance, which is true, but that can give rise to a bias in favour of the majority class (rather than affecting both classes equally). In the case of logistic regression, this is discussed by King and Zeng (Gary King and Langche Zeng, 2001, "Logistic Regression in Rare Events Data," Political Analysis, 9, pp. 137–163, https://j.mp/2oSEnmf). [In my experiments I have found that sometimes there can be a bias in favour of the minority class, but that is caused by wild over-fitting where the class-overlap disappears due to random sampling, so that doesn't really count, and (Bayesian) regularisation ought to fix that.] The good thing is that MLE is asymptotically unbiased, so we can expect this bias against the minority class to go away as the overall size of the dataset increases, regardless of the imbalance. As this is an estimation problem, anything that makes estimation more difficult (e.g. high dimensionality) seems likely to make the class imbalance problem worse. Note that probabilistic classifiers (such as logistic regression) and proper scoring rules will not solve this problem, as "popular statistical procedures, such as logistic regression, can sharply underestimate the probability of rare events" (King and Zeng, 2001). This means that your probability estimates will not be well calibrated, so you will have to do things like adjust the threshold (which is equivalent to re-sampling or re-weighting the data). So if we look at a logistic regression model with 10,000 samples, we should not expect to see an imbalance problem, as adding more data tends to fix most estimation problems. So an imbalance might be problematic if you have an extreme imbalance and the dataset is small (and/or high dimensional etc.), but in that case it may be difficult to do much about it (as you don't have enough data to estimate how big a correction to the sampling is needed to correct the bias). If you have lots of data, the only reason to resample is because operational class frequencies are different to those in the training set, or there are different misclassification costs etc. (if either are unknown or variable, you really ought to use a probabilistic classifier). This is mostly a stub; I hope to be able to add more to it later.
{ "source": [ "https://stats.stackexchange.com/questions/357466", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/1352/" ] }
357,745
I mean some of those variables are strongly correlated among themselves. How / why / in what context do we define them as independent variables?
If we pull back from today's emphasis on machine learning and recall how much of statistical analysis was developed for controlled experimental studies, the phrase "independent variables" makes a good deal of sense. In controlled experimental studies, the choices of a drug and its concentrations, or the choices of a fertilizer and its amounts per acre, are made independently by the investigator. The interest is in how a response variable of interest (e.g., blood pressure, crop yield) depends on these experimental manipulations. Ideally, the characteristics of the independent variables are tightly specified, with essentially no errors in knowing their values. Then standard linear regression, for example, models the differences among values of dependent variables in terms of the values of the independent variables plus residual errors. The same mathematical formalism used for regression in the context of controlled experimental studies also can be applied to analysis of observed data sets with little to no experimental manipulation, so it's perhaps not surprising that the phrase "independent variables" has carried over to such types of studies. But, as others on this page note, that's probably an unfortunate choice, with "predictors" or "features" more appropriate in such contexts.
{ "source": [ "https://stats.stackexchange.com/questions/357745", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/213117/" ] }
357,963
Both the cross-entropy and the KL divergence are tools to measure the distance between two probability distributions, but what is the difference between them? $$ H(P,Q) = -\sum_x P(x)\log Q(x) $$ $$ D_{KL}(P \parallel Q) = \sum_{x} P(x)\log {\frac{P(x)}{Q(x)}} $$ Moreover, it turns out that the minimization of the KL divergence is equivalent to the minimization of the cross-entropy. I want to understand them intuitively.
You will need some conditions to claim the equivalence between minimizing cross entropy and minimizing KL divergence. I will put your question in the context of classification problems using cross entropy as the loss function. Let us first recall that entropy is used to measure the uncertainty of a system, which is defined as \begin{equation} S(v)=-\sum_ip(v_i)\log p(v_i)\label{eq:entropy}, \end{equation} for $p(v_i)$ as the probabilities of different states $v_i$ of the system. From an information theory point of view, $S(v)$ is the amount of information needed to remove the uncertainty. For instance, the event $I$, "I will die within 200 years", is almost certain (the word almost is only there because we may solve the aging problem); therefore it has low uncertainty and requires only the information that the aging problem cannot be solved to make it certain. However, the event $II$, "I will die within 50 years", is more uncertain than event $I$, thus it needs more information to remove the uncertainty. Here entropy can be used to quantify the uncertainty of the distribution "When will I die?", which can be regarded as the expectation of the uncertainties of individual events like $I$ and $II$. Now look at the definition of KL divergence between distributions A and B \begin{equation} D_{KL}(A\parallel B) = \sum_i\left[p_A(v_i)\log p_A(v_i) - p_A(v_i)\log p_B(v_i)\right]\label{eq:kld}, \end{equation} where the first term on the right-hand side is the negative of the entropy of distribution A, and the second term can be interpreted as the expectation of $-\log p_B$ under distribution A. So $D_{KL}$ describes how different B is from A from the perspective of A. It's worth noting that $A$ usually stands for the data, i.e. the measured distribution, and $B$ is the theoretical or hypothetical distribution. That means you always start from what you observed. To relate cross entropy to entropy and KL divergence, we formalize the cross entropy in terms of distributions $A$ and $B$ as \begin{equation} H(A, B) = -\sum_ip_A(v_i)\log p_B(v_i)\label{eq:crossentropy}. \end{equation} From the definitions, we can easily see \begin{equation} H(A, B) = D_{KL}(A\parallel B)+S_A\label{eq:entropyrelation}. \end{equation} If $S_A$ is a constant, then minimizing $H(A, B)$ is equivalent to minimizing $D_{KL}(A\parallel B)$. A further question follows naturally as to how the entropy can be a constant. In a machine learning task, we start with a dataset (denoted as $P(\mathcal D)$) which represents the problem to be solved, and the learning purpose is to make the model estimated distribution (denoted as $P(model)$) as close as possible to the true distribution of the problem (denoted as $P(truth)$). $P(truth)$ is unknown and represented by $P(\mathcal D)$. Therefore in an ideal world, we expect \begin{equation} P(model)\approx P(\mathcal D) \approx P(truth) \end{equation} and minimize $D_{KL}(P(\mathcal D)\parallel P(model))$. And luckily, in practice $\mathcal D$ is given, which means its entropy $S(\mathcal D)$ is fixed as a constant.
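To make the relation $H(A, B) = D_{KL}(A\parallel B)+S_A$ concrete, here is a minimal numerical check (a NumPy sketch; the two discrete distributions are arbitrary made-up examples, not anything from the question):

import numpy as np

# Two discrete distributions on the same support (made-up example values)
p_a = np.array([0.1, 0.2, 0.3, 0.4])      # A: the "measured"/data distribution
p_b = np.array([0.25, 0.25, 0.25, 0.25])  # B: the model/hypothetical distribution

entropy_a     = -np.sum(p_a * np.log(p_a))        # S_A
cross_entropy = -np.sum(p_a * np.log(p_b))        # H(A, B)
kl_divergence =  np.sum(p_a * np.log(p_a / p_b))  # D_KL(A || B)

# H(A, B) equals D_KL(A || B) + S_A up to floating-point error
print(cross_entropy, kl_divergence + entropy_a)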
{ "source": [ "https://stats.stackexchange.com/questions/357963", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/209729/" ] }
359,992
Let's say we have the following dataframe:

TY_MAX
141    1.004622
142    1.004645
143    1.004660
144    1.004672
145    1.004773
146    1.004820
147    1.004814
148    1.004807
149    1.004773
150    1.004820
151    1.004814
152    1.004834
153    1.005117
154    1.005023
155    1.004928
156    1.004834
157    1.004827
158    1.005023
159    1.005248
160    1.005355

25th: 1.0031185409705132
50th: 1.004634349800723
75th: 1.0046683578907745
Calculated 50th: 1.003893449430644

I am a bit confused here. If we get the 75th percentile, 75% of the data should be below that percentile. And if we get the 25th percentile, 25% of the data should be below it. Now I am thinking that 50% of the data should be between the 25th and the 75th, but the actual 50th percentile gives me a different value than the midpoint I calculated. Fair enough, that means 50% of the data should be below this value. But my question is: is my approach correct? EDIT: And also, can we say 98% of the data will be between the 1st and 99th percentiles?
Yes. 75% of your data are below the 75th percentile. 25% of your data are below the 25th percentile. Therefore, 50% (=75%-25%) of your data are between the two, i.e., between the 25th and the 75th percentile. Completely analogously, 98% of your data are between the 1st and the 99th percentile. And the bottom half of your data, again 50%, are below the 50th percentile. These numbers may not be completely correct, especially if you have low numbers of data. Note also that there are different conventions on how quantiles and percentiles are actually computed.
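If it helps to check this numerically, here is a small NumPy sketch (the example data are my own, not the TY_MAX series from the question; recent NumPy versions also expose the different percentile conventions via a method argument):

import numpy as np

data = np.random.default_rng(0).normal(size=10_000)  # arbitrary example data

p25, p50, p75 = np.percentile(data, [25, 50, 75])
print(np.mean((data >= p25) & (data <= p75)))   # close to 0.50

p01, p99 = np.percentile(data, [1, 99])
print(np.mean((data >= p01) & (data <= p99)))   # close to 0.98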
{ "source": [ "https://stats.stackexchange.com/questions/359992", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/214651/" ] }
361,018
I am using "add" and "concatenate" as it is defined in keras. Basically, from my understanding , add will sum the inputs (which are the layers, in essence tensors). So if the first layer had a particular weight as 0.4 and another layer with the same exact shape had the corresponding weight being 0.5 , then after the add the new weight becomes 0.9 . However, with concatenate, let's say the first layer has dimensions 64x128x128 and the second layer had dimensions 32x128x128 , then after concatenate, the new dimensions are 96x128128 (assuming you pass in the second layer as the first input into concatenate). Assuming my above intuition is true, when would I use one over the other? Conceptually, add seems a sharing of information that potentially results in information distortion while concatenate is a sharing of information in the literal sense.
Adding is nice if you want to interpret one of the inputs as a residual "correction" or "delta" to the other input. For example, the residual connections in ResNet are often interpreted as successively refining the feature maps. Concatenating may be more natural if the two inputs aren't very closely related. However, the difference is smaller than you may think. Note that $W[x,y] = W_1x + W_2y$ where $[\ ]$ denotes concat and $W$ is split horizontally into $W_1$ and $W_2$. Compare this to $W(x+y) = Wx + Wy$. So you can interpret adding as a form of concatenation where the two halves of the weight matrix are constrained to $W_1 = W_2$.
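A quick numerical check of the identity above (a NumPy sketch with arbitrary made-up sizes, independent of any particular Keras model):

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=64)         # output of branch 1
y = rng.normal(size=64)         # output of branch 2 (same shape, as add requires)
W = rng.normal(size=(10, 128))  # weights applied to the concatenation [x, y]
W1, W2 = W[:, :64], W[:, 64:]   # split W horizontally

print(np.allclose(W @ np.concatenate([x, y]), W1 @ x + W2 @ y))  # True
# Adding corresponds to the constrained case W1 == W2:
print(np.allclose(W1 @ (x + y), W1 @ x + W1 @ y))                # True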
{ "source": [ "https://stats.stackexchange.com/questions/361018", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/133742/" ] }
363,144
http://www.chioka.in/differences-between-l1-and-l2-as-loss-function-and-regularization/ If you look at the top of this post, the writer mentions that L2 norm has a unique solution and L1 norm has possibly many solutions. I understand this in terms of regularization, but not in terms of using L1 norm or L2 norm in the loss function. If you look at graphs of functions of scalar x (x^2 and |x|), you can easily see both have one unique solution.
Let's consider a one-dimensional problem for the simplest possible exposition. (Higher dimensional cases have similar properties.) While both $|x-\mu|$ and $(x-\mu)^2$ each have a unique minimum, $\sum_i |x_i-\mu|$ (a sum of absolute value functions with different x-offsets) often doesn't. Consider $x_1=1$ and $x_2=3$: (NB in spite of the label on the x-axis, this is really a function of $\mu$; I should have modified the label but I'll just leave it as is) In higher dimensions, you can get regions of constant minimum with the $L_1$-norm. There's an example in the case of fitting lines here. Sums of quadratics are still quadratic, so $\sum_i (x_i-\mu)^2 = n(\bar{x}-\mu)^2+k(\mathbf{x})$ will have a unique solution. In higher dimensions (multiple regression say) the quadratic problem may not automatically have a unique minimum -- you may have multicollinearity leading to a lower-dimensional ridge in the negative of the loss in the parameter space; that's a somewhat different issue than the one presented here. A warning. The page you link to claims that $L_1$-norm regression is robust. I'd have to say I don't completely agree. It's robust against large deviations in the y-direction, as long as they aren't influential points (discrepant in x-space). It can be arbitrarily badly screwed up by even a single influential outlier. There's an example here. Since (outside some specific circumstances) you don't usually have any such guarantee of no highly influential observations, I wouldn't call L1-regression robust.

R code for plot:

fi <- function(x,i=0) abs(x-i)
f <- function(x) fi(x,1)+fi(x,3)
plot(f,-1,5,ylim=c(0,6),col="blue",lwd=2)
curve(fi(x,1),-1,5,lty=3,col="dimgrey",add=TRUE)
curve(fi(x,3),-1,5,lty=3,col="dimgrey",add=TRUE)
{ "source": [ "https://stats.stackexchange.com/questions/363144", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/184213/" ] }
363,791
I seem to be missing some vital piece of information. I am aware that the coefficients of logistic regression are in log(odds), called the logit scale. Therefore to interpret them, exp(coef) is taken and yields OR, the odds ratio. If $\beta_1 = 0.012$ the interpretation is as follows: for a one-unit increase in the covariate $X_1$, the log odds ratio is 0.012 - which does not provide meaningful information as it is. Exponentiation yields that for a one-unit increase in the covariate $X_1$, the odds ratio is 1.012 ($\exp(0.012)=1.012$), or $Y=1$ is 1.012 times more likely than $Y=0$. But I would like to express the coefficient as a percentage. According to Gelman and Hill in Data Analysis Using Regression and Multilevel/Hierarchical Models, p. 111: "The coefficients $\beta$ can be exponentiated and treated as multiplicative effects." Such that if $\beta_1=0.012$, then "the expected multiplicative increase is $\exp(0.012)=1.012$, or a 1.2% positive difference ..." However, according to my scripts $$\text{ODDS} = \frac{p}{1-p} $$ and the inverse logit formula states $$ P=\frac{OR}{1+OR}=\frac{1.012}{2.012}= 0.502$$ Which I am tempted to interpret as: if the covariate increases by one unit, the probability of Y=1 increases by 50% - which I assume is wrong, but I do not understand why. How can logit coefficients be interpreted in terms of probabilities?
The odds ratio is the exponential of the corresponding regression coefficient: $$\text{odds ratio} = e^{\hat\beta}$$ For example, if the logistic regression coefficient is $\hat\beta=0.25$, the odds ratio is $e^{0.25} = 1.28$. The odds ratio is the multiplier that shows how the odds change for a one-unit increase in the value of X: the odds increase by a factor of 1.28. So if the initial odds were, say, 0.25, the odds after a one-unit increase in the covariate become $0.25 \times 1.28 = 0.32$. Another way to interpret the odds ratio is to look at the fractional part and interpret it as a percentage change. For example, an odds ratio of 1.28 corresponds to a 28% increase in the odds for a 1-unit increase in the corresponding X. In case we are dealing with a decreasing effect (OR < 1), for example an odds ratio of 0.94, there is a 6% decrease in the odds for a 1-unit increase in the corresponding X. The formula is: $$ \text{Percent Change in the Odds} = \left( \text{Odds Ratio} - 1 \right) \times 100 $$
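A minimal sketch of these calculations (plain NumPy; the coefficient and odds values are just examples):

import numpy as np

beta = 0.25                       # example logistic regression coefficient
odds_ratio = np.exp(beta)         # ~1.28
print((odds_ratio - 1) * 100)     # ~28% increase in the odds per one-unit increase in X

odds_before = 0.25                # example starting odds
print(odds_before * odds_ratio)   # odds after a one-unit increase, ~0.32

print((0.94 - 1) * 100)           # an OR of 0.94 is a ~6% decrease in the odds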
{ "source": [ "https://stats.stackexchange.com/questions/363791", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/182590/" ] }
365,604
In hypothesis testing, the alternative hypothesis doesn't have to be the opposite of the null hypothesis. For example, for $H_0: \mu=0$, $H_a$ is allowed to be $\mu>1$, or $\mu=1$. My question: why is this allowed? What if, in reality, $\mu=-1$ or $\mu=2$, in which case if one applies, say, a likelihood ratio test, one may (wrongly) conclude that $H_0$ is accepted, or $H_0$ is rejected and hence $H_a$ is accepted? What about this proposal: $H_a$ should always be the opposite of $H_0$? That is, $H_a: H_0$ is not true. This way, we are effectively testing only a single hypothesis $H_0$, rejecting it if the p-value is below a predefined significance level, and not having to test two hypotheses at the same time that can both be wrong.
What you've identified is one of the fundamental flaws with this approach to hypothesis testing: namely, that the statistical tests you are doing do not assess the validity of the statement you are actually interested in assessing the truth of. In this form of hypothesis testing, $H_a$ is never accepted; you can only ever reject $H_0$. This is widely misunderstood and misrepresented by users of statistical testing.
{ "source": [ "https://stats.stackexchange.com/questions/365604", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/148309/" ] }
365,778
I'm training a neural network and the training loss decreases, but the validation loss doesn't, or it decreases much less than what I would expect, based on references or experiments with very similar architectures and data. How can I fix this? As with the question What should I do when my neural network doesn't learn?, which inspired this one, the question is intentionally left general so that other questions about how to reduce the generalization error of a neural network down to a level which has been proved to be attainable can be closed as duplicates of this one. See also the dedicated thread on Meta: Is there a generic question to which we can redirect questions of the type "why does my neural network not generalize well?"
First of all, let's mention what "my neural network doesn't generalize well" means and how it differs from saying "my neural network doesn't perform well". When training a Neural Network, you are constantly evaluating it on a set of labelled data called the training set. If your model isn't working properly and doesn't appear to learn from the training set, you don't have a generalization issue yet; instead, please refer to this post. However, if your model is achieving a satisfactory performance on the training set, but cannot perform well on previously unseen data (e.g. validation/test sets), then you do have a generalization problem.
Why is your model not generalizing properly? The most important part is understanding why your network doesn't generalize well. High-capacity Machine Learning models have the ability to memorize the training set, which can lead to overfitting. Overfitting is the state where an estimator has begun to learn the training set so well that it has started to model the noise in the training samples (besides all useful relationships). For example, in the image below we can see how the blue line on the right has clearly overfit. But why is this bad? When attempting to evaluate our model on new, previously unseen data (i.e. the validation/test set), the model's performance will be much worse than what we expect.
How to prevent overfitting? At the beginning of the post I implied that the complexity of your model is what is actually causing the overfitting, as it allows the model to extract unnecessary relationships from the training set that map its inherent noise. The easiest way to reduce overfitting is to essentially limit the capacity of your model. These techniques are called regularization techniques.
Parameter norm penalties. These add an extra term to the weight update function of each model that depends on the norm of the parameters. This term's purpose is to counter the actual update (i.e. limit how much each weight can be updated). This makes the models more robust to outliers and noise. Examples of such regularizations are the L1 and L2 regularizations, which can be found in the Lasso, Ridge and Elastic Net regressors. Since each (fully connected) layer in a neural network functions much like a simple linear regression, these are used in Neural Networks. The most common use is to regularize each layer individually. keras implementation.
Early stopping. This technique attempts to stop an estimator's training phase prematurely, at the point where it has learned to extract all meaningful relationships from the data, before beginning to model its noise. This is done by monitoring the validation loss (or a validation metric of your choosing) and terminating the training phase when this metric stops improving. This way we give the estimator enough time to learn the useful information but not enough to learn from the noise. keras implementation.
Neural Network specific regularizations. Some examples are:
Dropout. Dropout is an interesting technique that works surprisingly well. Dropout is applied between two successive layers in a network. At each iteration a specified percentage of the connections (selected randomly) between the two layers are dropped. This forces the subsequent layer to rely on all of its connections to the previous layer. keras implementation
Transfer learning. This is especially used in Deep Learning.
This is done by initializing the weights of your network to those of another network with the same architecture pre-trained on a large, generic dataset.
Other things that may limit overfitting in Deep Neural Networks are: Batch Normalization, which can act as a regularizer and in some cases (e.g. inception modules) works as well as dropout; relatively small batch sizes in SGD, which can also prevent overfitting; and adding small random noise to weights in hidden layers.
Another way of preventing overfitting, besides limiting the model's capacity, is by improving the quality of your data. The most obvious choice would be outlier/noise removal; however, in practice its usefulness is limited. A more common way (especially in image-related tasks) is data augmentation. Here we attempt to randomly transform the training examples so that while they appear to the model to be different, they convey the same semantic information (e.g. left-right flipping on images). Data augmentation overview
Practical suggestions:
By far the most effective regularization technique is dropout, meaning that it should be the first one you use. However, you don't need to (and probably shouldn't) place dropout everywhere! The layers most prone to overfitting are the Fully Connected (FC) layers, because they contain the most parameters. Dropout should be applied to these layers (impacting their connections to the next layer).
Batch normalization, besides having a regularization effect, aids your model in several other ways (e.g. it speeds up convergence and allows for the use of higher learning rates). It too should be used in FC layers.
As mentioned previously, it also may be beneficial to stop your model earlier in the training phase than scheduled. The problem with early stopping is that there is no guarantee that, at any given point, the model won't start improving again. A more practical approach than early stopping is storing the weights of the model that achieve the best performance on the validation set. Be cautious, however, as this is not an unbiased estimate of the performance of your model (just better than the training set). You can also overfit on the validation set. More on that later. keras implementation
In some applications (e.g. image related tasks), it is highly recommended to follow an already established architecture (e.g. VGG, ResNet, Inception) that you can find ImageNet weights for. The generic nature of this dataset allows the features to be in turn generic enough to be used for any image related task. Besides making the model more robust to overfitting, this will greatly reduce the training time. Another use of the same concept is the following: if your task doesn't have much data, but you can find another similar task that does, you can use transfer learning to reduce overfitting. First train your network on the task that has the larger dataset and then attempt to fine-tune the model to the one you initially wanted. The initial training will, in most cases, make your model more robust to overfitting.
Data augmentation. While it always helps to have a larger dataset, data augmentation techniques do have their shortcomings. More specifically, you have to be careful not to augment too strongly, as this might ruin the semantic content of the data. For example, in image augmentation, if you translate/shift/scale or adjust the brightness/contrast of the image too much you'll lose much of the information it contains. Furthermore, augmentation schemes need to be implemented for each task in an ad-hoc fashion (e.g.
in handwritten digit recognition the digits are usually aligned and shouldn't be rotated too much; they also shouldn't be flipped in any direction, as they aren't horizontally/vertically symmetric. The same goes for medical images). In short, be careful not to produce unrealistic images through data augmentation. Moreover, an increased dataset size will require a longer training time. Personally, I start considering using data augmentation when I see that my model is reaching near $0$ loss on the training set.
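To make several of the practical suggestions above concrete (an L2 penalty, dropout and batch normalization on a fully connected layer, early stopping with best-weight restoration, and checkpointing), here is a minimal Keras sketch; the input dimension, layer sizes, file path and hyperparameter values are arbitrary placeholders, not recommendations:

from tensorflow import keras
from tensorflow.keras import layers, regularizers

model = keras.Sequential([
    keras.Input(shape=(100,)),                                # placeholder input dimension
    layers.Dense(256, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4)),   # L2 parameter norm penalty
    layers.BatchNormalization(),
    layers.Dropout(0.5),                                      # dropout on the FC layer
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

callbacks = [
    # stop training when the validation loss stops improving, keeping the best weights
    keras.callbacks.EarlyStopping(monitor="val_loss", patience=10,
                                  restore_best_weights=True),
    # additionally store the best-performing model on disk (placeholder path)
    keras.callbacks.ModelCheckpoint("best_model.keras", monitor="val_loss",
                                    save_best_only=True),
]
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           epochs=200, callbacks=callbacks)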
{ "source": [ "https://stats.stackexchange.com/questions/365778", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/58675/" ] }
366,272
I have a 64-character SHA256 hash. I'm hoping to train a model that can predict whether the plaintext used to generate the hash begins with a 1 or not. Regardless of whether this is possible, what algorithm would be the best approach? My initial thoughts: Generate a large sample of hashes of plaintexts that begin with a 1 and a large sample of hashes of plaintexts that do not begin with a 1. Set each of the 64 characters of a hash as a parameter for some sort of unsupervised logistic regression model. Train the model by telling it when it is right/wrong. Hopefully be able to create a model that can predict whether the plaintext begins with a 1 or not with a high enough accuracy (and with a decent kappa).
This isn't really a stats answer, but: No, you can't determine the first character of the plaintext from the hash, because there's no such thing as "the plaintext" for a given hash. SHA-256 is a hashing algorithm. No matter what your plaintext, you get out a 32-byte signature, often expressed as a 64-character hex string. There are far more possible plaintexts than there are possible 64-character hex strings - the same hash can be generated from any number of different plaintexts. There's no reason to believe that the first character being/not being a '1' is uniform across all plaintexts producing a given hash.
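For what it's worth, a tiny illustration of why there is no learnable signal (a Python sketch using the standard hashlib module; the example strings are arbitrary):

import hashlib

def sha256_hex(text: str) -> str:
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

# Plaintexts differing only in whether they start with '1' yield digests that
# are unrelated in any visible way, and the unbounded set of possible
# plaintexts - some starting with '1', some not - all map into the same
# finite space of 2^256 digests.
print(sha256_hex("1 hello world"))
print(sha256_hex("x hello world"))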
{ "source": [ "https://stats.stackexchange.com/questions/366272", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/220172/" ] }
367,254
I've been in a debate with my graduate-level statistics professor about "normal distributions". I contend that to truly get a normal distribution one must have mean=median=mode, all the data must be contained under the bell curve, and the distribution must be perfectly symmetrical around the mean. Therefore, technically, there are virtually NO normal distributions in real studies, and we should call them something else, perhaps "near-normal". She says I'm too picky, and if the skew/kurtosis are less than 1.0 it is a normal distribution, and she took off points on an exam. The dataset is the total number of falls/year in a random sampling of 52 nursing homes, which is a random sample of a larger population. Any insight?
Problem: QUESTION: 3. Compute measures of skewness and kurtosis for this data. Include a histogram with a normal curve. Discuss your findings. Is the data normally distributed?
Statistics: Number of falls
N: Valid 52, Missing 0
Mean: 11.23
Median: 11.50
Mode: 4 (a. Multiple modes exist. The smallest value is shown)
Skewness: .114 (Std. Error of Skewness: .330)
Kurtosis: -.961 (Std. Error of Kurtosis: .650)
My answer: The data is platykurtic and has only slight positive skewing, and it is NOT a normal distribution because the mean and median and mode are not equal and the data is not evenly distributed around the mean. In reality virtually no data is ever a perfect normal distribution, although we can discuss "approximately normal distributions" such as height, weight, temperature, or length of adult ring finger in large population groups.
Professor's answer: You are correct that there is no perfectly normal distribution. But, we are not looking for perfection. We need to look at data in addition to the histogram and the measures of central tendency. What do the skewness and kurtosis statistics tell you about the distribution? Because they are both between the critical values of -1 and +1, this data is considered to be normally distributed.
A problem with your discussion with the professor is one of terminology; there's a misunderstanding that is getting in the way of conveying a potentially useful idea. In different places, you both make errors. So the first thing to address: it's important to be pretty clear about what a distribution is. A normal distribution is a specific mathematical object, which you could consider as a model for a process (which you might consider an uncountably infinite population of values; no finite population can actually have a continuous distribution). Loosely, what this distribution does (once you specify the parameters) is define (via an algebraic expression) the proportion of the population values that lies within any given interval on the real line. Slightly less loosely, it defines the probability that a single value from that population will lie in any given interval. An observed sample doesn't really have a normal distribution; a sample might (potentially) be drawn from a normal distribution, if one were to exist. If you look at the empirical cdf of the sample, it's discrete. If you bin it (as in a histogram) the sample has a "frequency distribution", but those aren't normal distributions. The distribution can tell us some things (in a probabilistic sense) about a random sample from the population, and a sample may also tell us some things about the population. A reasonable interpretation of a phrase like "normally distributed sample"* is "a random sample from a normally distributed population". *(I generally try to avoid saying it myself, for reasons that are hopefully made clear enough here; usually I manage to confine myself to the second kind of expression.) Having defined terms (if still a little loosely), let us now look at the question in detail. I'll be addressing specific pieces of the question.
"normal distribution one must have mean=median=mode"
This is certainly a condition on the normal probability distribution, though not a requirement on a sample drawn from a normal distribution; samples may be asymmetric, may have mean differ from median and so on. [We can, however, get an idea how far apart we might reasonably expect them to be if the sample really came from a normal population.]
"all the data must be contained under the bell curve"
I am not sure what "contained under" means in this sense.
"and perfectly symmetrical around the mean."
No; you're talking about the data here, and a sample from a (definitely symmetrical) normal population would not itself be perfectly symmetric. Here are some simulated samples from normal distributions: If you generate a number of samples of about that sample size (60) and plot histograms with about 10 bins, you may see similar variation in general shape. As you can see from the histograms, these are not actually symmetric. Some, like 2, 4 and 7, are quite distinctly asymmetrical. Some have quite short tails, like 5 and 8, some have noticeably longer tails, at least on one side. Some suggest multiple modes. None actually look all that close to what an actual normal density looks like $-$ that is, even random samples don't necessarily look all that much like their populations, at least not until the sample sizes are fairly large $-$ considerably larger than the n=60 I used here.
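If you want to reproduce the spirit of that little experiment yourself, here is a minimal sketch (in Python/Matplotlib rather than whatever produced the figures above; the seed, sample size and number of panels are arbitrary choices of mine):

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
fig, axes = plt.subplots(3, 3, figsize=(9, 9))
for ax in axes.flat:
    sample = rng.normal(size=60)   # n = 60 draws from an exactly normal population
    ax.hist(sample, bins=10)
plt.show()
# Even though every sample comes from a perfectly normal population, the
# histograms are asymmetric, vary in tail length, and sometimes hint at
# multiple modes.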
"Therefore, technically, there are virtually NO normal distributions in real studies,"
I agree with your conclusion but the reasoning is not correct; it's not a consequence of the fact that data are not perfectly symmetric (etc); it's the fact that populations are themselves not perfectly normal.
"if the skew/kurtosis are less than 1.0 it is a normal distribution"
If she said this in just that way, she's definitely wrong. A sample skewness may be much closer to 0 than that (taking "less than" to mean in absolute magnitude not actual value), and the sample excess kurtosis may also be much closer to 0 than that (they might even, whether by chance or construction, potentially be almost exactly zero), and yet the distribution from which the sample was drawn might be distinctly non-normal (e.g. bimodal, or clearly asymmetric, or perhaps with somewhat heavier tails than the normal $-$ it's not just the tail that determines kurtosis). We can go further -- even if we were to magically know the population skewness and kurtosis were exactly that of a normal, it still wouldn't of itself tell us the population was normal, nor even something close to normal. Here's an example: This particular example is strongly bimodal, heavier tailed than the normal, but symmetric. It has the same skewness and kurtosis as the normal. Further examples can be found in this answer. Not all are symmetric, and some are discrete.
"The dataset is total number of falls/year in a random sampling of 52 nursing homes which is a random sample of a larger population."
The population distribution of counts is never normal. Counts are discrete and non-negative; normal distributions are continuous and over the entire real line. But we're really focused on the wrong issue here. Probability models are just that, models. Let us not confuse our models with the real thing. The issue isn't "are the data themselves normal?" (they can't be), nor even "is the population from which the data were drawn normal?" (this is almost never going to be the case). A more useful question to discuss is "how badly would my inference be impacted if I treated the population as normally distributed?" That is, we should not be overly focused on whether the assumption is true (we shouldn't expect that), but whether it's useful, or perhaps what and how severe the consequences might be if we were to use such a model. It's also a much harder question to answer well, and may require considerably more work than glancing at a few simple diagnostics. The sample statistics you showed are not particularly inconsistent with normality (you could see statistics like that or "worse" not terribly rarely if you had random samples of that size from normal populations), but that doesn't of itself mean that the actual population from which the sample was drawn is automatically "close enough" to normal for some particular purpose. It would be important to consider the purpose (what questions you're answering), and the robustness of the methods employed for it, and even then we may still not be sure that it's "good enough"; sometimes it may be better to simply not assume what we don't have good reason to assume a priori (e.g. on the basis of experience with similar data sets).
"it is NOT a normal distribution"
Data - even data drawn from a normal population - never have exactly the properties of the population; from those numbers alone you don't have a good basis to conclude that the population is not normal here.
On the other hand neither do we have any reasonably solid basis to say that it's "sufficiently close" to normal - we haven't even considered the purpose of assuming normality, so we don't know what distributional features it might be sensitive to. For example, if I had two samples for a measurement that was bounded, that I knew would not be heavily discrete (not mostly only taking a few distinct values) and reasonably near to symmetric, I might be relatively happy to use a two-sample t-test at some not-so-small sample size; it's moderately robust to mild deviations from the assumptions (somewhat level-robust, somewhat less power-robust). But I would be considerably more cautious about as casually assuming normality when testing equality of spread, for example, because the best test under that assumption is quite sensitive to the assumption.
"Because they are both between the critical values of -1 and +1, this data is considered to be normally distributed."
If that's really the criterion by which one decides to use a normal distributional model, then it will sometimes lead you into quite poor analyses. The values of those statistics do give us some clues about the population from which the sample was drawn, but that's not at all the same thing as suggesting that their values are in any way a 'safe guide' to choosing an analysis. Cast your mind back to the fact that there are distributional examples where the population has a very different shape from the normal, but with the same population skewness and kurtosis. Add to that the inherent noise in their sample equivalents (and not least of all, the considerable downbias typical of sample kurtosis), and you may well be concluding rather too much on very limited and possibly misleading evidence. Now to address the underlying issue with even a better phrased version of such a question as the one you had: The whole process of looking at a sample to choose a model is fraught with problems -- doing so alters the properties of any subsequent choices of analysis based on what you saw! E.g. for a hypothesis test, your significance levels, p-values and power are all not what you would choose/calculate them to be, because those calculations are predicated on the analysis not being based on the data. See, for example, Gelman and Loken (2014), "The Statistical Crisis in Science," American Scientist, Volume 102, Number 6, p. 460 (DOI: 10.1511/2014.111.460), which discusses issues with such data-dependent analysis.
{ "source": [ "https://stats.stackexchange.com/questions/367254", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/220807/" ] }
367,467
In my work, when individuals refer to the "mean" value of a data set, they're typically referring to the arithmetic mean (i.e. "average", or "expected value"). If I provided the geometric mean, people would likely think I'm being snide or non-helpful, as the definition of "mean" is known in advance. I'm trying to determine if there are multiple definitions of the "median" of a data set. For example, one of the definitions provided by a colleague for finding the median of a data set with an even number of elements would be:
Algorithm 'A'
Divide the number of elements by two, round down. That value is the index of the median. i.e. For the following set, the median would be 5. [4, 5, 6, 7] This seems to make sense, though the rounding-down aspect seems a bit arbitrary.
Algorithm 'B'
In any case, another colleague has proposed a separate algorithm, which was in a stats textbook of his (need to get the name and author): Divide the number of elements by 2, and keep a copy of the rounded-up and rounded-down integers. Name them n_lo and n_hi. Take the arithmetic mean of the elements at n_lo and n_hi. i.e. For the following set, the median would be (5+6)/2 = 5.5. [4, 5, 6, 7] This seems wrong though, as the median value, 5.5 in this case, isn't actually in the original data set. When we swapped out algorithm 'A' for 'B' in some test code, it broke horribly (as we expected).
Question
Is there a formal "name" for these two approaches to calculating the median of a data set? i.e. "lesser-of-the-two median" versus "average-the-middle-elements-and-make-new-data median"?
TL;DR - I'm not aware of specific names being given to different estimators of sample medians. Methods to estimate sample statistics from some data are rather fussy and different resources give different definitions. In Hogg, McKean and Craig's Introduction to Mathematical Statistics , the authors provide a definition of medians of random samples , but only in the case that there are an odd number of samples! The authors write Certain functions of the order statistics are important statistics themselves... if $n$ is odd, $Y_{(n+1)/2}$ ... is called the median of the random sample. The authors provide no guidance on what to do if you have an even number of samples. (Note that $Y_i$ is the $i$ th smallest datum.) But this seems unnecessarily restrictive; I would prefer to be able to define a median of a random sample for even or odd $n$ . Moreover, I would like the median to be unique. Given these two requirements, I have to make some decisions about how to best find a unique sample median. Both Algorithm A and Algorithm B satisfy these requirements. Imposing additional requirements could eliminate either or both from consideration. Algorithm B has the property that half the data fall above the value, and half the data fall below the value. In light of the definition of the median of a random variable , this seems nice. Whether or not a particular estimator breaks unit tests is a property of the unit tests -- unit tests written against a specific estimator won't necessarily hold when you substitute another estimator. In the ideal case, the unit tests were chosen because they reflect the critical needs of your organization, not because of a doctrinaire argument over definitions.
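For what it's worth, Python's standard library implements both conventions, which at least supplies semi-official names: statistics.median_low and statistics.median_high always return a member of the data set (often called the "low median" and "high median"), while statistics.median interpolates between the two middle elements for even n. A minimal sketch:

import statistics

data = [4, 5, 6, 7]
print(statistics.median_low(data))   # 5   - an actual element (matches Algorithm 'A' as described)
print(statistics.median_high(data))  # 6   - the other element-picking convention
print(statistics.median(data))       # 5.5 - mean of the two middle elements (Algorithm 'B')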
{ "source": [ "https://stats.stackexchange.com/questions/367467", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/220977/" ] }
367,590
What is the point of time series analysis? There are plenty of other statistical methods, such as regression and machine learning, that have obvious use cases: regression can provide information on the relationship between two variables, while machine learning is great for prediction. But meanwhile, I don't see what time series analysis is good for. Sure, I can fit an ARIMA model and use it for prediction, but what good is that when the confidence intervals for that prediction are going to be huge? There's a reason nobody can predict the stock market despite it being the most data-driven industry in world history. Likewise, how do I use it to understand my process further? Sure, I can plot the ACF and go "aha! there's some dependence!", but then what? What's the point? Of course there's dependence, that's why you are doing time series analysis to begin with. You already knew there was dependence . But what are you going to use it for?
One main use is forecasting. I have been feeding my family for over a decade now by forecasting how many units of a specific product a supermarket will sell tomorrow, so it can order enough stock, but not too much. There is money in this. Other forecasting use cases are given in publications like the International Journal of Forecasting or Foresight. (Full disclosure: I'm an Associate Editor of Foresight.) Yes, sometimes the prediction intervals are huge. (I assume you mean PIs, not confidence intervals. There is a difference.) This simply means that the process is hard to forecast. Then you need to mitigate. In forecasting supermarket sales, this means you need a lot of safety stock. In forecasting sea level rises, this means you need to build higher levees. I would say that a large prediction interval does provide useful information. And for all forecasting use cases, time series analysis is useful, though forecasting is a larger topic. You can often improve forecasts by taking the dependencies in your time series into account, so you need to understand them through analysis, which is more specific than just knowing dependencies are there. Plus, people are interested in time series even if they do not forecast. Econometricians like to detect change points in macroeconomic time series. Or assess the impact of an intervention, such as a change in tax laws, on GDP or something else. You may want to skim through your favorite econometrics journal for more inspiration.
{ "source": [ "https://stats.stackexchange.com/questions/367590", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/221058/" ] }
368,002
Penalized regression estimators such as LASSO and ridge are said to correspond to Bayesian estimators with certain priors. I guess (as I do not know enough about Bayesian statistics) that for a fixed tuning parameter, there exists a concrete corresponding prior. Now a frequentist would optimize the tuning parameter by cross validation. Is there a Bayesian equivalent of doing so, and is it used at all? Or does the Bayesian approach effectively fix the tuning parameter before seeing the data? (I guess the latter would be detrimental to predictive performance.)
Penalized regression estimators such as LASSO and ridge are said to correspond to Bayesian estimators with certain priors. Yes, that is correct. Whenever we have an optimisation problem involving maximisation of the log-likelihood function plus a penalty function on the parameters, this is mathematically equivalent to posterior maximisation where the penalty function is taken to be the logarithm of a prior kernel. $^\dagger$ To see this, suppose we have a penalty function $w$ using a tuning parameter $\lambda$ . The objective function in these cases can be written as: $$\begin{equation} \begin{aligned} H_\mathbf{x}(\theta|\lambda) &= \ell_\mathbf{x}(\theta) - w(\theta|\lambda) \\[6pt] &= \ln \Big( L_\mathbf{x}(\theta) \cdot \exp ( -w(\theta|\lambda)) \Big) \\[6pt] &= \ln \Bigg( \frac{L_\mathbf{x}(\theta) \pi (\theta|\lambda)}{\int L_\mathbf{x}(\theta) \pi (\theta|\lambda) d\theta} \Bigg) + \text{const} \\[6pt] &= \ln \pi(\theta|\mathbf{x}, \lambda) + \text{const}, \\[6pt] \end{aligned} \end{equation}$$ where we use the prior $\pi(\theta|\lambda) \propto \exp ( -w(\theta|\lambda))$ . Observe here that the tuning parameter in the optimisation is treated as a fixed hyperparameter in the prior distribution. If you are undertaking classical optimisation with a fixed tuning parameter, this is equivalent to undertaking a Bayesian optimisation with a fixed hyper-parameter. For LASSO and Ridge regression the penalty functions and corresponding prior-equivalents are: $$\begin{equation} \begin{aligned} \text{LASSO Regression} & & \pi(\theta|\lambda) &= \prod_{k=1}^m \text{Laplace} \Big( 0, \frac{1}{\lambda} \Big) = \prod_{k=1}^m \frac{\lambda}{2} \cdot \exp ( -\lambda |\theta_k| ), \\[6pt] \text{Ridge Regression} & & \pi(\theta|\lambda) &= \prod_{k=1}^m \text{Normal} \Big( 0, \frac{1}{2\lambda} \Big) = \prod_{k=1}^m \sqrt{\lambda/\pi} \cdot \exp ( -\lambda \theta_k^2 ). \\[6pt] \end{aligned} \end{equation}$$ The former method penalises the regression coefficients according to their absolute magnitude, which is the equivalent of imposing a Laplace prior located at zero. The latter method penalises the regression coefficients according to their squared magnitude, which is the equivalent of imposing a normal prior located at zero. Now a frequentist would optimize the tuning parameter by cross validation. Is there a Bayesian equivalent of doing so, and is it used at all? So long as the frequentist method can be posed as an optimisation problem (rather than say, including a hypothesis test, or something like this) there will be a Bayesian analogy using an equivalent prior. Just as the frequentists may treat the tuning parameter $\lambda$ as unknown and estimate this from the data, the Bayesian may similarly treat the hyperparameter $\lambda$ as unknown. In a full Bayesian analysis this would involve giving the hyperparameter its own prior and finding the posterior maximum under this prior, which would be analogous to maximising the following objective function: $$\begin{equation} \begin{aligned} H_\mathbf{x}(\theta, \lambda) &= \ell_\mathbf{x}(\theta) - w(\theta|\lambda) - h(\lambda) \\[6pt] &= \ln \Big( L_\mathbf{x}(\theta) \cdot \exp ( -w(\theta|\lambda)) \cdot \exp ( -h(\lambda)) \Big) \\[6pt] &= \ln \Bigg( \frac{L_\mathbf{x}(\theta) \pi (\theta|\lambda) \pi (\lambda)}{\int L_\mathbf{x}(\theta) \pi (\theta|\lambda) \pi (\lambda) d\theta} \Bigg) + \text{const} \\[6pt] &= \ln \pi(\theta, \lambda|\mathbf{x}) + \text{const}. 
\\[6pt] \end{aligned} \end{equation}$$ This method is indeed used in Bayesian analysis in cases where the analyst is not comfortable choosing a specific hyperparameter for their prior, and seeks to make the prior more diffuse by treating it as unknown and giving it a distribution. (Note that this is just an implicit way of giving a more diffuse prior to the parameter of interest $\theta$.) (Comment from statslearner2 below) I'm looking for numerical equivalent MAP estimates. For instance, for a fixed penalty Ridge there is a gaussian prior that will give me the MAP estimate exactly equal the ridge estimate. Now, for k-fold CV ridge, what is the hyper-prior that would give me the MAP estimate which is similar to the CV-ridge estimate? Before proceeding to look at $K$ -fold cross-validation, it is first worth noting that, mathematically, the maximum a posteriori (MAP) method is simply an optimisation of a function of the parameter $\theta$ and the data $\mathbf{x}$. If you are willing to allow improper priors then the scope encapsulates any optimisation problem involving a function of these variables. Thus, any frequentist method that can be framed as a single optimisation problem of this kind has a MAP analogy, and any frequentist method that cannot be framed as a single optimisation of this kind does not have a MAP analogy. In the above form of model, involving a penalty function with a tuning parameter, $K$ -fold cross-validation is commonly used to estimate the tuning parameter $\lambda$. For this method you partition the data vector $\mathbf{x}$ into $K$ sub-vectors $\mathbf{x}_1,...,\mathbf{x}_K$. For each sub-vector $k=1,...,K$ you fit the model with the "training" data $\mathbf{x}_{-k}$ and then measure the fit of the model with the "testing" data $\mathbf{x}_k$. In each fit you get an estimator for the model parameters, which then gives you predictions of the testing data, which can then be compared to the actual testing data to give a measure of "loss": $$\begin{matrix} \text{Estimator} & & \hat{\theta}(\mathbf{x}_{-k}, \lambda), \\[6pt] \text{Predictions} & & \hat{\mathbf{x}}_k(\mathbf{x}_{-k}, \lambda), \\[6pt] \text{Testing loss} & & \mathscr{L}_k(\hat{\mathbf{x}}_k, \mathbf{x}_k| \mathbf{x}_{-k}, \lambda). \\[6pt] \end{matrix}$$ The loss measures for each of the $K$ "folds" can then be aggregated to get an overall loss measure for the cross-validation: $$\mathscr{L}(\mathbf{x}, \lambda) = \sum_k \mathscr{L}_k(\hat{\mathbf{x}}_k, \mathbf{x}_k| \mathbf{x}_{-k}, \lambda)$$ One then estimates the tuning parameter by minimising the overall loss measure: $$\hat{\lambda} \equiv \hat{\lambda}(\mathbf{x}) \equiv \underset{\lambda}{\text{arg min }} \mathscr{L}(\mathbf{x}, \lambda).$$ We can see that this is an optimisation problem, and so we now have two separate optimisation problems (i.e., the one described in the sections above for $\theta$, and the one described here for $\lambda$). Since the latter optimisation does not involve $\theta$, we can combine these optimisations into a single problem, with some technicalities that I discuss below. To do this, consider the optimisation problem with objective function: $$\begin{equation} \begin{aligned} \mathcal{H}_\mathbf{x}(\theta, \lambda) &= \ell_\mathbf{x}(\theta) - w(\theta|\lambda) - \delta \mathscr{L}(\mathbf{x}, \lambda), \\[6pt] \end{aligned} \end{equation}$$ where $\delta > 0$ is a weighting value on the tuning-loss.
As $\delta \rightarrow \infty$ the weight on optimisation of the tuning-loss becomes infinite and so the optimisation problem yields the estimated tuning parameter from $K$ -fold cross-validation (in the limit). The remaining part of the objective function is the standard objective function conditional on this estimated value of the tuning parameter. Now, unfortunately, taking $\delta = \infty$ screws up the optimisation problem, but if we take $\delta$ to be a very large (but still finite) value, we can approximate the combination of the two optimisation problems up to arbitrary accuracy. From the above analysis we can see that it is possible to form a MAP analogy to the model-fitting and $K$ -fold cross-validation process. This is not an exact analogy, but it is a close analogy, up to arbitrary accuracy. It is also important to note that the MAP analogy no longer shares the same likelihood function as the original problem, since the loss function depends on the data and is thus absorbed as part of the likelihood rather than the prior. In fact, the full analogy is as follows: $$\begin{equation} \begin{aligned} \mathcal{H}_\mathbf{x}(\theta, \lambda) &= \ell_\mathbf{x}(\theta) - w(\theta|\lambda) - \delta \mathscr{L}(\mathbf{x}, \lambda) \\[6pt] &= \ln \Bigg( \frac{L_\mathbf{x}^*(\theta, \lambda) \pi (\theta, \lambda)}{\int L_\mathbf{x}^*(\theta, \lambda) \pi (\theta, \lambda) d\theta} \Bigg) + \text{const}, \\[6pt] \end{aligned} \end{equation}$$ where $L_\mathbf{x}^*(\theta, \lambda) \propto \exp( \ell_\mathbf{x}(\theta) - \delta \mathscr{L}(\mathbf{x}, \lambda))$ and $\pi (\theta, \lambda) \propto \exp( -w(\theta|\lambda))$ , with a fixed (and very large) hyper-parameter $\delta$ . ( Note: For a related question looking at logistic ridge regression framed in Bayesian terms see here .) $^\dagger$ This gives an improper prior in cases where the penalty does not correspond to the logarithm of a sigma-finite density.
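As a small numerical illustration of the fixed-tuning-parameter equivalence discussed at the start of this answer (not the cross-validated version), here is a NumPy/SciPy sketch with made-up data; it checks that the closed-form ridge estimate coincides with the mode of the posterior formed from a unit-variance Gaussian likelihood and independent $\text{N}(0, 1/\lambda)$ priors on the coefficients (the factor-of-two conventions differ slightly from the parametrisation used above):

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, p, lam = 200, 5, 2.0                    # made-up sample size, dimension, tuning parameter
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + rng.normal(size=n)

# Ridge / penalised least squares estimate in closed form: (X'X + lam*I)^{-1} X'y
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Negative log-posterior (constants dropped): Gaussian likelihood with unit error
# variance plus an independent N(0, 1/lam) prior on each coefficient
def neg_log_posterior(beta):
    return 0.5 * np.sum((y - X @ beta) ** 2) + 0.5 * lam * np.sum(beta ** 2)

beta_map = minimize(neg_log_posterior, np.zeros(p)).x
print(np.allclose(beta_ridge, beta_map, atol=1e-4))  # True: ridge estimate == MAP estimate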
{ "source": [ "https://stats.stackexchange.com/questions/368002", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/53690/" ] }