Dataset columns: idx (int64, 1 to 56k); question (string, 15-155 chars); answer (string, 2-29.2k chars); question_cut (string, 15-100 chars); answer_cut (string, 2-200 chars); conversation (string, 47-29.3k chars); conversation_cut (string, 47-301 chars). The rows below list idx, question and answer; the _cut and conversation columns are truncations and concatenations of the question and answer fields.
2,301
What is the meaning of "All models are wrong, but some are useful"
I have just rephrased the above answer taking process models as the focal point. The statement can be interpreted as follows. "All models are wrong": every model is wrong because it is a simplification of reality. Some models are only a little wrong; they ignore some things, for example changing requirements, whether the project will be completed within the deadline, or the customer's desired level of quality. Other models are a lot wrong: they ignore bigger things. Classical software process models ignore a lot compared to agile process models, which ignore less. "But some are useful": simplifications of reality can be quite useful. They can help us explain, predict and understand the overall project and all its various components. Models are used because their features correspond to most software development programs.
2,302
What is the meaning of "All models are wrong, but some are useful"
I would like to give another interpretation of the term "useful", probably not the one Box thought about. When you have to make decisions, and this is what all information will finally be used for, then you have to measure your success in some form. When talking about decisions under uncertain information, this measure is often called utility. So we can also think of useful models as those that enable us to make more informed decisions, that is, to achieve our goals more effectively. This adds another dimension on top of the usual criteria, such as a model's ability to predict something correctly: it allows us to weigh the different aspects a model addresses against each other.
2,303
What is the meaning of "All models are wrong, but some are useful"
"All models are wrong, but some are useful". Perhaps it means: We should be doing the best we can with what we know + search for new learning?
2,304
What're the differences between PCA and autoencoder?
PCA is restricted to a linear map, while autoencoders can have nonlinear encoders/decoders. A single-layer autoencoder with a linear transfer function is nearly equivalent to PCA, where "nearly" means that the $W$ found by the AE and by PCA won't necessarily be the same, but the subspaces spanned by the respective $W$'s will be.
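As a quick illustrative check of this claim (not part of the original answer), the sketch below trains a single-layer linear autoencoder by plain gradient descent on synthetic data and compares the subspace spanned by its encoder weights with the PCA subspace. The data, dimensions and training loop are all assumptions made for the example.

```python
# Sketch: linear autoencoder vs PCA -- same subspace, different basis vectors.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 2000, 10, 3                       # samples, input dim, bottleneck dim

# Synthetic data with a dominant 3-dimensional structure plus a little noise.
Z = rng.normal(size=(n, k))
A = rng.normal(size=(k, d))
X = Z @ A + 0.05 * rng.normal(size=(n, d))
X -= X.mean(axis=0)                         # centre, as PCA assumes

# PCA via SVD: rows of Vt[:k] are the first k principal directions.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
V = Vt[:k].T                                # d x k orthonormal basis

# Linear autoencoder x_hat = x @ W_enc @ W_dec, trained by gradient descent on MSE.
W_enc = 0.1 * rng.normal(size=(d, k))
W_dec = 0.1 * rng.normal(size=(k, d))
lr = 1e-3
for _ in range(5000):
    H = X @ W_enc
    R = H @ W_dec - X                       # reconstruction error
    g_dec = H.T @ R / n
    g_enc = X.T @ (R @ W_dec.T) / n
    W_enc -= lr * g_enc
    W_dec -= lr * g_dec

# Compare the subspaces via their orthogonal projection matrices.
Q, _ = np.linalg.qr(W_enc)                  # orthonormal basis of span(W_enc)
P_pca, P_ae = V @ V.T, Q @ Q.T
print("projection matrix difference:", np.linalg.norm(P_pca - P_ae))  # should be close to 0
print(np.round(np.abs(V.T @ Q), 2))         # generally NOT the identity: different basis
```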
2,305
What're the differences between PCA and autoencoder?
As bayerj points out, PCA is a method that assumes a linear map, whereas autoencoders (AEs) do not. If no non-linear function is used in the AE and the number of neurons in the hidden layer is smaller than the dimension of the input, then PCA and the AE can yield the same result. Otherwise the AE may find a different subspace. One thing to note is that the hidden layer in an AE can be of greater dimensionality than that of the input. In such cases the AE may not be doing dimensionality reduction; in this case we perceive it as performing a transformation from one feature space to another wherein the data in the new feature space disentangles factors of variation. Regarding your question (in your response to bayerj) about whether multiple layers mean a very complex non-linear function: depending on what you mean by "very complex non-linear", this could be true. However, what depth really offers is better generalization. Many methods require a number of samples equal to the number of regions. However, it turns out that "a very large number of regions, e.g., $O(2^N)$, can be defined with $O(N)$ examples", according to Bengio et al. This is a result of the complexity in representation that arises from composing features from the lower layers of the network.
2,306
What're the differences between PCA and autoencoder?
The currently accepted answer by @bayerj states that the weights of a linear autoencoder span the same subspace as the principal components found by PCA, but they are not the same vectors; in particular, they are not an orthogonal basis. This is true; however, we can easily recover the principal component loading vectors from the autoencoder weights. A little bit of notation: let $\{\mathbf{x}_i \in \mathbb{R}^n \}_{i=1}^N $ be a set of $N$ $n$-dimensional vectors, for which we wish to compute the PCA, and let $X$ be the matrix whose columns are $\mathbf{x}_1,\dots,\mathbf{x}_N$. Then, let's define a linear autoencoder as the one-hidden-layer neural network defined by the following equations: $$ \begin{align} \mathbf{h}_1 & = \mathbf{W}_1\mathbf{x} + \mathbf{b}_1 \\ \hat{\mathbf{x}} & = \mathbf{W}_2\mathbf{h}_1 + \mathbf{b}_2 \end{align} $$ where $\hat{\mathbf{x}}$ is the output of the (linear) autoencoder, denoted with a hat in order to stress the fact that the output of an autoencoder is a "reconstruction" of the input. Note that, as is most common with autoencoders, the hidden layer has fewer units than the input layer, i.e., $W_1\in \mathbb{R}^{m \times n}$ and $W_2\in \mathbb{R}^{n \times m}$ with $m < n$. Now, after training your linear autoencoder, compute the first $m$ left singular vectors of $W_2$. It's possible to prove that these singular vectors are actually the first $m$ principal components of $X$; the proof is in Plaut, E., From Principal Subspaces to Principal Components with Linear Autoencoders, arXiv:1804.10253. Since SVD is actually the algorithm commonly used to compute PCA, it could seem meaningless to first train a linear autoencoder and then apply SVD to $W_2$ in order to recover the first $m$ loading vectors, rather than directly applying SVD to $X$. The point is that $X$ is an $n \times N$ matrix, while $W_2$ is $n \times m$. Now, the time complexity of SVD for $W_2$ is $O(m^2n)$, while for $X$ it is $O(n^2N)$ with $m < n$, thus some saving could be attained (even if not as big as claimed by the author of the paper I link). Of course, there are other more useful approaches to compute the PCA of big data (randomized online PCA comes to mind), but the main point of this equivalence between linear autoencoders and PCA is not to find a practical way to compute PCA for huge data sets: it's more about giving us an intuition on the connections between autoencoders and other statistical approaches to dimension reduction.
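The following sketch (not from the answer) illustrates the recipe on synthetic data: train a linear autoencoder by gradient descent, take the SVD of $W_2$, and compare its left singular vectors with the PCA loading vectors. A small weight decay is an assumption I add, because it makes the singular values of $W_2$ distinct and the alignment with the loading vectors numerically clean; the data and hyperparameters are illustrative.

```python
# Sketch: recover (approximate) PCA loading vectors from the decoder weights W2.
import numpy as np

rng = np.random.default_rng(0)
n, m, N = 8, 3, 1000                 # input dim, hidden dim, number of samples

# Data matrix X with columns x_i (the answer's convention), centred, with a
# clearly separated top-3 spectrum.
B = np.linalg.qr(rng.normal(size=(n, m)))[0]          # orthonormal directions
X = B @ (np.diag([3.0, 2.0, 1.0]) @ rng.normal(size=(m, N)))
X += 0.05 * rng.normal(size=(n, N))
X -= X.mean(axis=1, keepdims=True)

# PCA loading vectors: left singular vectors of the centred X.
V_pca = np.linalg.svd(X, full_matrices=False)[0][:, :m]

# Linear autoencoder h = W1 x, x_hat = W2 h (biases omitted since X is centred),
# trained by gradient descent on MSE plus a small weight decay (my assumption).
W1 = 0.1 * rng.normal(size=(m, n))
W2 = 0.1 * rng.normal(size=(n, m))
lr, wd = 0.02, 0.3
for _ in range(20000):
    H = W1 @ X
    R = W2 @ H - X
    g2 = R @ H.T / N + wd * W2
    g1 = W2.T @ R @ X.T / N + wd * W1
    W1 -= lr * g1
    W2 -= lr * g2

# Left singular vectors of W2: should align (up to sign) with the loading vectors.
U = np.linalg.svd(W2)[0][:, :m]
print(np.round(np.abs(U.T @ V_pca), 3))   # close to the identity matrix
```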
2,307
What're the differences between PCA and autoencoder?
The general answer is that auto-associative neural networks can perform non-linear dimensionality reduction. Training the network is generally not as fast as PCA, so the trade-off is computational resources vs. expressive power. However, there is a common confusion in the details. It is true that auto-associative networks with linear activation functions agree with PCA, regardless of the number of hidden layers. However, if there is only one hidden layer (input-hidden-output), the optimal auto-associative network still agrees with PCA, even with non-linear activation functions. For the original proof see the 1988 paper by Bourlard and Kamp. Chris Bishop's book has a nice summary of the situation in Ch. 12.4.2: "It might be thought that the limitations of a linear dimensionality reduction could be overcome by using nonlinear (sigmoidal) activation functions for the hidden units in the network in Figure 12.18. However, even with nonlinear hidden units, the minimum error solution is again given by the projection onto the principal component subspace (Bourlard and Kamp, 1988). There is therefore no advantage in using two-layer neural networks to perform dimensionality reduction."
2,308
Why is the L2 regularization equivalent to Gaussian prior?
Let us imagine that you want to infer some parameter $\beta$ from some observed input-output pairs $(x_1,y_1),\dots,(x_N,y_N)$. Let us assume that the outputs are linearly related to the inputs via $\beta$ and that the data are corrupted by some noise $\epsilon$: $$y_n = \beta x_n + \epsilon,$$ where $\epsilon$ is Gaussian noise with mean $0$ and variance $\sigma^2$. This gives rise to a Gaussian likelihood: $$\prod_{n=1}^N \mathcal{N}(y_n|\beta x_n,\sigma^2).$$ Let us regularise the parameter $\beta$ by imposing the Gaussian prior $\mathcal{N}(\beta|0,\lambda^{-1}),$ where $\lambda$ is a strictly positive scalar ($\lambda$ quantifies how strongly we believe that $\beta$ should be close to zero, i.e. it controls the strength of the regularisation). Hence, combining the likelihood and the prior we simply have: $$\prod_{n=1}^N \mathcal{N}(y_n|\beta x_n,\sigma^2) \mathcal{N}(\beta|0,\lambda^{-1}).$$ Let us take the logarithm of the above expression. Dropping some constants we get: $$-\frac{1}{2\sigma^2}\sum_{n=1}^N (y_n-\beta x_n)^2 - \frac{\lambda}{2} \beta^2 + \mbox{const}.$$ If we maximise the above expression with respect to $\beta$, we get the so-called maximum a posteriori estimate for $\beta$, or MAP estimate for short. In this expression it becomes apparent why the Gaussian prior can be interpreted as an L2 regularisation term. The relationship between the L1 norm and the Laplace prior can be understood in the same fashion. Instead of a Gaussian prior, multiply your likelihood with a Laplace prior and then take the logarithm. A good reference (perhaps slightly advanced) detailing both issues is the paper "Adaptive Sparseness for Supervised Learning", which currently does not seem easy to find online. Alternatively look at "Adaptive Sparseness using Jeffreys Prior". Another good reference is "On Bayesian classification with Laplace priors".
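As a sanity check (not in the original answer), the sketch below compares the MAP estimate obtained by numerically minimizing the negative log posterior with the closed-form minimizer of the L2-penalized least-squares objective, using the penalty weight $\sigma^2\lambda$ implied by the derivation; all data and parameter values are made up.

```python
# Sketch: MAP under a N(0, 1/lambda) prior == ridge with penalty sigma^2 * lambda.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
sigma, lam, beta_true = 0.5, 2.0, 1.3
x = rng.normal(size=200)
y = beta_true * x + sigma * rng.normal(size=200)

def neg_log_posterior(b):
    # -log[ prod_n N(y_n | b x_n, sigma^2) * N(b | 0, 1/lam) ], up to constants
    return np.sum((y - b * x) ** 2) / (2 * sigma**2) + lam * b**2 / 2

beta_map = minimize_scalar(neg_log_posterior).x

# Closed-form minimizer of  sum (y - b x)^2 + alpha b^2  with alpha = sigma^2 * lam
alpha = sigma**2 * lam
beta_ridge = np.sum(x * y) / (np.sum(x**2) + alpha)

print(beta_map, beta_ridge)   # agree up to optimizer tolerance
```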
2,309
Why is the L2 regularization equivalent to Gaussian prior?
First notice that the median minimizes the L1 norm (see here or here for learning more on L1 and L2) $$ \DeclareMathOperator*{\argmin}{arg\,min} \text{median}(x) = \argmin_s \sum_i |x_i - s|^1 $$ while the mean minimizes L2 $$ \text{mean}(x) = \argmin_s \sum_i |x_i - s|^2 $$ Now recall that the MLE of the Normal distribution's $\mu$ parameter is the sample mean, while the MLE of the Laplace distribution's $\mu$ parameter is the median. So using a Normal distribution is equivalent to L2-norm optimization, and using a Laplace distribution to L1 optimization. In practice you can think of it this way: the median is less sensitive to outliers than the mean, and likewise, using the fatter-tailed Laplace distribution as a prior makes your model less prone to outliers than using a Normal distribution. Hurley, W. J. (2009). An Inductive Approach to Calculate the MLE for the Double Exponential Distribution. Journal of Modern Applied Statistical Methods, 8(2), Article 25.
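A tiny numerical check of the first claim (my addition, with a made-up sample that includes an outlier):

```python
# Sketch: the median minimizes sum |x_i - s|, the mean minimizes sum (x_i - s)^2.
import numpy as np
from scipy.optimize import minimize_scalar

x = np.array([1.0, 2.0, 2.5, 3.0, 50.0])   # note the outlier

s_l1 = minimize_scalar(lambda s: np.sum(np.abs(x - s))).x
s_l2 = minimize_scalar(lambda s: np.sum((x - s) ** 2)).x

print(s_l1, np.median(x))   # both ~2.5: insensitive to the outlier
print(s_l2, np.mean(x))     # both 11.7: dragged up by the outlier
```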
2,310
Why is the L2 regularization equivalent to Gaussian prior?
For a linear model with a multivariate normal prior and a multivariate normal likelihood, you end up with a multivariate normal posterior distribution in which the mean of the posterior (and the maximum a posteriori model) is exactly what you would obtain using Tikhonov-regularized ($L_{2}$-regularized) least squares with an appropriate regularization parameter. Note that there is a more fundamental difference in that the Bayesian posterior is a probability distribution, while the Tikhonov-regularized least squares solution is a specific point estimate. This is discussed in many textbooks on Bayesian methods for inverse problems; see, for example: http://www.amazon.com/Inverse-Problem-Methods-Parameter-Estimation/dp/0898715725/ http://www.amazon.com/Parameter-Estimation-Inverse-Problems-Second/dp/0123850487/ Similarly, if you have a Laplacian prior and a multivariate normal likelihood, then the maximum of the posterior distribution occurs at a point that you could get by solving an $L_{1}$-regularized least squares problem.
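A minimal sketch of this correspondence (not from the answer), assuming $y = X\beta + \varepsilon$ with $\varepsilon \sim \mathcal{N}(0, \sigma^2 I)$ and the prior $\beta \sim \mathcal{N}(0, \tau^2 I)$: the conjugate posterior mean coincides with the Tikhonov solution whose regularization parameter is $\sigma^2/\tau^2$. Dimensions and noise levels are illustrative.

```python
# Sketch: Gaussian posterior mean == Tikhonov (ridge) solution with reg = sigma^2/tau^2.
import numpy as np

rng = np.random.default_rng(2)
n, p = 100, 5
sigma, tau = 0.3, 1.5
X = rng.normal(size=(n, p))
beta_true = rng.normal(size=p)
y = X @ beta_true + sigma * rng.normal(size=n)

# Bayesian posterior mean (conjugate Gaussian linear model)
A = X.T @ X / sigma**2 + np.eye(p) / tau**2        # posterior precision
beta_post_mean = np.linalg.solve(A, X.T @ y / sigma**2)

# Tikhonov / ridge point estimate with the matching regularization parameter
reg = sigma**2 / tau**2
beta_ridge = np.linalg.solve(X.T @ X + reg * np.eye(p), X.T @ y)

print(np.allclose(beta_post_mean, beta_ridge))     # True
```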
2,311
Why is the L2 regularization equivalent to Gaussian prior?
For a regression problem with $k$ variables (without intercept) you do OLS as $$\min_{\beta} (y - X \beta)' (y - X \beta)$$ In regularized regression with an $L^p$ penalty you do $$\min_{\beta} (y - X \beta)' (y - X \beta) + \lambda \sum_{i=1}^k |\beta_i|^p $$ We can equivalently do (note the sign changes) $$\max_{\beta} -(y - X \beta)' (y - X \beta) - \lambda \sum_{i=1}^k |\beta_i|^p $$ This directly relates to the Bayesian principle $$\text{posterior} \propto \text{likelihood} \times \text{prior}$$ or equivalently (under regularity conditions) $$\log(\text{posterior}) = \log(\text{likelihood}) + \log(\text{prior}) + \text{const}$$ where the penalty term $-\lambda \sum_{i=1}^k |\beta_i|^p$ plays the role of the log prior. Now it is not hard to see which exponential-family distribution corresponds to which penalty type.
2,312
Why is the L2 regularization equivalent to Gaussian prior?
To put the equivalence more precisely: optimizing model weights to minimize a squared-error loss function with L2 regularization is equivalent to finding the weights that are most likely under a posterior distribution evaluated using Bayes rule, with a zero-mean independent Gaussian prior on the weights. Proof: The loss function as described above would be given by $$ L = \underbrace{\Big[ \sum_{n=1}^{N} (y^{(n)} - f_{\mathbf{w}}(\mathbf{x}^{(n)}))^{2} \Big] }_{Original \; loss \; function} + \underbrace{\lambda \sum_{i=1}^{K} w_{i}^{2}}_{L_{2} \; loss} $$ Note that the density of a multivariate Gaussian is $$ \mathcal{N}(\mathbf{x}; \mathbf{\mu}, \Sigma) = \frac{1}{(2 \pi)^{D/2}|\Sigma|^{1/2}} \exp\Big(-\frac{1}{2} (\mathbf{x} -\mathbf{\mu})^{\top} \Sigma^{-1} (\mathbf{x} -\mathbf{\mu})\Big) $$ Using Bayes rule, we have that $$ \begin{split} p(\mathbf{w}|\mathcal{D}) &= \frac{p(\mathcal{D}|\mathbf{w}) \; p(\mathbf{w})}{p(\mathcal{D})}\newline &\propto p(\mathcal{D}|\mathbf{w}) \; p(\mathbf{w})\newline &\propto \Big[ \prod_{n}^{N} \mathcal{N}(y^{(n)}; f_{\mathbf{w}}(\mathbf{x}^{(n)}), \sigma_{y}^{2})\Big] \; \mathcal{N}(\mathbf{w}; \mathbf{0}, \sigma_{\mathbf{w}}^{2} \mathbb{I})\newline &\propto \prod_{n}^{N} \mathcal{N}(y^{(n)};f_{\mathbf{w}}(\mathbf{x}^{(n)}) , \sigma_{y}^{2}) \prod_{i=1}^{K} \mathcal{N}(w_{i}; \, 0, \, \sigma_{\mathbf{w}}^{2}) \newline \end{split} $$ where we are able to split the multi-dimensional Gaussian into a product because the covariance is a multiple of the identity matrix. Taking the negative log probability: $$ \begin{split} -\log \big[p(\mathbf{w}|\mathcal{D}) \big] &= -\sum_{n=1}^{N} \log \big[\mathcal{N}(y^{(n)}; f_{\mathbf{w}}(\mathbf{x}^{(n)}), \sigma_{y}^{2}) \big] - \sum_{i=1}^{K} \log \big[ \mathcal{N}(w_{i}; \, 0, \, \sigma_{\mathbf{w}}^{2}) \big] + const. \newline &= \frac{1}{2\sigma_{y}^{2}} \sum_{n=1}^{N} \big(y^{(n)} - f_{\mathbf{w}}(\mathbf{x}^{(n)})\big)^{2} + \frac{1}{2\sigma_{\mathbf{w}}^{2}} \sum_{i=1}^{K} w_{i}^{2} + const. \newline \end{split} $$ We can of course drop the constant, and multiply by any positive amount without fundamentally affecting the loss function (the constant does nothing, and multiplication effectively rescales the learning rate; neither affects the location of the minima). In particular, multiplying by $2\sigma_{y}^{2}$ recovers the loss above with $\lambda = \sigma_{y}^{2}/\sigma_{\mathbf{w}}^{2}$. So we can see that the negative log probability of the posterior distribution is an equivalent loss function to the L2-regularized squared-error loss function. This equivalence is general and holds for any parameterized function of weights - not just linear regression as seems to be implied above.
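The sketch below (my addition) checks the conclusion numerically for a concrete case: with $\lambda = \sigma_y^2/\sigma_{\mathbf{w}}^2$, the two objectives evaluated at arbitrary weight vectors differ only by the positive scale $1/(2\sigma_y^2)$ (the additive constant having been dropped in both), so they share the same minimizers. A linear $f_{\mathbf{w}}$ and all constants are assumptions for the example; the argument itself does not require linearity.

```python
# Sketch: L2-regularized squared-error loss vs negative log posterior (constants dropped).
import numpy as np

rng = np.random.default_rng(3)
N, K = 50, 4
sigma_y, sigma_w = 0.4, 2.0
lam = sigma_y**2 / sigma_w**2

X = rng.normal(size=(N, K))
y = X @ rng.normal(size=K) + sigma_y * rng.normal(size=N)

def f(w, X):                        # the parameterized model f_w(x); linear here
    return X @ w

def l2_loss(w):
    return np.sum((y - f(w, X)) ** 2) + lam * np.sum(w**2)

def neg_log_posterior(w):           # with the additive constant dropped, as in the answer
    return (np.sum((y - f(w, X)) ** 2) / (2 * sigma_y**2)
            + np.sum(w**2) / (2 * sigma_w**2))

# Evaluate both objectives at several random weight vectors: with constants dropped
# they are proportional, with scale 1 / (2 sigma_y^2), hence the same minimizers.
W = rng.normal(size=(6, K))
L = np.array([l2_loss(w) for w in W])
P = np.array([neg_log_posterior(w) for w in W])
print(np.allclose(P, L / (2 * sigma_y**2)))   # True
```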
2,313
Why is the L2 regularization equivalent to Gaussian prior?
There are two characteristics of Bayesian modeling that need to be emphasized when discussing the equivalence of certain penalized maximum likelihood estimation and Bayesian procedures. In the Bayesian framework, the prior is selected based on the specifics of the problem and is not motivated by computational expediency. Hence Bayesians use a variety of priors, including the now-popular horseshoe prior for sparse-predictor problems, and don't need to rely so much on priors that are equivalent to L1 or L2 penalties. With a full Bayesian approach you have access to all inferential procedures when you're done. For example, you can quantify evidence for large regression coefficients and you can get credible intervals on regression coefficients and overall predicted values. In the frequentist framework, once you choose penalization you lose all of the inferential machinery.
2,314
Why should I be Bayesian when my model is wrong?
I consider a Bayesian approach when my data set is not everything that is known about the subject, and I want to somehow incorporate that exogenous knowledge into my forecast. For instance, my client wants a forecast of the loan defaults in their portfolio. They have 100 loans with a few years of quarterly historical data. There were a few occurrences of delinquency (late payment) and just a couple of defaults. If I try to estimate a survival model on this data set, there is very little data to estimate it from and too much uncertainty to forecast. On the other hand, the portfolio managers are experienced people; some of them may have spent decades managing relationships with borrowers. They have ideas about what the default rates should be like, so they're capable of coming up with reasonable priors. Note: not priors that have nice mathematical properties and look intellectually appealing to me. I'll chat with them and extract their experience and knowledge in the form of those priors. The Bayesian framework then provides me with the machinery to marry the exogenous knowledge in the form of priors with the data, and to obtain a posterior that is superior, in my opinion, to both pure qualitative judgment and a pure data-driven forecast. This is not a philosophy, and I'm not a Bayesian. I'm just using Bayesian tools to consistently incorporate expert knowledge into data-driven estimation.
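The answer does not commit to a specific model, but as a minimal sketch of the idea, one could encode the managers' beliefs about the default rate as a Beta prior and update it with the observed defaults; the prior parameters, portfolio size and default count below are entirely hypothetical.

```python
# Sketch: combining an expert-elicited prior with scarce default data (Beta-Binomial).
from scipy import stats

# Experts believe the default rate is around 2% and rarely above ~5%:
# a Beta(2, 98) prior (mean 2%) roughly encodes that belief.
a_prior, b_prior = 2.0, 98.0

# Observed data: 100 loans, 2 defaults over the observation window.
n_loans, n_defaults = 100, 2

# Conjugate update: posterior is Beta(a + defaults, b + non-defaults).
a_post = a_prior + n_defaults
b_post = b_prior + (n_loans - n_defaults)
posterior = stats.beta(a_post, b_post)

print("posterior mean default rate:", posterior.mean())
print("95% credible interval:", posterior.interval(0.95))
print("naive data-only estimate:", n_defaults / n_loans)
```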
2,315
Why should I be Bayesian when my model is wrong?
A very interesting question... that may not have an answer (but that does not make it less interesting!). A few thoughts (and many links to my blog entries!) about that meme that all models are wrong: While the hypothetical model is indeed almost invariably and irremediably wrong, it still makes sense to act in an efficient or coherent manner with respect to this model if this is the best one can do. The resulting inference produces an evaluation of the formal model that is the "closest" to the actual data-generating model (if any). There exist Bayesian approaches that can do without the model, recent examples being the papers by Bissiri et al. (with my comments) and by Watson and Holmes (which I discussed with Judith Rousseau). In a connected way, there exists a whole branch of Bayesian statistics dealing with M-open inference. And yet another direction I like a lot is the SafeBayes approach of Peter Grünwald, which takes model misspecification into account by replacing the likelihood with a down-graded version expressed as a power of the original likelihood. The very recent Read Paper by Gelman and Hennig addresses this issue, albeit in a circumvoluted manner (and I added some comments on my blog). I presume you could gather material for a discussion from these entries about your question. In a sense, Bayesians should be the least concerned among statisticians and modellers about this aspect, since the sampling model is to be taken as one of several prior assumptions and the outcome is conditional or relative to all those prior assumptions.
2,316
Why should I be Bayesian when my model is wrong?
I only see this today but still I think I should chip in given that I'm kind of an expert and that at least two answers (nr 3 and 20 (thanks for referring to my work Xi'an!)) mention my work on SafeBayes - in particular G. and van Ommen, "Inconsistency of Bayesian Inference for Misspecified Linear Models, and a Proposal for Repairing It" (2014). And I'd also like to add something to comment 2, which says: (an advantage of Bayes under misspecification is ...) "Well, Bayesian approaches regularize. That is something, to help against overfitting - whether or not your model is misspecified. Of course, that just leads to the related question about arguments for Bayesian inference against regularized classical approaches (lasso etc.)" This is true, but it is crucial to add that Bayesian approaches may not regularize enough if the model is wrong. This is the main point of the work with Van Ommen - we see there that standard Bayes overfits rather terribly in some regression contexts with wrong-but-very-useful models. Not as bad as MLE, but still way too much to be useful. There's a whole strand of work in (frequentist and game-theoretic) theoretical machine learning where they use methods similar to Bayes, but with a much smaller 'learning rate' - making the prior more and the data less important, thus regularizing more. These methods are designed to work well in worst-case situations (misspecification and even worse, adversarial data) - the SafeBayes approach is designed to 'learn the optimal learning rate' from the data itself - and this optimal learning rate, i.e. the optimal amount of regularization, in effect depends on geometrical aspects of the model and the underlying distribution (i.e. whether the model is convex or not). Relatedly, there is a folk theorem (mentioned by several above) saying that Bayes will have the posterior concentrate on the distribution closest in KL divergence to the 'truth'. But this only holds under very stringent conditions - MUCH more stringent than the conditions needed for convergence in the well-specified case. If you're dealing with standard low-dimensional parametric models and data are i.i.d. according to some distribution (not in the model), then the posterior will indeed concentrate around the point in the model that is closest to the truth in KL divergence. Now if you're dealing with large nonparametric models and the model is correct, then (essentially) your posterior will still concentrate around the true distribution given enough data, as long as your prior puts sufficient mass in small KL balls around the true distribution. This is the weak condition that is needed for convergence in the nonparametric case if the model is correct. But if your model is nonparametric yet incorrect, then the posterior may simply not concentrate around the closest KL point, even if your prior puts mass close to 1 (!) there - your posterior may remain confused forever, concentrating on ever-different distributions as time proceeds but never around the best one. In my papers I have several examples of this happening. The papers that do show convergence under misspecification (e.g. Kleijn and van der Vaart) require a lot of additional conditions, e.g. the model must be convex, or the prior must obey certain (complicated) properties. This is what I mean by 'stringent' conditions. In practice we're often dealing with parametric yet very high-dimensional models (think Bayesian ridge regression etc.).
Then if the model is wrong, eventually your posterior will concentrate on the best KL-distribution in the model, but a mini-version of the nonparametric inconsistency still holds: it may take orders of magnitude more data before convergence happens - again, my paper with Van Ommen gives examples. The SafeBayes approach modifies standard Bayes in a way that guarantees convergence in nonparametric models under (essentially) the same conditions as in the well-specified case, i.e. sufficient prior mass near the KL-optimal distribution in the model (G. and Mehta, 2014). Then there's the question of whether Bayes even has a justification under misspecification. IMHO (and as also mentioned by several people above), the standard justifications of Bayes (admissibility, Savage, De Finetti, Cox etc.) do not hold here (because if you realize your model is misspecified, your probabilities do not represent your true beliefs!). HOWEVER, many Bayes methods can also be interpreted as 'minimum description length (MDL) methods' - MDL is an information-theoretic approach which equates 'learning from data' with 'trying to compress the data as much as possible'. This data compression interpretation of (some) Bayesian methods remains valid under misspecification. So there is still some underlying interpretation that holds up under misspecification - nevertheless, there are problems, as my paper with van Ommen (and the confidence interval/credible set problem mentioned in the original post) show. And then a final remark about the original post: you mention the 'admissibility' justification of Bayes (going back to Wald's complete class theorem of the 1940s/50s). Whether or not this is truly a justification of Bayes really depends very much on one's precise definition of 'Bayesian inference' (which differs from researcher to researcher...). The reason is that these admissibility results allow the possibility that one uses a prior that depends on aspects of the problem such as the sample size, the loss function of interest, etc. Most 'real' Bayesians would not want to change their prior if the amount of data they have to process changes, or if the loss function of interest is suddenly changed. For example, with strictly convex loss functions, minimax estimators are also admissible - though not usually thought of as Bayesian! The reason is that for each fixed sample size, they are equivalent to Bayes with a particular prior, but the prior is different for each sample size. Hope this is useful!
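To make the 'learning rate' idea concrete, here is a toy sketch (my addition, not SafeBayes itself, which learns the rate from the data): in a conjugate Normal-mean model, raising the likelihood to a power $\eta$ yields another Gaussian posterior in closed form, and $\eta < 1$ shrinks the effective sample size so the prior regularizes more. All numbers are illustrative.

```python
# Sketch: tempered (generalized) posterior for a Normal mean with a conjugate prior.
# x_1..x_n ~ N(mu, sigma^2), prior mu ~ N(mu0, tau^2), likelihood raised to power eta.
import numpy as np

rng = np.random.default_rng(4)
sigma, tau, mu0 = 1.0, 1.0, 0.0
x = rng.normal(loc=2.0, scale=sigma, size=20)

def tempered_posterior(eta):
    precision = 1 / tau**2 + eta * len(x) / sigma**2
    mean = (mu0 / tau**2 + eta * x.sum() / sigma**2) / precision
    return mean, np.sqrt(1 / precision)      # posterior mean and std of mu

for eta in (1.0, 0.5, 0.1):
    m, s = tempered_posterior(eta)
    print(f"eta={eta}: posterior mean {m:.2f}, posterior std {s:.2f}")
    # smaller eta: mean shrinks toward the prior mean mu0, posterior widens
```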
Why should I be Bayesian when my model is wrong?
I only see this today but still I think I should chip in given that I'm kind of an expert and that at least two answers (nr 3 and 20 (thanks for referring to my work Xi'an!)) mention my work on SafeBa
Why should I be Bayesian when my model is wrong? I only see this today but still I think I should chip in given that I'm kind of an expert and that at least two answers (nr 3 and 20 (thanks for referring to my work Xi'an!)) mention my work on SafeBayes - in particular G. and van Ommen, "Inconsistency of Bayesian Inference for Misspecified Linear Models, and a Proposal for Repairing It" (2014). And I'd also like to add something to comment 2: 2 says: (an advantage of Bayes under misspecification is ...) "Well, Bayesian approaches regularize. That is something, to help against overfitting - whether or not your model is misspecified. Of course, that just leads to the related question about arguments for Bayesian inference against regularized classical approaches (lasso etc)" This is true, but it is crucial to add that Bayesian approaches may not regularize enough if the model is wrong. This is the main point of the work with Van Ommen - we see there that standard Bayes overfits rather terribly in some regression context with wrong-but-very-useful-models. Not as bad as MLE, but still way too much to be useful. There's a whole strand of work in (frequentist and game-theoretic) theoretical machine learning where they use methods similar to Bayes, but with a much smaller 'learning rate' - making the prior more and the data less important, thus regularizing more. These methods are designed to work well in worst-case situations (misspecification and even worse, adversarial data) - the SafeBayes approach is designed to 'learn the optimal learning rate' from the data itself - and this optimal learining rate, i.e. the optimal amount of regularization, in effect depends on geometrical aspects of model and underlying distribution (i.e. is the model convex or not). Relatedly, there is a folk theorem (mentioned by several above) saying that Bayes will have the posterior concentrate on the distribution closest in KL divergence to the 'truth'. But this only holds under very stringent conditions - MUCH more stringent than the conditions needed for convergence in the well-specified case. If you're dealing with standard low dimensional parametric models and data are i.i.d. according to some distribution (not in the model) then the posterior will indeed concentrate around the point in the model that is closest to the truth in KL divergence. Now if you're dealing with large nonparametric models and the model is correct, then (essentially) your posterior will still concentrate around the true distribution given enough data, as long as your prior puts sufficient mass in small KL balls around the true distribution. This is the weak condition that is needed for convergence in the nonparametric case if the model is correct. But if your model is nonparametric yet incorrect, then the posterior may simply not concentrate around the closest KL point, even if your prior puts mass close to 1 (!) there - your posterior may remain confused for ever, concentrating on ever-different distributions as time proceeds but never around the best one. In my papers I have several examples of this happening. THe papers that do show convergence under misspecification (e.g. Kleijn and van der Vaart) require a lot of additional conditions, e.g. the model must be convex, or the prior must obey certain (complicated) properties. This is what I mean by 'stringent' conditions. In practice we're often dealing with parametric yet very high dimensional models (think Bayesian ridge regression etc.). 
Then if the model is wrong, eventually your posterior will concentrate on the best KL-distribution in the model but a mini-version of the nonparametric inconsistency still holds: it may take orders of magnitude more data before convergence happens - again, my paper with Van Ommen gives examples. The SafeBayes approach modifies standard bayes in a way that guarantees convergence in nonparametric models under (essentially) the same conditions as in the well-specified case, i.e. sufficient prior mass near the KL-optimal distribution in the model (G. and Mehta, 2014). Then there's the question of whether Bayes even has justification under misspecification. IMHO (and as also mentioned by several people above), the standard justifications of Bayes (admissibility, Savage, De Finetti, Cox etc) do not hold here (because if you realize your model is misspecified, your probabilities do not represent your true beliefs!). HOWEVER many Bayes methods can also be interpreted as 'minimum description length (MDL) methods' - MDL is an information-theoretic method which equates 'learning from data' with 'trying to compress the data as much as possible'. This data compression interpretation of (some) Bayesian methods remains valid under misspecification. So there is still some underlying interpretation that holds up under misspecification - nevertheless, there are problems, as my paper with van Ommen (and the confidence interval/credible set problem mentioned in the original post) show. And then a final remark about the original post: you mention the 'admissibility' justification of Bayes (going back to Wald's complete class thm of the 1940s/50s). Whether or not this is truly a justification of Bayes really depends very much on one's precise definition of 'Bayesian inference' (which differs from researcher to researcher...). The reason is that these admissibility results allow the possibility that one uses a prior that depends on aspects of the problem such as sample size, and loss function of interest etc. Most 'real' Bayesians would not want to change their prior if the amount of data they have to process changes, or if the loss function of interest is suddenly changed. For example, with strictly convex loss functions, minimax estimators are also admissible - though not usually thought of as Bayesian! The reason is that for each fixed sample size, they are equivalent to Bayes with a particular prior, but the prior is different for each sample size. Hope this is useful!
2,317
Why should I be Bayesian when my model is wrong?
Edits: Added reference to this paper in the body, as requested by the OP. I am giving an answer as a naive empirical Bayesian here.

First, the posterior distribution allows you to do computations that you simply cannot do with a straightforward MLE. The simplest case is that today's posterior is tomorrow's prior. Bayesian inference naturally allows for sequential updates, or, more generally, online or delayed combination of multiple sources of information (incorporating a prior is just one textbook instance of such combination). Bayesian Decision Theory with a nontrivial loss function is another example. I would not know what to do otherwise.

Second, with this answer I will try to argue that the mantra that quantification of uncertainty is generally better than no uncertainty is effectively an empirical question, since theorems (as you mentioned, and as far as I know) provide no guarantees.

Optimization as a toy model of scientific endeavor

A domain that I feel fully captures the complexity of the problem is a very practical, no-nonsense one, the optimization of a black-box function $f: \mathcal{X} \subset \mathbb{R}^D \rightarrow \mathbb{R}$. We assume that we can sequentially query a point $x \in \mathcal{X}$ and get a possibly noisy observation $y = f(x) + \varepsilon$, with $\varepsilon \sim \mathcal{N}(0,\sigma^2)$. Our goal is to get as close as possible to $x^* = \arg\min_x f(x)$ with the minimum number of function evaluations.

A particularly effective way to proceed, as you may expect, is to build a predictive model of what would happen if I query any $x^\prime \in \mathcal{X}$, and use this information to decide what to do next (either locally or globally). See Rios and Sahinidis (2013) for a review of derivative-free global optimization methods. When the model is complex enough, this is called a meta-model or surrogate-function or response surface approach. Crucially, the model could be a point estimate of $f$ (e.g., the fit of a radial basis function network to our observations), or we could be Bayesian and somehow get a full posterior distribution over $f$ (e.g., via a Gaussian process).

Bayesian optimization uses the posterior over $f$ (in particular, the joint conditional posterior mean and variance at any point) to guide the search of the (global) optimum via some principled heuristic. The classical choice is to maximize the expected improvement over the current best point, but there are even fancier methods, like minimizing the expected entropy over the location of the minimum (see also here).

The empirical result here is that having access to a posterior, even if partially misspecified, generally produces better results than other methods. (There are caveats and situations in which Bayesian optimization is no better than random search, such as in high dimensions.) In this paper, we perform an empirical evaluation of a novel BO method vs. other optimization algorithms, checking whether using BO is convenient in practice, with promising results.

Since you asked -- this has a much higher computational cost than other non-Bayesian methods, and you were wondering why we should be Bayesian. The assumption here is that the cost involved in evaluating the true $f$ (e.g., in a real scenario, a complex engineering or machine learning experiment) is much larger than the computational cost for the Bayesian analysis, so being Bayesian pays off.

What can we learn from this example? First, why does Bayesian optimization work at all?
I guess that the model is wrong, but not that wrong, and as usual wrongness depends on what your model is for. For example, the exact shape of $f$ is not relevant for optimization, since we could be optimizing any monotonic transformation thereof. I guess nature is full of such invariances. So, the search we are doing might not be optimal (i.e., we are throwing away good information), but still better than with no uncertainty information.

Second, our example highlights that it is possible that the usefulness of being Bayesian or not depends on the context, e.g. the relative cost and amount of available (computational) resources. (Of course, if you are a hardcore Bayesian you believe that every computation is Bayesian inference under some prior and/or approximation.)

Finally, the big question is -- why are the models we use not-so-bad after all, in the sense that the posteriors are still useful and not statistical garbage? If we take the No Free Lunch theorem, apparently we shouldn't be able to say much, but luckily we do not live in a world of completely random (or adversarially chosen) functions. More generally, since you put the "philosophical" tag... I guess we are entering the realm of the problem of induction, or the unreasonable effectiveness of mathematics in the statistical sciences (specifically, of our mathematical intuition & ability to specify models that work in practice) -- in the sense that from a purely a priori standpoint there is no reason why our guesses should be good or have any guarantee (and for sure you can build mathematical counterexamples in which things go awry), but they turn out to work well in practice.
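For readers who want to see the moving parts, here is a bare-bones Bayesian-optimization loop in base R: a hand-rolled GP with a squared-exponential kernel plus the classic expected-improvement rule. The objective, kernel length-scale, noise level and grid are all made up, hyperparameters are fixed rather than learned, and this is certainly not the method of the paper cited above - it is only meant to illustrate the loop.

set.seed(42)
f <- function(x) sin(3 * x) + 0.5 * x            # "black-box" objective to minimise
sigma_noise <- 0.05                              # observation noise sd (assumed known)

kern <- function(a, b, ell = 0.3, s2 = 1)        # squared-exponential kernel
  s2 * exp(-0.5 * outer(a, b, "-")^2 / ell^2)

gp_posterior <- function(X, y, Xstar) {          # GP predictive mean and sd at Xstar
  K     <- kern(X, X) + diag(sigma_noise^2, length(X))
  Ks    <- kern(Xstar, X)
  L     <- chol(K)                               # K = t(L) %*% L
  alpha <- backsolve(L, forwardsolve(t(L), y))   # K^{-1} y
  v     <- forwardsolve(t(L), t(Ks))
  s2    <- pmax(diag(kern(Xstar, Xstar)) - colSums(v^2), 1e-12)
  list(mean = drop(Ks %*% alpha), sd = sqrt(s2))
}

expected_improvement <- function(mu, sd, best) { # EI for minimisation
  z <- (best - mu) / sd
  (best - mu) * pnorm(z) + sd * dnorm(z)
}

grid <- seq(0, 3, length.out = 200)
X <- c(0.5, 2.5)                                 # two initial evaluations
y <- f(X) + rnorm(2, 0, sigma_noise)

for (iter in 1:8) {                              # the Bayesian optimization loop
  post <- gp_posterior(X, y, grid)
  ei   <- expected_improvement(post$mean, post$sd, min(y))
  xnew <- grid[which.max(ei)]
  X <- c(X, xnew)
  y <- c(y, f(xnew) + rnorm(1, 0, sigma_noise))
}
cat("best x found:", X[which.min(y)], " observed f:", min(y), "\n")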
2,318
Why should I be Bayesian when my model is wrong?
Here are a few other ways of justifying Bayesian inference in misspecified models.

You can construct a confidence interval on the posterior mean, using the sandwich formula (in the same way that you would do with the MLE). Thus, even though the credible sets don't have coverage, you can still produce valid confidence intervals on point estimators, if that's what you're interested in.

You can rescale the posterior distribution to ensure that credible sets have coverage, which is the approach taken in: Müller, Ulrich K. "Risk of Bayesian inference in misspecified models, and the sandwich covariance matrix." Econometrica 81.5 (2013): 1805-1849.

There's a non-asymptotic justification for Bayes rule: omitting the technical conditions, if the prior is $p(\theta)$, and the log-likelihood is $\ell_n(\theta)$, then the posterior is the distribution that minimizes $-\int \ell_n(\theta) d\nu(\theta) + \int \log\!\Big(\frac{\nu(\theta)}{p(\theta)}\Big)d\nu(\theta)$ over all distributions $\nu(\theta)$. The first term is like an expected utility: you want to put mass on parameters that yield a high likelihood. The second term regularizes: you want a small KL divergence to the prior. This formula explicitly says what the posterior is optimizing. It is used a lot in the context of quasi-likelihood, where people replace the log-likelihood by another utility function.
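To make that last point tangible, here is a small grid check in base R (all numbers invented for illustration): among a few candidate distributions $\nu$ on a discretised parameter grid, the usual posterior attains the smallest value of the objective above.

set.seed(3)
x      <- rnorm(20, 1, 1)
theta  <- seq(-3, 5, length.out = 161)
prior  <- dnorm(theta, 0, 2); prior <- prior / sum(prior)          # discretised prior
loglik <- sapply(theta, function(th) sum(dnorm(x, th, 1, log = TRUE)))

# -E_nu[loglik] + KL(nu || prior), for a distribution nu on the grid
objective <- function(nu) -sum(nu * loglik) + sum(nu * log(nu / prior))

post <- prior * exp(loglik - max(loglik)); post <- post / sum(post)  # the usual posterior

competitors <- list(
  posterior = post,
  prior     = prior,
  shifted   = { p <- dnorm(theta, 2, 0.5); p / sum(p) }
)
sapply(competitors, objective)   # the posterior attains the smallest value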
2,319
Why should I be Bayesian when my model is wrong?
There is the usual bias-variance tradeoff. Bayesian inference assuming the M-closed case [1,2] has a smaller variance [3], but in the case of model misspecification the bias grows faster [4]. It is also possible to do Bayesian inference assuming the M-open case [1,2], which has a higher variance [3], but in the case of model misspecification the bias is smaller [4]. Discussions of this bias-variance tradeoff between Bayesian M-closed and M-open cases also appear in some of the references listed below, but there is clearly a need for more.

[1] Bernardo and Smith (1994). Bayesian Theory. John Wiley & Sons.
[2] Vehtari and Ojanen (2012). A survey of Bayesian predictive methods for model assessment, selection and comparison. Statistics Surveys, 6:142-228. http://dx.doi.org/10.1214/12-SS102
[3] Juho Piironen and Aki Vehtari (2017). Comparison of Bayesian predictive methods for model selection. Statistics and Computing, 27(3):711-735. http://dx.doi.org/10.1007/s11222-016-9649-y
[4] Yao, Vehtari, Simpson, and Gelman (2017). Using stacking to average Bayesian predictive distributions. arXiv preprint arXiv:1704.02030. arxiv.org/abs/1704.02030
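For a flavour of the M-open attitude, here is a deliberately simplified stacking sketch in base R. It uses plug-in fits instead of full posteriors, so it is only a caricature of the method in [4]; the data-generating process and both candidate models are invented, and both models are wrong. The combination weight is chosen to maximise the leave-one-out log score of the mixed predictive density.

set.seed(7)
y <- rt(100, df = 3) * 2 + 1                      # heavy-tailed "truth"

dens_normal  <- function(train, test) dnorm(test, mean(train), sd(train))
dens_laplace <- function(train, test) {           # Laplace fitted by median / mean abs deviation
  m <- median(train); b <- mean(abs(train - m))
  exp(-abs(test - m) / b) / (2 * b)
}

# Leave-one-out predictive densities of each model at each held-out point
loo <- sapply(seq_along(y), function(i)
  c(normal  = dens_normal(y[-i], y[i]),
    laplace = dens_laplace(y[-i], y[i])))

# Stacking: pick the weight on the normal model that maximises the LOO log score
stack_score <- function(w) sum(log(w * loo["normal", ] + (1 - w) * loo["laplace", ]))
w_stack <- optimize(stack_score, c(0, 1), maximum = TRUE)$maximum
cat("stacking weight on the normal model:", round(w_stack, 2), "\n")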
2,320
Why should I be Bayesian when my model is wrong?
The MLE is still an estimator for a parameter in a model you specify and assume to be correct. The regression coefficients in a frequentist OLS can be estimated with the MLE and all the properties you want to attach to it (unbiased, a specific asymptotic variance) still assume your very specific linear model is correct. I'm going to take this a step further and say that every time you want to ascribe meaning and properties to an estimator you have to assume a model. Even when you take a simple sample mean, you are assuming the data is exchangeable and oftentimes IID. Now, Bayesian estimators have many desirable properties that an MLE might not have. For example, partial pooling, regularization, and interpretability of a posterior which make it desirable in many situations.
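To make the partial-pooling/regularization point concrete, here is a minimal conjugate sketch in base R (all numbers invented): with a normal model and known sd, the posterior mean shrinks the MLE (the sample mean) towards the prior mean, and the shrinkage fades as the sample grows.

set.seed(11)
true_mu <- 2; sigma <- 1          # data-generating values
prior_mu <- 0; prior_sd <- 1      # N(0, 1) prior on the mean

for (n in c(2, 10, 100)) {
  x <- rnorm(n, true_mu, sigma)
  mle <- mean(x)
  post_prec <- 1 / prior_sd^2 + n / sigma^2
  post_mean <- (prior_mu / prior_sd^2 + sum(x) / sigma^2) / post_prec
  cat(sprintf("n = %3d   MLE = %5.2f   posterior mean = %5.2f\n", n, mle, post_mean))
}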
2,321
Why should I be Bayesian when my model is wrong?
assume that the real model of the data $p_{true}(X)$ differs from $p(X|\theta)$ for all values of $\theta$

The Bayesian interpretation of this assumption is that there is an additional random variable $\phi$ and a value $\phi_0$ in its range such that $\int p(X|\theta,\phi=\phi_0) \mathrm{d}\theta =0$. Your prior knowledge says $p(\phi=\phi_0)\propto 1$ and $p(\phi\neq\phi_0)=0$. Then $p(\theta|X,\phi=\phi_0)=0$, which is not a proper probability distribution.

This case corresponds to a similar inference rule in logic where $A, \neg A \vdash \emptyset$, i.e. you can't infer anything from a contradiction. The result $p(\theta|X,\phi=\phi_0)=0$ is a way in which Bayesian probability theory tells you that your prior knowledge is not consistent with your data. If someone failed to get this result in their derivation of the posterior, it means that the formulation failed to encode all relevant prior knowledge. As for the appraisal of this situation, I hand over to Jaynes (2003, p. 41):

... it is a powerful analytical tool which can search out a set of propositions and detect a contradiction in them if one exists. The principle is that probabilities conditional on contradictory premises do not exist (the hypothesis space is reduced to the empty set). Therefore, put our robot to work; i.e. write a computer program to calculate probabilities $p(B|E)$ conditional on a set of propositions $E= (E_1,E_2,\dots,E_n)$. Even though no contradiction is apparent from inspection, if there is a contradiction hidden in $E$, the computer program will crash. We discovered this "empirically," and after some thought realized that it is not a reason for dismay, but rather a valuable diagnostic tool that warns us of unforeseen special cases in which our formulation of a problem can break down.

In other words, if your problem formulation is inaccurate - if your model is wrong - Bayesian statistics can help you find out that this is the case and can help you to find what aspect of the model is the source of the problem. In practice, it may not be entirely clear what knowledge is relevant and whether it should be included in the derivation. Various model checking techniques (Chapters 6 & 7 in Gelman et al., 2013, provide an overview) are then used to find out and to identify an inaccurate problem formulation.

Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., & Rubin, D. B. (2013). Bayesian data analysis, Third edition. Chapman & Hall/CRC.
Jaynes, E. T. (2003). Probability theory: The logic of science. Cambridge University Press.
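As a small illustration of the model-checking step mentioned at the end, here is a bare-bones posterior predictive check in base R, using the standard noninformative normal analysis; the data and the choice of test statistic are invented for the example. The check flags that a normal model cannot reproduce the skewness of exponential data.

set.seed(5)
x <- rexp(80)                                     # skewed data the normal model cannot capture
n <- length(x)

# Posterior draws under the usual noninformative analysis:
# sigma^2 ~ scaled inverse chi-square, mu | sigma^2 ~ normal
S <- 2000
sigma2 <- (n - 1) * var(x) / rchisq(S, df = n - 1)
mu     <- rnorm(S, mean(x), sqrt(sigma2 / n))

T_stat <- function(z) mean((z - mean(z))^3)       # a crude skewness statistic
T_rep  <- sapply(1:S, function(s) T_stat(rnorm(n, mu[s], sqrt(sigma2[s]))))

cat("posterior predictive p-value:", mean(T_rep >= T_stat(x)), "\n")
# A value near 0 or 1 signals that the model misses this feature of the data.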
2,322
Why should I be Bayesian when my model is wrong?
I recommend Gelman & Shalizi's Philosophy and the practice of Bayesian statistics. They have coherent, detailed and practical responses to these questions. We think most of this received view of Bayesian inference is wrong. Bayesian methods are no more inductive than any other mode of statistical inference. Bayesian data analysis is much better understood from a hypothetico-deductive perspective. Implicit in the best Bayesian practice is a stance that has much in common with the error-statistical approach of Mayo (1996), despite the latter’s frequentist orientation. Indeed, crucial parts of Bayesian data analysis, such as model checking, can be understood as ‘error probes’ in Mayo’s sense. We proceed by a combination of examining concrete cases of Bayesian data analysis in empirical social science research, and theoretical results on the consistency and convergence of Bayesian updating. Social-scientific data analysis is especially salient for our purposes because there is general agreement that, in this domain, all models in use are wrong – not merely falsifiable, but actually false. With enough data – and often only a fairly moderate amount – any analyst could reject any model now in use to any desired level of confidence. Model fitting is nonetheless a valuable activity, and indeed the crux of data analysis. To understand why this is so, we need to examine how models are built, fitted, used and checked, and the effects of misspecification on models. ... In our view, the account of the last paragraph [of the standard Bayesian view] is crucially mistaken. The data-analysis process – Bayesian or otherwise – does not end with calculating parameter estimates or posterior distributions. Rather, the model can then be checked, by comparing the implications of the fitted model to the empirical evidence. One asks questions such as whether simulations from the fitted model resemble the original data, whether the fitted model is consistent with other data not used in the fitting of the model, and whether variables that the model says are noise (‘error terms’) in fact display readily-detectable patterns. Discrepancies between the model and data can be used to learn about the ways in which the model is inadequate for the scientific purposes at hand, and thus to motivate expansions and changes to the model (Section 4.).
2,323
Why should I be Bayesian when my model is wrong?
I think you're describing an impact of model uncertainty - you worry that your inference about an unknown parameter $x$ in light of data $d$ is conditional upon a model, $m$, $$ p (x|d, m), $$ as well as the data. What if $m$ is an implausible model? If there exist alternative models, with the same unknown parameter $x$, then you can marginalize over model uncertainty with Bayesian model averaging, $$ p (x|d) = \sum_m p (x|d, m) p(m|d) $$ though this is a functional of the models considered and their priors. If, on the other hand, the definition of parameter $x$ is intrinsically tied to the model $m$, such that there are no alternatives, it's hardly surprising that inferences about $x$ are conditional on $m$.
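A toy grid version of the averaging formula above, in base R; the two candidate likelihoods and all numbers are invented purely for illustration. The marginal likelihoods give $p(m|d)$ under equal model priors, and the conditional posteriors are then mixed.

d     <- c(1.2, 0.7, 1.9, 1.4)                    # data
xgrid <- seq(-2, 4, length.out = 301)
prior_x <- dnorm(xgrid, 0, 2); prior_x <- prior_x / sum(prior_x)

lik <- list(
  m1 = function(x) sapply(x, function(xi) prod(dnorm(d, xi, 1))),    # model 1: sd = 1
  m2 = function(x) sapply(x, function(xi) prod(dnorm(d, xi, 0.5)))   # model 2: sd = 0.5
)

post_x_given_m <- lapply(lik, function(L) { w <- prior_x * L(xgrid); w / sum(w) })
marg <- sapply(lik, function(L) sum(prior_x * L(xgrid)))   # p(d | m), grid approximation
p_m  <- marg / sum(marg)                                   # p(m | d) with equal model priors

bma <- p_m["m1"] * post_x_given_m$m1 + p_m["m2"] * post_x_given_m$m2
c(p_m, bma_mean = sum(xgrid * bma))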
2,324
Why should I be Bayesian when my model is wrong?
How do you define what a "mis-specified" model is? Does this mean the model...
- makes "bad" predictions?
- is not of the form $p_{T}(x)$ for some "true model"?
- is missing a parameter?
- leads to "bad" conclusions?

If you think of the ways a given model could be mis-specified, you will essentially be extracting information on how to make a better model. Include that extra information in your model!

If you think about what a "model" is in the Bayesian framework, you can always make a model that cannot be mis-specified. One way to do this is by adding more parameters to your current model. By adding more parameters, you make your model more flexible and adaptable. Machine Learning methods make full use of this idea. This underlies things like "neural networks" and "regression trees". You do need to think about priors though (similar to regularising for ML).

For example, you have given the "linear model" as your example, so you have... $$\text {model 1: }x_i =\theta + \sigma e_i $$ where $e_i \sim N (0,1)$. Now suppose we add a new parameter for each observation.... $$\text {model 2: }x_i =\theta + \sigma \frac{e_i}{w_i} $$ where $e_i \sim N (0,1)$ as before. How does this change things? You could say "model 1 is mis-specified if model 2 is true". But model 2 is harder to estimate, as it has many more parameters. Also, if information about $\theta $ is what we care about, does it matter if model 1 is "wrong"? If you assume that $w_i\sim N (0,1) $ (like a "model 2a") then we basically have "Cauchy errors" instead of "normal errors" and the model expects outliers in the data. Hence, by adding parameters to your model, and choosing a prior for them, I have created a "more robust model". However, the model still expects symmetry in the error terms. By choosing a different prior, this could be accounted for as well...
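A quick numerical sketch of that last point in base R; sigma is fixed at 1 and the data are invented, just to show the effect of the heavier-tailed error model.

x     <- c(0.8, 1.1, 0.9, 1.3, 1.0, 12)           # the last point is a gross outlier
theta <- seq(-2, 14, length.out = 801)
prior <- dnorm(theta, 0, 10)                      # vague prior on theta

post <- function(loglik) {                        # grid posterior
  lp <- log(prior) + loglik
  w  <- exp(lp - max(lp)); w / sum(w)
}
ll_normal <- sapply(theta, function(th) sum(dnorm(x, th, 1, log = TRUE)))   # "model 1" errors
ll_cauchy <- sapply(theta, function(th) sum(dcauchy(x, th, 1, log = TRUE))) # "model 2a" errors

c(normal_posterior_mean = sum(theta * post(ll_normal)),
  cauchy_posterior_mean = sum(theta * post(ll_cauchy)))
# The Cauchy-error model largely ignores the outlier; the normal-error model is dragged towards it.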
2,325
An example: LASSO regression using glmnet for binary outcome
library(glmnet)

age <- c(4, 8, 7, 12, 6, 9, 10, 14, 7)
gender <- as.factor(c(1, 0, 1, 1, 1, 0, 1, 0, 0))
bmi_p <- c(0.86, 0.45, 0.99, 0.84, 0.85, 0.67, 0.91, 0.29, 0.88)
m_edu <- as.factor(c(0, 1, 1, 2, 2, 3, 2, 0, 1))
p_edu <- as.factor(c(0, 2, 2, 2, 2, 3, 2, 0, 0))
f_color <- as.factor(c("blue", "blue", "yellow", "red", "red", "yellow", "yellow", "red", "yellow"))
asthma <- c(1, 1, 0, 1, 0, 0, 0, 1, 1)

xfactors <- model.matrix(asthma ~ gender + m_edu + p_edu + f_color)[, -1]
x <- as.matrix(data.frame(age, bmi_p, xfactors))

# Note alpha=1 for lasso only and can blend with ridge penalty down to
# alpha=0 ridge only.
glmmod <- glmnet(x, y=as.factor(asthma), alpha=1, family="binomial")

# Plot variable coefficients vs. shrinkage parameter lambda.
plot(glmmod, xvar="lambda")

Categorical variables are usually first transformed into factors, then a dummy variable matrix of predictors is created and, along with the continuous predictors, is passed to the model. Keep in mind, glmnet uses both ridge and lasso penalties, but can be set to either alone. Some results:

# Model shown for lambda up to first 3 selected variables.
# Lambda can have manual tuning grid for wider range.
glmmod
# Call: glmnet(x = x, y = as.factor(asthma), family = "binomial", alpha = 1)
#
#       Df    %Dev   Lambda
#  [1,]  0 0.00000 0.273300
#  [2,]  1 0.01955 0.260900
#  [3,]  1 0.03737 0.249000
#  [4,]  1 0.05362 0.237700
#  [5,]  1 0.06847 0.226900
#  [6,]  1 0.08204 0.216600
#  [7,]  1 0.09445 0.206700
#  [8,]  1 0.10580 0.197300
#  [9,]  1 0.11620 0.188400
# [10,]  3 0.13120 0.179800
# [11,]  3 0.15390 0.171600
# ...

Coefficients can be extracted from the glmmod. Here shown with 3 variables selected.

coef(glmmod)[, 10]
#   (Intercept)           age         bmi_p       gender1        m_edu1
#    0.59445647    0.00000000    0.00000000   -0.01893607    0.00000000
#        m_edu2        m_edu3        p_edu2        p_edu3    f_colorred
#    0.00000000    0.00000000   -0.01882883    0.00000000    0.00000000
# f_coloryellow
#   -0.77207831

Lastly, cross validation can also be used to select lambda.

cv.glmmod <- cv.glmnet(x, y=asthma, alpha=1)
plot(cv.glmmod)
(best.lambda <- cv.glmmod$lambda.min)
# [1] 0.2732972
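A small follow-up, assuming the objects from the example above are still in the workspace: the cross-validated lambda can be fed straight back into coef() and predict(). (Strictly speaking, for a binary outcome the cv.glmnet call would ideally also specify family="binomial" - and, with only nine observations, a smaller nfolds - so the lines below just show the mechanics.)

coef(cv.glmmod, s = "lambda.min")                  # coefficients at the CV-chosen lambda
predict(glmmod, newx = x, s = cv.glmmod$lambda.min,
        type = "response")                         # fitted probabilities at that lambda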
2,326
An example: LASSO regression using glmnet for binary outcome
I will use the enet function from the elasticnet package since that is my preferred method. It is a little more flexible.

install.packages('elasticnet')
library(elasticnet)

age <- c(4,8,7,12,6,9,10,14,7)
gender <- c(1,0,1,1,1,0,1,0,0)
bmi_p <- c(0.86,0.45,0.99,0.84,0.85,0.67,0.91,0.29,0.88)
m_edu <- c(0,1,1,2,2,3,2,0,1)
p_edu <- c(0,2,2,2,2,3,2,0,0)
# enet() expects a numeric predictor matrix, so f_color is coded numerically here
#f_color <- c("blue", "blue", "yellow", "red", "red", "yellow", "yellow", "red", "yellow")
f_color <- c(0, 0, 1, 2, 2, 1, 1, 2, 1)
asthma <- c(1,1,0,1,0,0,0,1,1)

pred <- cbind(age, gender, bmi_p, m_edu, p_edu, f_color)
enet(x=pred, y=asthma, lambda=0)
2,327
An example: LASSO regression using glmnet for binary outcome
Just to expand on the excellent example provided by pat. The original problem posed ordinal variables (m_edu, p_edu), with an inherent order between levels (0 < 1 < 2 < 3). In pat's original answer I think these were treated as nominal categorical variables with no order between them. I may be wrong, but I believe these variables should be coded such that the model respects their inherent order. If these are coded as ordered factors (rather than as unordered factors as in pat's answer), then glmnet gives slightly different results. I think the code below correctly includes the ordinal variables as ordered factors:

library(glmnet)

age <- c(4, 8, 7, 12, 6, 9, 10, 14, 7)
gender <- as.factor(c(1, 0, 1, 1, 1, 0, 1, 0, 0))
bmi_p <- c(0.86, 0.45, 0.99, 0.84, 0.85, 0.67, 0.91, 0.29, 0.88)
m_edu <- factor(c(0, 1, 1, 2, 2, 3, 2, 0, 1), ordered = TRUE)
p_edu <- factor(c(0, 2, 2, 2, 2, 3, 2, 0, 0), levels = c(0, 1, 2, 3), ordered = TRUE)
f_color <- as.factor(c("blue", "blue", "yellow", "red", "red", "yellow", "yellow", "red", "yellow"))
asthma <- c(1, 1, 0, 1, 0, 0, 0, 1, 1)

xfactors <- model.matrix(asthma ~ gender + m_edu + p_edu + f_color)[, -1]
x <- as.matrix(data.frame(age, bmi_p, xfactors))

# Note alpha=1 for lasso only and can blend with ridge penalty down to
# alpha=0 ridge only.
glmmod <- glmnet(x, y=as.factor(asthma), alpha=1, family="binomial")

# Plot variable coefficients vs. shrinkage parameter lambda.
plot(glmmod, xvar="lambda")
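To see concretely why the results change, compare the design matrices produced by model.matrix (base R; the vector below is just the m_edu coding from the example). An unordered factor expands into treatment dummies, while an ordered factor expands by default into orthogonal polynomial contrasts (.L, .Q, .C columns), and those are the columns glmnet then penalizes.

m_edu_unordered <- factor(c(0, 1, 1, 2, 2, 3, 2, 0, 1))
m_edu_ordered   <- factor(c(0, 1, 1, 2, 2, 3, 2, 0, 1), ordered = TRUE)
head(model.matrix(~ m_edu_unordered))   # dummy columns for levels 1, 2, 3
head(model.matrix(~ m_edu_ordered))     # linear, quadratic, cubic contrast columns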
2,328
What are the shortcomings of the Mean Absolute Percentage Error (MAPE)?
Shortcomings of the MAPE

- The MAPE, as a percentage, only makes sense for values where divisions and ratios make sense. It doesn't make sense to calculate percentages of temperatures, for instance, so you shouldn't use the MAPE to calculate the accuracy of a temperature forecast.
- If just a single actual is zero, $A_t=0$, then you divide by zero in calculating the MAPE, which is undefined. It turns out that some forecasting software nevertheless reports a MAPE for such series, simply by dropping periods with zero actuals (Hoover, 2006). Needless to say, this is not a good idea, as it implies that we don't care at all about what we forecasted if the actual was zero - but a forecast of $F_t=100$ and one of $F_t=1000$ may have very different implications. So check what your software does. If only a few zeros occur, you can use a weighted MAPE (Kolassa & Schütz, 2007), which nevertheless has problems of its own. This also applies to the symmetric MAPE (Goodwin & Lawton, 1999).
- MAPEs greater than 100% can occur. If you prefer to work with accuracy, which some people define as 100%-MAPE, then this may lead to negative accuracy, which people may have a hard time understanding. (No, truncating accuracy at zero is not a good idea.)
- Model fitting relies on minimizing errors, which is often done using numerical optimizers that use first or second derivatives. The MAPE is not everywhere differentiable, and its Hessian is zero wherever it is defined. This can throw optimizers off if we want to use the MAPE as an in-sample fit criterion. A possible mitigation may be to use the log cosh loss function, which is similar to the MAE but twice differentiable. Alternatively, Zheng (2011) offers a way to approximate the MAE (or any other quantile loss) to arbitrary precision using a smooth function. If we know bounds on the actuals (which we do when fitting strictly positive historical data), we can therefore smoothly approximate the MAPE to arbitrary precision.
- The MAPE treats overforecasts differently than underforecasts. Suppose our forecast is $F_t=2$, then an actual of $A_t=1$ will contribute $\text{APE}_t=100\%$ to the MAPE, but an actual of $A_t=3$ will contribute $\text{APE}_t=33\%$. Minimizing the MAPE thus creates an incentive towards smaller $F_t$ - if our actuals have an equal chance of being $A_t=1$ or $A_t=3$, then we will minimize the expected MAPE by forecasting $F_t=1.5$, not $F_t=2$, which is the expectation of our actuals. The MAPE thus is lower for biased than for unbiased forecasts. Minimizing it may lead to forecasts that are biased low.

Especially the last bullet point merits a little more thought. For this, we need to take a step back. To start with, note that we don't know the future outcome perfectly, nor will we ever. So the future outcome follows a probability distribution. Our so-called point forecast $F_t$ is our attempt to summarize what we know about the future distribution (i.e., the predictive distribution) at time $t$ using a single number. The MAPE then is a quality measure of a whole sequence of such single-number-summaries of future distributions at times $t=1, \dots, n$.

The problem here is that people rarely explicitly say what a good one-number-summary of a future distribution is. When you talk to forecast consumers, they will usually want $F_t$ to be correct "on average". That is, they want $F_t$ to be the expectation or the mean of the future distribution, rather than, say, its median.
Here's the problem: minimizing the MAPE will typically not incentivize us to output this expectation, but a quite different one-number-summary (McKenzie, 2011, Kolassa, 2020). This happens for two different reasons.

Asymmetric future distributions. Suppose our true future distribution follows a stationary $(\mu=1,\sigma^2=1)$ lognormal distribution. The following picture shows a simulated time series, as well as the corresponding density. The horizontal lines give the optimal point forecasts, where "optimality" is defined as minimizing the expected error for various error measures.
- The dashed line at $F_t=\exp(\mu+\frac{\sigma^2}{2})\approx 4.5$ minimizes the expected MSE. It is the expectation of the time series.
- The dotted line at $F_t=\exp\mu\approx 2.7$ minimizes the expected MAE. It is the median of the time series.
- The dash-dotted line at $F_t=\exp(\mu-\sigma^2)=1.0$ minimizes the expected MAPE. It is the (-1)-median of the time series (Gneiting, 2011, p. 752 with $\beta=-1$), which in the specific case of a lognormal distribution happens to coincide with the mode of the distribution.

We see that the asymmetry of the future distribution, together with the fact that the MAPE differentially penalizes over- and underforecasts, implies that minimizing the MAPE will lead to heavily biased forecasts. (Here is the calculation of optimal point forecasts in the gamma case.)

Symmetric distribution with a high coefficient of variation. Suppose that $A_t$ comes from rolling a standard six-sided die at each time point $t$. The picture below again shows a simulated sample path. In this case:
- The dashed line at $F_t=3.5$ minimizes the expected MSE. It is the expectation of the time series.
- Any forecast $3\leq F_t\leq 4$ (not shown in the graph) will minimize the expected MAE. All values in this interval are medians of the time series.
- The dash-dotted line at $F_t=2$ minimizes the expected MAPE.

We again see how minimizing the MAPE can lead to a biased forecast, because of the differential penalty it applies to over- and underforecasts. In this case, the problem does not come from an asymmetric distribution, but from the high coefficient of variation of our data-generating process. This is actually a simple illustration you can use to teach people about the shortcomings of the MAPE - just hand your attendees a few dice and have them roll. See Kolassa & Martin (2011) for more information.

Related CrossValidated questions
- The difference between MSE and MAPE
- Best way to optimize MAPE
- Mean absolute percentage error with respect to predictions (on using the actual in the denominator)
- Minimizing symmetric mean absolute percentage error (SMAPE) (on using the average of the forecast and the actual in the denominator)
- Optimal prediction under squared percentage loss (on using the squared instead of the absolute percentage error)
- MAPE vs R-squared in regression models
- Why use a certain measure of forecast error (e.g. MAD) as opposed to another (e.g. MSE)?
- Does it make sense to increment by 1 the numerator and denominator in the MAPE to avoid division by 0?
R code

Lognormal example:

mm <- 1
ss.sq <- 1
SAPMediumGray <- "#999999"; SAPGold <- "#F0AB00"

set.seed(2013)
actuals <- rlnorm(100,meanlog=mm,sdlog=sqrt(ss.sq))

opar <- par(mar=c(3,2,0,0)+.1)
plot(actuals,type="o",pch=21,cex=0.8,bg="black",xlab="",ylab="",xlim=c(0,150))
abline(v=101,col=SAPMediumGray)
xx <- seq(0,max(actuals),by=.1)
polygon(c(101+150*dlnorm(xx,meanlog=mm,sdlog=sqrt(ss.sq)),
          rep(101,length(xx))),c(xx,rev(xx)),col="lightgray",border=NA)
(min.Ese <- exp(mm+ss.sq/2))
lines(c(101,150),rep(min.Ese,2),col=SAPGold,lwd=3,lty=2)
(min.Eae <- exp(mm))
lines(c(101,150),rep(min.Eae,2),col=SAPGold,lwd=3,lty=3)
(min.Eape <- exp(mm-ss.sq))
lines(c(101,150),rep(min.Eape,2),col=SAPGold,lwd=3,lty=4)
par(opar)

Dice rolling example:

SAPMediumGray <- "#999999"; SAPGold <- "#F0AB00"

set.seed(2013)
actuals <- sample(x=1:6,size=100,replace=TRUE)

opar <- par(mar=c(3,2,0,0)+.1)
plot(actuals,type="o",pch=21,cex=0.8,bg="black",xlab="",ylab="",xlim=c(0,150))
abline(v=101,col=SAPMediumGray)
min.Ese <- 3.5
lines(c(101,150),rep(min.Ese,2),col=SAPGold,lwd=3,lty=2)
min.Eape <- 2
lines(c(101,150),rep(min.Eape,2),col=SAPGold,lwd=3,lty=4)
par(opar)

References
- Gneiting, T. Making and Evaluating Point Forecasts. Journal of the American Statistical Association, 2011, 106, 746-762
- Goodwin, P. & Lawton, R. On the asymmetry of the symmetric MAPE. International Journal of Forecasting, 1999, 15, 405-408
- Hoover, J. Measuring Forecast Accuracy: Omissions in Today's Forecasting Engines and Demand-Planning Software. Foresight: The International Journal of Applied Forecasting, 2006, 4, 32-35
- Kolassa, S. Why the "best" point forecast depends on the error or accuracy measure (Invited commentary on the M4 forecasting competition). International Journal of Forecasting, 2020, 36(1), 208-211
- Kolassa, S. & Martin, R. Percentage Errors Can Ruin Your Day (and Rolling the Dice Shows How). Foresight: The International Journal of Applied Forecasting, 2011, 23, 21-29
- Kolassa, S. & Schütz, W. Advantages of the MAD/Mean ratio over the MAPE. Foresight: The International Journal of Applied Forecasting, 2007, 6, 40-43
- McKenzie, J. Mean absolute percentage error and bias in economic forecasting. Economics Letters, 2011, 113, 259-262
- Zheng, S. Gradient descent algorithms for quantile regression with smooth approximation. International Journal of Machine Learning and Cybernetics, 2011, 2, 191-207
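As a small numerical footnote to the dice example above (base R, nothing beyond what is already stated in the text): the expected APE of a fair die is minimised near a forecast of 2, not at the mean of 3.5.

f_grid <- seq(1, 6, by = 0.01)
expected_ape <- sapply(f_grid, function(f) mean(abs(1:6 - f) / (1:6)))
f_grid[which.min(expected_ape)]     # 2
mean(abs(1:6 - 3.5) / (1:6))        # larger expected APE at the unbiased forecast 3.5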
What are the shortcomings of the Mean Absolute Percentage Error (MAPE)?
Shortcomings of the MAPE The MAPE, as a percentage, only makes sense for values where divisions and ratios make sense. It doesn't make sense to calculate percentages of temperatures, for instance, so
What are the shortcomings of the Mean Absolute Percentage Error (MAPE)? Shortcomings of the MAPE The MAPE, as a percentage, only makes sense for values where divisions and ratios make sense. It doesn't make sense to calculate percentages of temperatures, for instance, so you shouldn't use the MAPE to calculate the accuracy of a temperature forecast. If just a single actual is zero, $A_t=0$, then you divide by zero in calculating the MAPE, which is undefined. It turns out that some forecasting software nevertheless reports a MAPE for such series, simply by dropping periods with zero actuals (Hoover, 2006). Needless to say, this is not a good idea, as it implies that we don't care at all about what we forecasted if the actual was zero - but a forecast of $F_t=100$ and one of $F_t=1000$ may have very different implications. So check what your software does. If only a few zeros occur, you can use a weighted MAPE (Kolassa & Schütz, 2007), which nevertheless has problems of its own. This also applies to the symmetric MAPE (Goodwin & Lawton, 1999). MAPEs greater than 100% can occur. If you prefer to work with accuracy, which some people define as 100%-MAPE, then this may lead to negative accuracy, which people may have a hard time understanding. (No, truncating accuracy at zero is not a good idea.) Model fitting relies on minimizing errors, which is often done using numerical optimizers that use first or second derivatives. The MAPE is not everywhere differentiable, and its Hessian is zero wherever it is defined. This can throw optimizers off if we want to use the MAPE as an in-sample fit criterion. A possible mitigation may be to use the log cosh loss function, which is similar to the MAE but twice differentiable. Alternatively, Zheng (2011) offer a way to approximate the MAE (or any other quantile loss) to arbitrary precision using a smooth function. If we know bounds on the actuals (which we do when fitting strictly positive historical data), we can therefore smoothly approximate the MAPE to arbitrary precision. The MAPE treats overforecasts differently than underforecasts. Suppose our forecast is $F_t=2$, then an actual of $A_t=1$ will contribute $\text{APE}_t=100\%$ to the MAPE, but an actual of $A_t=3$ will contribute $\text{APE}_t=33\%$. Minimizing the MAPE thus creates an incentive towards smaller $F_t$ - if our actuals have an equal chance of being $A_t=1$ or $A_t=3$, then we will minimize the expected MAPE by forecasting $F_t=1.5$, not $F_t=2$, which is the expectation of our actuals. The MAPE thus is lower for biased than for unbiased forecasts. Minimizing it may lead to forecasts that are biased low. Especially the last bullet point merits a little more thought. For this, we need to take a step back. To start with, note that we don't know the future outcome perfectly, nor will we ever. So the future outcome follows a probability distribution. Our so-called point forecast $F_t$ is our attempt to summarize what we know about the future distribution (i.e., the predictive distribution) at time $t$ using a single number. The MAPE then is a quality measure of a whole sequence of such single-number-summaries of future distributions at times $t=1, \dots, n$. The problem here is that people rarely explicitly say what a good one-number-summary of a future distribution is. When you talk to forecast consumers, they will usually want $F_t$ to be correct "on average". That is, they want $F_t$ to be the expectation or the mean of the future distribution, rather than, say, its median. 
Here's the problem: minimizing the MAPE will typically not incentivize us to output this expectation, but a quite different one-number-summary (McKenzie, 2011, Kolassa, 2020). This happens for two different reasons. Asymmetric future distributions. Suppose our true future distribution follows a stationary $(\mu=1,\sigma^2=1)$ lognormal distribution. The following picture shows a simulated time series, as well as the corresponding density. The horizontal lines give the optimal point forecasts, where "optimality" is defined as minimizing the expected error for various error measures. The dashed line at $F_t=\exp(\mu+\frac{\sigma^2}{2})\approx 4.5$ minimizes the expected MSE. It is the expectation of the time series. The dotted line at $F_t=\exp\mu\approx 2.7$ minimizes the expected MAE. It is the median of the time series. The dash-dotted line at $F_t=\exp(\mu-\sigma^2)=1.0$ minimizes the expected MAPE. It is the (-1)-median of the time series (Gneiting, 2011, p. 752 with $\beta=-1$), which in the specific case of a lognormal distribution happens to coincide with the mode of the distribution. We see that the asymmetry of the future distribution, together with the fact that the MAPE differentially penalizes over- and underforecasts, implies that minimizing the MAPE will lead to heavily biased forecasts. (Here is the calculation of optimal point forecasts in the gamma case.) Symmetric distribution with a high coefficient of variation. Suppose that $A_t$ comes from rolling a standard six-sided die at each time point $t$. The picture below again shows a simulated sample path: In this case: The dashed line at $F_t=3.5$ minimizes the expected MSE. It is the expectation of the time series. Any forecast $3\leq F_t\leq 4$ (not shown in the graph) will minimize the expected MAE. All values in this interval are medians of the time series. The dash-dotted line at $F_t=2$ minimizes the expected MAPE. We again see how minimizing the MAPE can lead to a biased forecast, because of the differential penalty it applies to over- and underforecasts. In this case, the problem does not come from an asymmetric distribution, but from the high coefficient of variation of our data-generating process. This is actually a simple illustration you can use to teach people about the shortcomings of the MAPE - just hand your attendees a few dice and have them roll. See Kolassa & Martin (2011) for more information. Related CrossValidated questions The difference between MSE and MAPE Best way to optimize MAPE Mean absolute percentage error with respect to predictions (on using the actual in the denominator) Minimizing symmetric mean absolute percentage error (SMAPE) (on using the average of the forecast and the actual in the denominator) Optimal prediction under squared percentage loss (on using the squared instead of the absolute percentage error) MAPE vs R-squared in regression models Why use a certain measure of forecast error (e.g. MAD) as opposed to another (e.g. MSE)? Does it make sense to increment by 1 the numerator and denominator in the MAPE to avoid division by 0? 
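To make the dice example concrete, here is a small numerical check (a sketch added for illustration, not part of the original answer): it brute-forces the expected absolute percentage, absolute and squared errors over a grid of candidate point forecasts and confirms that the expected APE is smallest near $F_t=2$, the expected SE at $F_t=3.5$, and the expected AE for any value between 3 and 4.

# Brute-force check of the optimal point forecasts for a fair six-sided die
set.seed(42)
actuals    <- sample(1:6, 1e5, replace = TRUE)
candidates <- seq(1, 6, by = 0.5)
exp.ape <- sapply(candidates, function(f) mean(abs(actuals - f) / actuals))
exp.ae  <- sapply(candidates, function(f) mean(abs(actuals - f)))
exp.se  <- sapply(candidates, function(f) mean((actuals - f)^2))
candidates[which.min(exp.ape)]        # about 2
candidates[which.min(exp.se)]         # about 3.5
exp.ae[candidates %in% c(3, 3.5, 4)]  # essentially equal: any forecast between 3 and 4 minimizes the expected AE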
R code

Lognormal example:

mm <- 1
ss.sq <- 1
SAPMediumGray <- "#999999"; SAPGold <- "#F0AB00"
set.seed(2013)
actuals <- rlnorm(100, meanlog=mm, sdlog=sqrt(ss.sq))
opar <- par(mar=c(3,2,0,0)+.1)
plot(actuals, type="o", pch=21, cex=0.8, bg="black", xlab="", ylab="", xlim=c(0,150))
abline(v=101, col=SAPMediumGray)
xx <- seq(0, max(actuals), by=.1)
polygon(c(101+150*dlnorm(xx, meanlog=mm, sdlog=sqrt(ss.sq)), rep(101, length(xx))),
        c(xx, rev(xx)), col="lightgray", border=NA)
(min.Ese <- exp(mm+ss.sq/2))
lines(c(101,150), rep(min.Ese,2), col=SAPGold, lwd=3, lty=2)
(min.Eae <- exp(mm))
lines(c(101,150), rep(min.Eae,2), col=SAPGold, lwd=3, lty=3)
(min.Eape <- exp(mm-ss.sq))
lines(c(101,150), rep(min.Eape,2), col=SAPGold, lwd=3, lty=4)
par(opar)

Dice rolling example:

SAPMediumGray <- "#999999"; SAPGold <- "#F0AB00"
set.seed(2013)
actuals <- sample(x=1:6, size=100, replace=TRUE)
opar <- par(mar=c(3,2,0,0)+.1)
plot(actuals, type="o", pch=21, cex=0.8, bg="black", xlab="", ylab="", xlim=c(0,150))
abline(v=101, col=SAPMediumGray)
min.Ese <- 3.5
lines(c(101,150), rep(min.Ese,2), col=SAPGold, lwd=3, lty=2)
min.Eape <- 2
lines(c(101,150), rep(min.Eape,2), col=SAPGold, lwd=3, lty=4)
par(opar)

References

Gneiting, T. Making and Evaluating Point Forecasts. Journal of the American Statistical Association, 2011, 106, 746-762
Goodwin, P. & Lawton, R. On the asymmetry of the symmetric MAPE. International Journal of Forecasting, 1999, 15, 405-408
Hoover, J. Measuring Forecast Accuracy: Omissions in Today's Forecasting Engines and Demand-Planning Software. Foresight: The International Journal of Applied Forecasting, 2006, 4, 32-35
Kolassa, S. Why the "best" point forecast depends on the error or accuracy measure (Invited commentary on the M4 forecasting competition). International Journal of Forecasting, 2020, 36(1), 208-211
Kolassa, S. & Martin, R. Percentage Errors Can Ruin Your Day (and Rolling the Dice Shows How). Foresight: The International Journal of Applied Forecasting, 2011, 23, 21-29
Kolassa, S. & Schütz, W. Advantages of the MAD/Mean ratio over the MAPE. Foresight: The International Journal of Applied Forecasting, 2007, 6, 40-43
McKenzie, J. Mean absolute percentage error and bias in economic forecasting. Economics Letters, 2011, 113, 259-262
Zheng, S. Gradient descent algorithms for quantile regression with smooth approximation. International Journal of Machine Learning and Cybernetics, 2011, 2, 191-207
2,329
What are modern, easily used alternatives to stepwise regression?
There are several alternatives to Stepwise Regression. The most used I have seen are: Expert opinion to decide which variables to include in the model. Partial Least Squares Regression. You essentially get latent variables and do a regression with them. You could also do PCA yourself and then use the principal variables. Least Absolute Shrinkage and Selection Operator (LASSO). Both PLS Regression and LASSO are implemented in R packages like PLS: http://cran.r-project.org/web/packages/pls/ and LARS: http://cran.r-project.org/web/packages/lars/index.html If you only want to explore the relationship between your dependent variable and the independent variables (e.g. you do not need statistical significance tests), I would also recommend Machine Learning methods like Random Forests or Classification/Regression Trees. Random Forests can also approximate complex non-linear relationships between your dependent and independent variables, which might not have been revealed by linear techniques (like Linear Regression). A good starting point to Machine Learning might be the Machine Learning task view on CRAN: Machine Learning Task View: http://cran.r-project.org/web/views/MachineLearning.html
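To make the PLS option a bit more concrete, here is a minimal sketch (the mtcars data set and the choice of 3 components are arbitrary illustrative choices, not part of the original answer):

library(pls)
# PLS regression with cross-validation to guide the number of components
fit <- plsr(mpg ~ ., data = mtcars, validation = "CV")
summary(fit)                      # cross-validated RMSEP for each number of components
plot(RMSEP(fit))                  # pick the number of components where the RMSEP levels off
pred <- predict(fit, ncomp = 3)   # predictions using, e.g., 3 components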
2,330
What are modern, easily used alternatives to stepwise regression?
Another option you might consider for variable selection and regularization is the elastic net. It's implemented in R via the glmnet package.
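A minimal sketch of how this looks in practice (illustrative data and an arbitrary mixing parameter, not from the original answer):

library(glmnet)
x <- as.matrix(mtcars[, -1])            # predictor matrix
y <- mtcars$mpg
# alpha = 1 is the lasso, alpha = 0 is ridge; values in between give the elastic net
cvfit <- cv.glmnet(x, y, alpha = 0.5)
coef(cvfit, s = "lambda.1se")           # sparse coefficients at the cross-validated penalty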
2,331
What are modern, easily used alternatives to stepwise regression?
Model averaging is one way to go (an information-theoretic approach). The R package glmulti can fit linear models for every combination of predictor variables and perform model averaging over the results. See http://sites.google.com/site/mcgillbgsa/workshops/glmulti Don't forget to investigate collinearity between predictor variables first, though. Variance inflation factors (available in the R package "car") are useful here.
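As a rough sketch of that collinearity check (the model and data below are only illustrative):

library(car)
fit <- lm(mpg ~ wt + disp + hp + drat, data = mtcars)
vif(fit)   # variance inflation factors; values much above ~5-10 are a common warning sign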
2,332
What are modern, easily used alternatives to stepwise regression?
Interesting discussion. To label stepwise regression as a statistical sin is a bit of a religious statement - as long as one knows what one is doing and the objectives of the exercise are clear, it is a perfectly fine approach with its own set of assumptions; it is certainly biased, and it does not guarantee optimality, etc. Yet the same can be said of a lot of other things we do. I have not seen CCA mentioned, which addresses the more fundamental problem of the correlation structure in covariate space, does guarantee optimality, has been around for quite a while, and has somewhat of a learning curve. It is implemented on a variety of platforms, including R.
2,333
What are modern, easily used alternatives to stepwise regression?
@johannes gave an excellent answer. If you are a SAS user, then LASSO is available through PROC GLMSELECT and partial least squares through PROC PLS. David Cassell and I made a presentation about LASSO (and Least Angle Regression) at a couple of SAS user groups. It's available here
2,334
Why not approach classification through regression?
"..approach classification problem through regression.." - by "regression" I will assume you mean linear regression, and I will compare this approach to the "classification" approach of fitting a logistic regression model. Before we do this, it is important to clarify the distinction between regression and classification models. Regression models predict a continuous variable, such as rainfall amount or sunlight intensity. They can also predict probabilities, such as the probability that an image contains a cat. A probability-predicting regression model can be used as part of a classifier by imposing a decision rule - for example, if the probability is 50% or more, decide it's a cat. Logistic regression predicts probabilities, and is therefore a regression algorithm. However, it is commonly described as a classification method in the machine learning literature, because it can be (and often is) used to make classifiers. There are also "true" classification algorithms, such as SVM, which only predict an outcome and do not provide a probability. We won't discuss this kind of algorithm here. Linear vs. Logistic Regression on Classification Problems As Andrew Ng explains it, with linear regression you fit a polynomial through the data - say, as in the example below, where we fit a straight line through a {tumor size, tumor type} sample set: Above, malignant tumors get $1$ and non-malignant ones get $0$, and the green line is our hypothesis $h(x)$. To make predictions we may say that for any given tumor size $x$, if $h(x)$ gets bigger than $0.5$ we predict a malignant tumor, otherwise we predict benign. It looks like this way we could correctly predict every single training set sample, but now let's change the task a bit. Intuitively it's clear that all tumors larger than a certain threshold are malignant. So let's add another sample with a huge tumor size, and run linear regression again: Now our $h(x) > 0.5 \rightarrow malignant$ rule doesn't work anymore. To keep making correct predictions we would need to change it to $h(x) > 0.2$ or something - but that's not how the algorithm should work. We cannot change the hypothesis each time a new sample arrives. Instead, we should learn it from the training set data, and then (using the hypothesis we've learned) make correct predictions for data we haven't seen before. Hope this explains why linear regression is not the best fit for classification problems! Also, you might want to watch the VI. Logistic Regression. Classification video on ml-class.org, which explains the idea in more detail. EDIT probabilityislogic asked what a good classifier would do. In this particular example you would probably use logistic regression, which might learn a hypothesis like this (I'm just making this up): Note that both linear regression and logistic regression give you a straight line (or a higher-order polynomial), but those lines have different meanings: $h(x)$ for linear regression interpolates, or extrapolates, the output and predicts the value for an $x$ we haven't seen. It's simply like plugging in a new $x$ and getting a raw number, and it is more suitable for tasks like predicting, say, a car price based on {car size, car age} etc. $h(x)$ for logistic regression tells you the probability that $x$ belongs to the "positive" class. This is why it is called a regression algorithm - it estimates a continuous quantity, the probability. 
However, if you set a threshold on the probability, such as $h(x) > 0.5$, you obtain a classifier, and in many cases this is what is done with the output from a logistic regression model. This is equivalent to putting a line on the plot: all points sitting above the classifier line belong to one class while the points below belong to the other class. So, the bottom line is that in classification scenario we use a completely different reasoning and a completely different algorithm than in regression scenario.
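To see the tumour-size argument in code, here is a small simulation (all numbers invented for illustration): it fits a linear regression and a logistic regression to the same labels and reports the input value at which each fitted curve crosses 0.5. With an extreme observation included, the linear crossing is typically dragged away from the true threshold, while the logistic one tends to stay put.

set.seed(1)
size  <- c(rep(seq(0.5, 10, by = 0.5), each = 2), 60)        # the last point is the "huge tumour"
y     <- rbinom(length(size), 1, plogis(1.5 * (size - 5)))   # risk rises around size 5
lin   <- lm(y ~ size)                       # linear regression used as a classifier
logit <- glm(y ~ size, family = binomial)   # logistic regression
(0.5 - coef(lin)[1]) / coef(lin)[2]         # size at which the linear fit crosses 0.5
-coef(logit)[1] / coef(logit)[2]            # size at which the logistic fit crosses 0.5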
2,335
Why not approach classification through regression?
I can't think of an example in which classification is actually the ultimate goal. Almost always the real goal is to make accurate predictions, e.g., of probabilities. In that spirit, (logistic) regression is your friend.
2,336
Why not approach classification through regression?
Why not look at some evidence? Although many would argue that linear regression is not right for classification, it may still work. To gain some intuition, I included linear regression (used as a classifier) into scikit-learn's classifier comparison. Here is what happens: The decision boundary is narrower than with the other classifiers, but the accuracy is the same. Much like the linear support vector classifier, the regression model gives you a hyperplane that separates the classes in feature space. As we see, using linear regression as a classifier can work, but as always, I would cross-validate the predictions. For the record, this is how my classifier code looks:

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import accuracy_score

class LinearRegressionClassifier():
    def __init__(self):
        self.reg = LinearRegression()
    def fit(self, X, y):
        self.reg.fit(X, y)
    def predict(self, X):
        return np.clip(self.reg.predict(X), 0, 1)
    def decision_function(self, X):
        return np.clip(self.reg.predict(X), 0, 1)
    def score(self, X, y):
        return accuracy_score(y, np.round(self.predict(X)))
2,337
Why not approach classification through regression?
Further, to expand on the already good answers: for any classification task beyond a binary one, using regression would require us to impose a distance and an ordering between the classes. In other words, we might get different results just by shuffling the labels of the classes or by changing the scale of the assigned numeric values (say, classes labeled as $1, 10, 100, ...$ vs $1, 2, 3, ...$), which defeats the purpose of the classification problem.
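A tiny sketch of that point (labels and data invented): recoding the same three classes with different numbers changes the regression fit, even though the classification problem itself is unchanged.

set.seed(3)
x   <- rnorm(150)
cls <- cut(x + rnorm(150, sd = 0.5), breaks = 3, labels = c("a", "b", "c"))
y1  <- c(a = 1, b = 2,   c = 3 )[as.character(cls)]   # one arbitrary numeric coding of the classes
y2  <- c(a = 1, b = 100, c = 10)[as.character(cls)]   # another arbitrary coding of the same classes
coef(lm(y1 ~ x))
coef(lm(y2 ~ x))   # a completely different fit for the very same classes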
2,338
How to produce a pretty plot of the results of k-means cluster analysis?
I'd push the silhouette plot for this, because it's unlikely that you'll get much actionable information from pair plots when the number of dimensions is 14.

library(cluster)
library(HSAUR)
data(pottery)
km <- kmeans(pottery, 3)
dissE <- daisy(pottery)
dE2 <- dissE^2
sk2 <- silhouette(km$cluster, dE2)
plot(sk2)

This approach is highly cited and well known (see here for an explanation). Rousseeuw, P.J. (1987) Silhouettes: A graphical aid to the interpretation and validation of cluster analysis. J. Comput. Appl. Math., 20, 53-65.
2,339
How to produce a pretty plot of the results of k-means cluster analysis?
Here is an example that may help you:

library(cluster)
library(fpc)
data(iris)
dat <- iris[, -5]   # without the known classification
# k-means cluster analysis
clus <- kmeans(dat, centers = 3)
# Fig 01
plotcluster(dat, clus$cluster)
# More complex
clusplot(dat, clus$cluster, color = TRUE, shade = TRUE, labels = 2, lines = 0)
# Fig 03
with(iris, pairs(dat, col = c(1:3)[clus$cluster]))

Based on the latter plot you could decide which of your initial variables to plot. Maybe 14 variables is a lot, so you can try a principal component analysis (PCA) first and then use the first two or three components from the PCA to perform the cluster analysis.
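A minimal sketch of that PCA-first idea (using iris again purely for illustration):

pc     <- prcomp(iris[, -5], scale. = TRUE)      # PCA on the numeric columns
scores <- pc$x[, 1:2]                            # keep the first two component scores
km     <- kmeans(scores, centers = 3, nstart = 25)
plot(scores, col = km$cluster, pch = 19)
points(km$centers, pch = 8, cex = 2)             # cluster centroids in PC space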
2,340
How to produce a pretty plot of the results of k-means cluster analysis?
The simplest way I know to do that is the following:

X <- data.frame(c1 = c(0,1,2,4,5,4,6,7), c2 = c(0,1,2,3,3,4,5,5))
km <- kmeans(X, centers = 2)
plot(X, col = km$cluster)
points(km$centers, col = 1:2, pch = 8, cex = 1)

In this way you can draw the points of each cluster in a different color, together with their centroids.
2,341
How to produce a pretty plot of the results of k-means cluster analysis?
This is an old question at this point, but I think the factoextra package has several useful tools for clustering and plots. For example, the fviz_cluster() function, which plots PCA dimensions 1 and 2 in a scatter plot and colors and groups the clusters. This demo goes through some different functions from factoextra.
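A minimal usage sketch, assuming the factoextra package is installed (the data set and number of clusters below are arbitrary illustrative choices):

library(factoextra)
dat <- scale(iris[, -5])
km  <- kmeans(dat, centers = 3, nstart = 25)
fviz_cluster(km, data = dat)   # clusters drawn on the first two PCA dimensions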
2,342
Rules of thumb for "modern" statistics
Don't forget to do some basic data checking before you start the analysis. In particular, look at a scatter plot of every variable you intend to analyse against ID number, date / time of data collection or similar. The eye can often pick up patterns that reveal problems when summary statistics don't show anything unusual. And if you're going to use a log or other transformation for analysis, also use it for the plot.
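A quick sketch of such a check (airquality is just an illustrative data set; replace the x-axis with your ID or collection date):

dat <- airquality
num <- names(dat)[sapply(dat, is.numeric)]
op  <- par(mfrow = c(2, 3))
for (v in num) plot(seq_len(nrow(dat)), dat[[v]],
                    xlab = "row / collection order", ylab = v)
par(op)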
2,343
Rules of thumb for "modern" statistics
Keep your analysis reproducible. A reviewer or your boss or someone else will eventually ask you how exactly you arrived at your result - probably six months or more after you did the analysis. You will not remember how you cleaned the data, what analysis you did, why you chose the specific model you used... And reconstructing all this is a pain. Corollary: use a scripting language of some kind, put comments in your analysis scripts, and keep them. What you use (R, SAS, Stata, whatever) is less important than having a completely reproducible script. Reject environments in which this is impossible or awkward.
2,344
Rules of thumb for "modern" statistics
There is no free lunch A large part of statistical failures is created by clicking a big shiny button called "Calculate significance" without taking into account its burden of hidden assumptions. Repeat Even if a single call to a random generator is involved, one may have luck or bad luck and so jump to the wrong conclusions.
2,345
Rules of thumb for "modern" statistics
One rule per answer ;-) Talk to the statistician before conducting the study. If possible, before applying for the grant. Help him/her understand the problem you are studying, get his/her input on how to analyze the data you are about to collect and think about what that means for your study design and data requirements. Perhaps the stats guy/gal suggests doing a hierarchical model to account for who diagnosed the patients - then you need to track who diagnosed whom. Sounds trivial, but it's far better to think about this before you collect data (and fail to collect something crucial) than afterwards. On a related note: do a power analysis before starting. Nothing is as frustrating as not having budgeted for a sufficiently large sample size. In thinking about what effect size you are expecting, remember publication bias - the effect size you are going to find will probably be smaller than what you expected given the (biased) literature.
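For the power analysis, base R already covers the simplest cases; here is a sketch (the effect size and spread are placeholders you would replace with your own guesses, deflated for publication bias):

# sample size per group for a two-sample t-test
power.t.test(delta = 0.4, sd = 1, sig.level = 0.05, power = 0.8)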
2,346
Rules of thumb for "modern" statistics
One thing I tell my students is to produce an appropriate graph for every p-value. e.g., a scatterplot if they test correlation, side-by-side boxplots if they do a one-way ANOVA, etc.
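A couple of quick illustrations of that pairing (built-in data sets used purely as examples):

plot(mpg ~ wt, data = mtcars)                     # scatterplot to accompany ...
cor.test(mtcars$wt, mtcars$mpg)                   # ... the correlation test
boxplot(count ~ spray, data = InsectSprays)       # side-by-side boxplots to accompany ...
summary(aov(count ~ spray, data = InsectSprays))  # ... the one-way ANOVA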
2,347
Rules of thumb for "modern" statistics
If you're deciding between two ways of analysing your data, try it both ways and see if it makes a difference. This is useful in many contexts: To transform or not to transform Non-parametric or parametric test Spearman's or Pearson's correlation PCA or factor analysis Whether to use the arithmetic mean or a robust estimate of the mean Whether to include a covariate or not Whether to use list-wise deletion, pair-wise deletion, imputation, or some other method of missing-value replacement This shouldn't absolve one from thinking through the issue, but it at least gives a sense of the degree to which substantive findings are robust to the choice.
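A small sketch of running an analysis both ways (illustrative data, not part of the original answer):

x <- mtcars$disp; y <- mtcars$mpg
c(pearson = cor(x, y), spearman = cor(x, y, method = "spearman"))
coef(lm(y ~ x))        # untransformed predictor
coef(lm(y ~ log(x)))   # log-transformed predictor - do the substantive conclusions change?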
2,348
Rules of thumb for "modern" statistics
Question your data. In the modern era of cheap RAM, we often work on large amounts of data. One 'fat-finger' error or 'lost decimal place' can easily dominate an analysis. Without some basic sanity checking, (or plotting the data, as suggested by others here) one can waste a lot of time. This also suggests using some basic techniques for 'robustness' to outliers.
2,349
Rules of thumb for "modern" statistics
Use software that shows the chain of programming logic from the raw data through to the final analyses/results. Avoid software like Excel where one user can make an undetectable error in one cell, that only manual checking will pick up.
2,350
Rules of thumb for "modern" statistics
Always ask yourself "what do these results mean and how will they be used?" Usually the purpose of using statistics is to assist in making decisions under uncertainty. So it is important to have at the front of your mind "What decisions will be made as a result of this analysis and how will this analysis influence these decisions?" (e.g. publish an article, recommend a new method be used, provide $X in funding to Y, get more data, report an estimated quantity as E, etc.etc.....) If you don't feel that there is any decision to be made, then one wonders why you are doing the analysis in the first place (as it is quite expensive to do analysis). I think of statistics as a "nuisance" in that it is a means to an end, rather than an end itself. In my view we only quantify uncertainty so that we can use this to make decisions which account for this uncertainty in a precise way. I think this is one reason why keeping things simple is a good policy in general, because it is usually much easier to relate a simple solution to the real world (and hence to the environment in which the decision is being made) than the complex solution. It is also usually easier to understand the limitations of the simple answer. You then move to the more complex solutions when you understand the limitations of the simple solution, and how the complex one addresses them.
2,351
Rules of thumb for "modern" statistics
There can be a long list, but to mention a few (in no specific order): A p-value is NOT a probability. Specifically, it is not the probability of committing a Type I error. Similarly, CIs have no probabilistic interpretation for the given data; they apply to repeated experiments. Problems related to variance dominate bias most of the time in practice, so a biased estimate with small variance is better than an unbiased estimate with large variance (most of the time). Model fitting is an iterative process. Before analyzing the data, understand the source of the data and the possible models that fit or don't fit the description. Also, try to model any design issues in your model. Use visualization tools: look at the data (for possible abnormalities, obvious trends, etc.) to understand it before analyzing it. Use visualization methods (if possible) to see how the model fits the data. Last but not least, use statistical software for what it is made for (to make your task of computation easier); it is not a substitute for human thinking.
2,352
Rules of thumb for "modern" statistics
For data organization/management, ensure that when you generate new variables in the dataset (for example, calculating body mass index from height and weight), the original variables are never deleted. A non-destructive approach is best from a reproducibility perspective. You never know when you might mis-enter a command and subsequently need to redo your variable generation. Without the original variables, you will lose a lot of time!
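A minimal sketch of the non-destructive approach (variable names invented for illustration):

dat <- data.frame(weight_kg = c(70, 82, 55), height_m = c(1.75, 1.80, 1.62))
dat$bmi <- dat$weight_kg / dat$height_m^2   # derived column; the original variables stay untouched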
2,353
Rules of thumb for "modern" statistics
Think hard about the underlying data generating process (DGP). If the model you want to use doesn't reflect the DGP, you need to find a new model.
2,354
Rules of thumb for "modern" statistics
A good rule of thumb for the number of bins in a histogram: the square root of the number of data points.
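As a sketch (note that hist() treats breaks as a suggestion and may adjust the count slightly):

x <- rnorm(500)
hist(x, breaks = ceiling(sqrt(length(x))))   # roughly 23 bins for 500 observations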
2,355
Rules of thumb for "modern" statistics
Despite increasingly large datasets and more powerful software, over-fitting models is a major danger to researchers, especially those who have not yet been burned by over-fitting. Over-fitting means that you have fitted something more complicated than your data and the state of the art can justify. Like love or beauty, it is hard to define, let alone to define formally, but easier to recognise. A minimal rule of thumb is 10 data points for every parameter estimated for anything like classical regression, and watch out for the consequences if you ignore it. For other analyses, you usually need much more to do a good job, particularly if there are rare categories in the data. Even if you can fit a model easily, you should worry constantly about what it means and how far it is reproducible with even a very similar dataset.
2,356
Rules of thumb for "modern" statistics
In a forecasting problem (i.e., when you need to forecast $Y_{t+h}$ given $(Y_t,X_t)$ for $t>T$, using a learning set $(Y_1,X_1),\dots, (Y_T,X_T)$), the rules of thumb (to be applied before any complex modelling) are: Climatology ($Y_{t+h}$ forecast by the mean observed value over the learning set, possibly after removing obvious periodic patterns) and Persistence ($Y_{t+h}$ forecast by the last observed value, $Y_t$). What I often do now as a last simple benchmark / rule of thumb is to run randomForest($Y_{t+h}$ ~ $Y_t + X_t$, data = learningSet) in R. It gives you (with 2 lines of code in R) a first idea of what can be achieved without any modelling.
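A hedged R sketch of those benchmarks for a one-step-ahead forecast (the series is simulated, the object names are made up, and the randomForest package is assumed to be installed):
    library(randomForest)
    set.seed(1)
    y <- as.numeric(arima.sim(list(ar = 0.7), n = 101))   # toy series; learning set = first 100 points
    train <- 1:100
    mean(y[train])                                        # climatology benchmark for y[101]
    y[100]                                                # persistence benchmark for y[101]
    learningSet <- data.frame(y_next = y[2:100], y_lag = y[1:99])
    rf <- randomForest(y_next ~ y_lag, data = learningSet)
    predict(rf, data.frame(y_lag = y[100]))               # random-forest benchmark for y[101]
    y[101]                                                # the value actually observed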
2,357
Rules of thumb for "modern" statistics
If the model won't converge easily and quickly, it could be the fault of the software. It is, however, much more common that your data are not suitable for the model or the model is not suitable for the data. It could be hard to tell which, and empiricists and theorists can have different views. But subject-matter thinking, really looking at the data, and constantly thinking about the interpretation of the model help as much as anything can. Above all else, try a simpler model if a complicated one won't converge. There is no gain in forcing convergence or in declaring victory and taking results after many iterations but before your model really has converged. At best you fool yourself if you do that.
2,358
Rules of thumb for "modern" statistics
In instrumental variables regression always check the joint significance of your instruments. The Staiger-Stock rule of thumb says that an F-statistic of less than 10 is worrisome and indicates that your instruments might be weak, i.e. they are not sufficiently correlated with the endogenous variable. However, this does not automatically imply that an F above 10 guarantees strong instruments. Staiger and Stock (1997) have shown that instrumental variables techniques like 2SLS can be badly biased in "small" samples if the instruments are only weakly correlated with the endogenous variable. Their example was the study by Angrist and Krueger (1991) who had more than 300,000 observations - a disturbing fact about the notion of "small" samples.
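A minimal simulated sketch of the first-stage check in R (variable names and the instrument strength are arbitrary); note that with exogenous covariates in the model, the F-test should be on the excluded instruments only.
    set.seed(1)
    n <- 500
    z <- rnorm(n)                        # instrument
    x <- 0.3 * z + rnorm(n)              # endogenous regressor partly driven by z
    first_stage <- lm(x ~ z)
    summary(first_stage)$fstatistic[1]   # joint F on the instrument(s); compare with 10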
2,359
Rules of thumb for "modern" statistics
There are no criteria to choose information criteria. Once someone says something like "The ?IC indicates this, but it is known often to give the wrong results" (where ? is any letter you like), you know that you will also have to think about the model and particularly whether it makes scientific or practical sense. No algebra can tell you that.
2,360
Rules of thumb for "modern" statistics
I read this somewhere (probably on cross validated) and I haven't been able to find it anywhere, so here goes... If you've discovered an interesting result, it's probably wrong. It's very easy to get excited by the prospect of a staggering p-value or a near perfect cross validation error. I've personally ecstatically presented awesome (false) results to colleagues only to have to retract them. Most often, if it looks too good to be true... 'taint true. 'Taint true at all.
2,361
Rules of thumb for "modern" statistics
Try to be valiant rather than virtuous That is, don't let petty signs of non-Normality, non-independence or non-linearity etc. block your road if such indications need to be disregarded in order to have the data speak loud and clear. -- In Danish, 'dristig' vs. 'dydig' are the adjectives.
2,362
Rules of thumb for "modern" statistics
When analyzing longitudinal data, be sure to check that variables are coded the same way in each time period. While writing my dissertation, which entailed analysis of secondary data, I spent a week or so utterly baffled by a 1-unit shift in mean depression scores in an otherwise stable series of yearly means: it turned out that for one of the years in my data set, the scale items for a validated instrument had been coded 1–4 instead of 0–3.
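A hedged R sketch of the kind of check that would have caught this (long_data, item1 and year are hypothetical names in a long-format data set):
    tapply(long_data$item1, long_data$year, range)   # item range per wave; these should all match
    table(long_data$year, long_data$item1)           # full cross-tab of observed codes by year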
2,363
Rules of thumb for "modern" statistics
Your hypothesis should drive your choice of model, not the other way around. To paraphrase Maslow, if you are a hammer, everything looks like a nail. Specific models come with blinders and assumptions about the world built right in: for example non-dynamic models choke on treatment-outcome feedback.
2,364
Rules of thumb for "modern" statistics
Use simulation to check where the structure of your model may be creating "results" which are simply mathematical artifacts of your model's assumptions. Perform your analysis on rerandomized variables, or on simulated variables known to be uncorrelated with one another. Do this many times and contrast averaged point estimates (and confidence or credible intervals) with the results you obtain on actual data: are they all that different?
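One common version of this check is a permutation sketch like the following (dat, y, x1 and x2 are hypothetical names); shuffling the outcome destroys any real association, so whatever "effects" remain are artifacts of the model structure and chance alone.
    real_beta <- coef(lm(y ~ x1 + x2, data = dat))["x1"]
    null_beta <- replicate(1000, {
      shuffled   <- dat
      shuffled$y <- sample(shuffled$y)               # rerandomize the outcome
      coef(lm(y ~ x1 + x2, data = shuffled))["x1"]
    })
    quantile(null_beta, c(0.025, 0.975))             # range produced by structure/chance alone
    real_beta                                        # compare the real-data estimate with that range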
2,365
Rules of thumb for "modern" statistics
I am a data analyst rather than a statistician but these are my suggestions. 1) Before you analyze data, make sure the assumptions of your method are right. Once you see results they can be hard to forget even after you fix the problems and the results change. 2) It helps to know your data. I run time series and got a result that made little sense given recent years' data. I reviewed the methods in light of that and discovered the averaging of models in the method was distorting results for one period (and a structural break had occurred). 3) Be careful about rules of thumb. They reflect the experiences of individual researchers from their own data, and if their field is very different from yours their conclusions may not be correct for your data. Moreover, and this was a shock to me, statisticians often disagree on key points. 4) Try to analyze data with different methods and see if the results are similar. Understand that no method is perfect and be careful to check when you can for violations of the assumptions.
2,366
Why is it possible to get significant F statistic (p<.001) but non-significant regressor t-tests?
As Rob mentions, this occurs when you have highly correlated variables. The standard example I use is predicting weight from shoe size. You can predict weight equally well with the right or left shoe size. But together it doesn't work out. Brief simulation example:
    RSS = 3:10                                       # Right shoe size
    LSS = rnorm(length(RSS), RSS, 0.1)               # Left shoe size - similar to RSS
    cor(LSS, RSS)                                    # correlation ~ 0.99
    weights = 120 + rnorm(length(RSS), 10*RSS, 10)
    ## Fit a joint model
    m = lm(weights ~ LSS + RSS)
    ## The overall F-test is significant (small p-value),
    ## but neither LSS nor RSS is individually significant
    summary(m)
    ## Fitting RSS or LSS separately gives a significant result.
    summary(lm(weights ~ LSS))
2,367
Why is it possible to get significant F statistic (p<.001) but non-significant regressor t-tests?
It takes very little correlation among the independent variables to cause this. To see why, try the following: Draw 50 sets of ten vectors $(x_1, x_2, \ldots, x_{10})$ with coefficients iid standard normal. Compute $y_i = (x_i + x_{i+1})/\sqrt{2}$ for $i = 1, 2, \ldots, 9$. This makes the $y_i$ individually standard normal but with some correlations among them. Compute $w = x_1 + x_2 + \cdots + x_{10}$. Note that $w = \sqrt{2}(y_1 + y_3 + y_5 + y_7 + y_9)$. Add some independent normally distributed error to $w$. With a little experimentation I found that $z = w + \varepsilon$ with $\varepsilon \sim N(0, 6)$ works pretty well. Thus, $z$ is the sum of the $x_i$ plus some error. It is also the sum of some of the $y_i$ plus the same error. We will consider the $y_i$ to be the independent variables and $z$ the dependent variable. Here's a scatterplot matrix of one such dataset, with $z$ along the top and left and the $y_i$ proceeding in order. The expected correlations among $y_i$ and $y_j$ are $1/2$ when $|i-j|=1$ and $0$ otherwise. The realized correlations range up to 62%. They show up as tighter scatterplots next to the diagonal. Look at the regression of $z$ against the $y_i$:
          Source |       SS       df       MS              Number of obs =      50
    -------------+------------------------------           F(  9,    40) =    4.57
           Model |  1684.15999     9  187.128887           Prob > F      =  0.0003
        Residual |  1636.70545    40  40.9176363           R-squared     =  0.5071
    -------------+------------------------------           Adj R-squared =  0.3963
           Total |  3320.86544    49  67.7727641           Root MSE      =  6.3967
    ------------------------------------------------------------------------------
               z |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
    -------------+----------------------------------------------------------------
              y1 |   2.184007   1.264074     1.73   0.092    -.3707815    4.738795
              y2 |   1.537829   1.809436     0.85   0.400    -2.119178    5.194837
              y3 |   2.621185   2.140416     1.22   0.228    -1.704757    6.947127
              y4 |   .6024704   2.176045     0.28   0.783    -3.795481    5.000421
              y5 |   1.692758   2.196725     0.77   0.445    -2.746989    6.132506
              y6 |   .0290429   2.094395     0.01   0.989    -4.203888    4.261974
              y7 |   .7794273   2.197227     0.35   0.725    -3.661333    5.220188
              y8 |  -2.485206    2.19327    -1.13   0.264     -6.91797    1.947558
              y9 |   1.844671   1.744538     1.06   0.297    -1.681172    5.370514
           _cons |   .8498024   .9613522     0.88   0.382    -1.093163    2.792768
    ------------------------------------------------------------------------------
The F statistic is highly significant but none of the independent variables is, even without any adjustment for all 9 of them. To see what's going on, consider the regression of $z$ against just the odd-numbered $y_i$:
          Source |       SS       df       MS              Number of obs =      50
    -------------+------------------------------           F(  5,    44) =    7.77
           Model |  1556.88498     5  311.376997           Prob > F      =  0.0000
        Residual |  1763.98046    44  40.0904649           R-squared     =  0.4688
    -------------+------------------------------           Adj R-squared =  0.4085
           Total |  3320.86544    49  67.7727641           Root MSE      =  6.3317
    ------------------------------------------------------------------------------
               z |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
    -------------+----------------------------------------------------------------
              y1 |   2.943948   .8138525     3.62   0.001     1.303736     4.58416
              y3 |   3.403871   1.080173     3.15   0.003     1.226925    5.580818
              y5 |   2.458887    .955118     2.57   0.013      .533973    4.383801
              y7 |  -.3859711   .9742503    -0.40   0.694    -2.349443    1.577501
              y9 |   .1298614   .9795983     0.13   0.895    -1.844389    2.104112
           _cons |   1.118512   .9241601     1.21   0.233    -.7440107    2.981034
    ------------------------------------------------------------------------------
Some of these variables are highly significant, even with a Bonferroni adjustment. (There's much more that can be said by looking at these results, but it would take us away from the main point.) The intuition behind this is that $z$ depends primarily on a subset of the variables (but not necessarily on a unique subset). The complement of this subset ($y_2, y_4, y_6, y_8$) adds essentially no information about $z$ due to correlations—however slight—with the subset itself. This sort of situation will arise in time series analysis. We can consider the subscripts to be times. The construction of the $y_i$ has induced a short-range serial correlation among them, much like many time series. Due to this, we lose little information by subsampling the series at regular intervals. One conclusion we can draw from this is that when too many variables are included in a model they can mask the truly significant ones. The first sign of this is the highly significant overall F statistic accompanied by not-so-significant t-tests for the individual coefficients. (Even when some of the variables are individually significant, this does not automatically mean the others are not. That's one of the basic defects of stepwise regression strategies: they fall victim to this masking problem.) Incidentally, the variance inflation factors in the first regression range from 2.55 to 6.09 with a mean of 4.79: just on the borderline of diagnosing some multicollinearity according to the most conservative rules of thumb; well below the threshold according to other rules (where 10 is an upper cutoff).
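For readers who want to reproduce the flavour of this construction without Stata, here is a hedged R sketch of the set-up (seed and noise level arbitrary, so the exact numbers will differ from the output above):
    set.seed(17)
    n <- 50
    x <- matrix(rnorm(n * 10), n, 10)
    y <- sapply(1:9, function(i) (x[, i] + x[, i + 1]) / sqrt(2))
    colnames(y) <- paste0("y", 1:9)
    z <- rowSums(x) + rnorm(n, 0, 6)
    summary(lm(z ~ y))                      # overall F usually significant, individual t's usually not
    summary(lm(z ~ y[, c(1, 3, 5, 7, 9)]))  # several odd-numbered terms usually significant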
2,368
Why is it possible to get significant F statistic (p<.001) but non-significant regressor t-tests?
Multicollinearity: As you note, and as has been discussed in this previous question, high levels of multicollinearity are one major cause of a statistically significant $R^2$ but statistically non-significant predictors. Of course, multicollinearity is not just about an absolute threshold. Standard errors on regression coefficients will increase as intercorrelations with the focal predictor increase. Multiple almost-significant predictors: Even if you had no multicollinearity, you can still get non-significant predictors and an overall significant model if two or more individual predictors are close to significant and thus, collectively, the overall prediction passes the threshold of statistical significance. For example, using an alpha of .05, if you had two predictors with p-values of .06 and .07, then I wouldn't be surprised if the overall model had p < .05.
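A hedged simulation of the second point (effect sizes and sample size are arbitrary): two independent predictors with modest effects can each miss the 0.05 cutoff while the overall F-test clears it.
    n  <- 40
    x1 <- rnorm(n); x2 <- rnorm(n)
    y  <- 0.3 * x1 + 0.3 * x2 + rnorm(n)
    summary(lm(y ~ x1 + x2))   # p-values vary from run to run, but with modest effects like these
                               # the two t-tests often sit just above .05 while the overall
                               # F-test p-value is clearly below it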
2,369
Why is it possible to get significant F statistic (p<.001) but non-significant regressor t-tests?
This happens when the predictors are highly correlated. Imagine a situation where there are only two predictors with very high correlation. Individually, they both also correlate closely with the response variable. Consequently, the F-test has a low p-value (it is saying that the predictors together are highly significant in explaining the variation in the response variable). But the t-test for each predictor has a high p-value because after allowing for the effect of the other predictor there is not much left to explain.
2,370
Why is it possible to get significant F statistic (p<.001) but non-significant regressor t-tests?
Consider the following model: $ X_1 \sim N(0,1)$, $X_2 = a X_1 + \delta$, $Y = bX_1 + cX_2 + \epsilon$, $\delta$, $\epsilon$ and $X_1$ are all mutually independent $N(0,1)$. Then $${\rm Cov}(X_2,Y) = {\rm E}[(aX_1+\delta)(bX_1+cX_2+\epsilon)]={\rm E}[(aX_1+\delta)(\{b+ac\}X_1+c\delta+\epsilon)]=a(b+ac)+c$$ We can set this to zero with say $a=1$, $b=2$ and $c=-1$. Yet all the relations will obviously be there and easily detectable with regression analysis. You said that you understand the issue of variables being correlated and regression being insignificant better; it probably means that you have been conditioned by frequent mentioning of multicollinearity, but you would need to boost your understanding of the geometry of least squares.
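A quick R check of this construction with $a=1$, $b=2$, $c=-1$ (sample size and seed arbitrary): the marginal correlation of $X_2$ with $Y$ is essentially zero, yet the regression recovers both coefficients.
    set.seed(2)
    n  <- 1e5
    x1 <- rnorm(n)
    x2 <- x1 + rnorm(n)                 # a = 1
    y  <- 2 * x1 - x2 + rnorm(n)        # b = 2, c = -1
    cor(x2, y)                          # approximately 0
    coef(lm(y ~ x1 + x2))               # approximately (0, 2, -1)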
2,371
Why is it possible to get significant F statistic (p<.001) but non-significant regressor t-tests?
A keyword to search for would be "collinearity" or "multicollinearity". This can be detected using diagnostics like Variance Inflation Factors (VIFs) or methods as described in the textbook "Regression Diagnostics: Identifying Influential Data and Sources of Collinearity" by Belsley, Kuh and Welsch. VIFs are much easier to understand, but they can't deal with collinearity involving the intercept (i.e., predictors that are almost constant by themselves or in a linear combination) - conversely, the BKW diagnostics are far less intuitive but can deal with collinearity involving the intercept.
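In R, VIFs are available, for example, from the car package; a hedged sketch (fit is assumed to be an existing lm object, and x1, x2, x3, dat are hypothetical names):
    library(car)   # assumes the car package is installed
    vif(fit)       # one VIF per predictor; large values flag collinearity
    ## by hand for one predictor: 1 / (1 - R^2 from regressing it on the other predictors)
    1 / (1 - summary(lm(x1 ~ x2 + x3, data = dat))$r.squared)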
2,372
Why is it possible to get significant F statistic (p<.001) but non-significant regressor t-tests?
The answer you get depends on the question you ask. In addition to the points already made, the individual parameters' F values and the overall model F value answer different questions, so they get different answers. I have seen this happen even when the individual F values are not that close to significant, especially if the model has more than 2 or 3 IVs. I do not know of any way to combine the individual p-values and get anything meaningful, although there may be a way.
2,373
Why is it possible to get significant F statistic (p<.001) but non-significant regressor t-tests?
One other thing to keep in mind is that the tests on the individual coefficients each assume that all of the other predictors are in the model. In other words, each predictor is not significant as long as all of the other predictors are in the model. There must be some interaction or interdependence between two or more of your predictors. As someone else asked above: how did you diagnose a lack of multicollinearity?
2,374
Why is it possible to get significant F statistic (p<.001) but non-significant regressor t-tests?
One way to understand this is the geometry of least squares as @StasK suggests. Another is to realize it means that X is related to Y when controlling for the other variables, but not alone. You say X relates to unique variance in Y. This is right. The unique variance in Y, though, is different from the total variance. So, what variance are the other variables removing? It would help if you could tell us your variables.
2,375
Why haven't robust (and resistant) statistics replaced classical techniques?
Researchers want small p-values, and you can get smaller p-values if you use methods that make stronger distributional assumptions. In other words, non-robust methods let you publish more papers. Of course more of these papers may be false positives, but a publication is a publication. That's a cynical explanation, but it's sometimes valid.
2,376
Why haven't robust (and resistant) statistics replaced classical techniques?
So 'classical models' (whatever they are - I assume you mean something like simple models taught in textbooks and estimated by ML) fail on some, perhaps many, real-world data sets. If a model fails then there are two basic approaches to fixing it: make fewer assumptions (less model), or make more assumptions (more model). Robust statistics, quasi-likelihood, and GEE approaches take the first approach by changing the estimation strategy to one where the model does not hold for all data points (robust) or need not characterize all aspects of the data (QL and GEE). The alternative is to try to build a model that explicitly models the source of contaminating data points, or the aspects of the original model that seem to be false, while keeping the estimation method the same as before. Some intuitively prefer the former (it's particularly popular in economics), and some intuitively prefer the latter (it's particularly popular among Bayesians, who tend to be happier with more complex models, particularly once they realize they're going to have to use simulation tools for inference anyway). Fat-tailed distributional assumptions, e.g. using the negative binomial rather than Poisson or t rather than normal, belong to the second strategy. Most things labelled 'robust statistics' belong to the first strategy. As a practical matter, deriving estimators for the first strategy for realistically complex problems seems to be quite hard. Not that that's a reason for not doing so, but it is perhaps an explanation for why it isn't done very often.
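A small contaminated-data sketch in R of the first strategy (same simple model, robust estimation), using rlm from MASS; the data and contamination are made up.
    library(MASS)
    set.seed(4)
    x <- runif(100)
    y <- 1 + 2 * x + rnorm(100, sd = 0.2)
    y[1:5] <- y[1:5] + 10        # a few grossly contaminated points
    coef(lm(y ~ x))              # least squares is noticeably pulled by the outliers
    coef(rlm(y ~ x))             # Huber M-estimation largely resists them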
2,377
Why haven't robust (and resistant) statistics replaced classical techniques?
I would suggest that it's a lag in teaching. Most people learn statistics at college or university. If statistics was not your first degree and you instead did a mathematics or computer science degree, then you probably only covered the fundamental statistics modules: probability, hypothesis testing and regression. This means that when faced with a problem you try to use what you know to solve it. Data isn't Normal - take logs. Data has annoying outliers - remove them. Unless you stumble across something else, it's difficult to do something better. It's really hard using Google to find something if you don't know what it's called! I think with all techniques it will take a while before the newer ones filter down. How long did it take standard hypothesis tests to become part of a standard statistics curriculum? BTW, with a statistics degree there will still be a lag in teaching - just a shorter one!
2,378
Why haven't robust (and resistant) statistics replaced classical techniques?
Statistics is a tool for non-statistical-minded researchers, and they just don't care. I once tried to help with a Medicine article my ex-wife was co-authoring. I wrote several pages describing the data, what it suggested, why certain observations had been excluded from the study... and the lead researcher, a doctor, threw it all away and asked someone to compute a p-value, which is all she (and just about everyone who would read the article) cared about.
2,379
Why haven't robust (and resistant) statistics replaced classical techniques?
Anyone trained in statistical data analysis at a reasonable level uses the concepts of robust statistics on a regular basis. Most researchers know enough to look for serious outliers and data recording errors; the policy of removing suspect data points goes back well into the 19th century with Lord Rayleigh, G.G. Stokes, and others of their age. If the question is: Why don't researchers use the more modern methods for computing location, scale, regression, etc. estimates? then the answer is given above -- the methods have largely been developed in the last 25 years, say 1985 - 2010. The lag for learning new methods factors in, as well as inertia compounded by the 'myth' that there is nothing wrong with blindly using classical methods. John Tukey comments that just which robust/resistant methods you use is not important—what is important is that you use some. It is perfectly proper to use both classical and robust/resistant methods routinely, and only worry when they differ enough to matter. But when they differ, you should think hard. If instead, the question is: Why don't researchers stop and ask questions about their data, instead of blindly applying highly unstable estimates? then the answer really comes down to training. There are far too many researchers who were never trained in statistics properly, summed up by the general reliance on p-values as the be-all and end-all of 'statistical significance'. @Kwak: Huber's estimates from the 1970s are robust, in the classical sense of the word: they resist outliers. And redescending estimators actually date well before the 1980s: the Princeton robustness study (of 1971) included the bisquare estimate of location, a redescending estimate.
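In the spirit of the Tukey advice quoted above, a hedged sketch of running classical and robust/resistant estimates side by side (simulated data; the MASS package is assumed available):
    library(MASS)
    set.seed(5)
    x <- c(rnorm(95), rnorm(5, mean = 20))   # 5% gross errors
    mean(x)                                  # classical estimate of location
    median(x)                                # resistant estimate
    huber(x)$mu                              # Huber M-estimate of location
When they agree, little is lost by quoting either; when they differ noticeably, that is the signal to think hard.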
2,380
Why haven't robust (and resistant) statistics replaced classical techniques?
I'll give an answer in two directions: things that are robust are not necessarily labeled robust, and if you believe robustness against everything exists, you are being naive. Statistical approaches that set the problem of robustness aside are sometimes not adapted to the real world, but are often more valuable (as concepts) than an algorithm that looks like kitchen-sink engineering. Development: First, there are many good approaches in statistics (you will find them in R packages, not necessarily with "robust" mentioned anywhere) that are naturally robust and tested on real data; the fact that an algorithm does not have "robust" in its name does not mean it is not robust. In any case, if you think being robust means being universal, you will never find any robust procedure (no free lunch): you need some knowledge of, and expertise with, the data you analyse in order to choose an adapted tool or to build an adapted model. On the other hand, some approaches in statistics are not robust because they are dedicated to a single type of model. It is sometimes good to work "in the laboratory" to try to understand things, and it is also good to treat problems separately in order to understand exactly which problem a given solution answers... this is how mathematicians work. The Gaussian model is an eloquent example: it is criticised so much because the Gaussian assumption is never fulfilled, yet it has produced perhaps 75% of the ideas used in applied statistics today. Do you really think all of this is just about writing papers to follow the publish-or-perish rule (which I dislike too, I agree)?
2,381
Why haven't robust (and resistant) statistics replaced classical techniques?
As someone who has learned a little bit of statistics for my own research, I'll guess that the reasons are pedagogical and inertial. I've observed within my own field that the order in which topics are taught reflects the history of the field. Those ideas which came first are taught first, and so on. For people who only dip into stats for cursory instruction, this means they'll learn classical stats first, and probably last. Then, even if they learn more, the classical material will stick with them better due to primacy effects. Also, everyone knows what a two-sample t-test is; far fewer know what a Mann-Whitney or Wilcoxon rank-sum test is. This means that I have to exert a little energy explaining my robust test, versus none at all with a classical test. Such conditions will obviously result in fewer people using robust methods than should.
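To illustrate the familiarity point (the simulated data and the single wild value below are invented for the example), the classical and the rank-based two-sample tests are equally easy to run in R; the only extra cost is the explanation:

set.seed(42)
g1 <- rnorm(20)
g2 <- rnorm(20, mean = 1); g2[1] <- 25    # one wild value in the second group
t.test(g1, g2)                            # two-sample t-test, sensitive to the outlier
wilcox.test(g1, g2)                       # Wilcoxon rank-sum (Mann-Whitney) test, resistant to it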
2,382
Why haven't robust (and resistant) statistics replaced classical techniques?
Wooldridge, "Introductory Econometrics - A Modern Approach", 2E, p. 261: If heteroskedasticity-robust standard errors are valid more often than the usual OLS standard errors, why do we bother with the usual standard errors at all? ... One reason they are still used in cross-sectional work is that, if the homoskedasticity assumption holds and the errors are normally distributed, then the usual t statistics have exact t distributions, regardless of the sample size. The robust standard errors and robust t statistics are justified only as the sample size becomes large. With small sample sizes, the robust t statistics can have distributions that are not very close to the t distribution, and that could throw off our inference. In large sample sizes, we can make a case for always reporting only the heteroskedasticity-robust standard errors in cross-sectional applications, and this practice is being followed more and more in applied work.
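For readers who want to try this in practice, one common route to heteroskedasticity-robust standard errors in R is via the sandwich and lmtest packages; the simulated heteroskedastic data below are only a placeholder:

library(sandwich)
library(lmtest)
set.seed(1)
x <- runif(200)
y <- 1 + 2 * x + rnorm(200, sd = 0.5 + 2 * x)    # error variance grows with x
fit <- lm(y ~ x)
coeftest(fit)                                    # usual OLS standard errors
coeftest(fit, vcov = vcovHC(fit, type = "HC1"))  # heteroskedasticity-robust (White/HC1) standard errors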
2,383
Why haven't robust (and resistant) statistics replaced classical techniques?
While they're not mutually exclusive, I think the growing popularity of Bayesian statistics is part of it. Bayesian statistics can achieve a lot of the same goals through priors and model averaging, and tend to be a bit more robust in practice.
2,384
Why haven't robust (and resistant) statistics replaced classical techniques?
I'm not a statistician, my experience in statistics is fairly limited; I just use robust statistics in computer vision / 3D reconstruction / pose estimation. Here is my take on the problem from the user's point of view: First, robust statistics are used a lot in engineering and science without being called "robust statistics". Many people use them intuitively, arriving at them while adjusting a specific method to a real-world problem. For example, iteratively reweighted least squares and trimmed means / trimmed least squares are commonly used; the users just don't know they are using robust statistics - they simply make the method work on real, non-synthetic data. Second, both "intuitive" and conscious robust statistics are practically always used where results are verifiable, or where clearly visible error metrics exist. If the results obtained under a normality assumption are obviously invalid or wrong, people start tinkering with weights, trimming and sampling, read some papers, and end up using robust estimators, whether they know the term or not. On the other hand, if the end result of the research is just some graphics and diagrams, and there is no incentive to verify the results, or if normal statistics produce results that are good enough - people just don't bother. And last, about the usefulness of robust statistics as a theory: while the theory itself is very interesting, it does not often give any practical advantage. Most robust estimators are fairly trivial and intuitive; people often reinvent them without any statistical knowledge. Theory such as breakdown-point estimation, asymptotics, data depth, heteroskedasticity, etc. allows a deeper understanding of the data, but in most cases it is simply unnecessary. One big exception is the intersection of robust statistics and compressive sensing, which has produced some new practical methods such as "cross-and-bouquet".
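Two of the "intuitive" devices mentioned above are essentially one-liners in R, which may help explain why people use them without thinking of them as robust statistics; the contaminated sample is invented for illustration:

set.seed(7)
x <- c(rnorm(50), 50, -40)   # mostly clean data plus two gross errors
mean(x)                      # ordinary mean, dragged around by the errors
mean(x, trim = 0.1)          # 10% trimmed mean (drops the most extreme 10% at each end)
library(MASS)
rlm(x ~ 1)                   # robust location via M-estimation (iteratively reweighted least squares)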
2,385
Why haven't robust (and resistant) statistics replaced classical techniques?
My knowledge of robust estimators is solely in regard to robust standard errors for regression parameters, so my comment will only be about those. I would suggest people read this article: On The So-Called "Huber Sandwich Estimator" and "Robust Standard Errors", by David A. Freedman, The American Statistician, Vol. 60, No. 4 (November 2006), pp. 299-302. doi:10.1198/000313006X152207 (PDF version). In particular, what concerns me about these approaches is not that they are wrong, but that they simply distract from bigger problems. Thus I entirely agree with Robin Girard's answer and his mention of "no free lunch".
2,386
Why haven't robust (and resistant) statistics replaced classical techniques?
The calculus and probability needed for robust statistics is (usually) harder, so (a) there is less theory and (b) it is harder to grasp.
2,387
Why haven't robust (and resistant) statistics replaced classical techniques?
I am surprised to see the Gauss-Markov theorem is not mentioned in this long list of answers, afaics: In a linear model with spherical errors (which along the way includes an assumption of no outliers, via a finite error variance), OLS is efficient in a class of linear unbiased estimators - there are (restrictive, to be sure) conditions under which "you can't do better than OLS". I am not arguing this should justify using OLS almost all of the time, but it sure contributes to why (especially since it is a good excuse to focus so much on OLS in teaching).
2,388
Why haven't robust (and resistant) statistics replaced classical techniques?
My guess would be that robust statistics are never sufficient, i.e. to be robust these statistics must discard some of the information in the sample. And I suspect that is not always a good thing. In other words, there is a trade-off between robustness and loss of information. For example, the median is robust because (unlike the mean) it effectively uses only the ordering of the elements and the value of the middle one(s) (in the discrete case): $$median(\{1, 2, 3, 4, 5\})=3=median(\{0.1, 0.2, 3, 4000, 5000\})$$
2,389
What is the lasso in regression analysis?
The LASSO (Least Absolute Shrinkage and Selection Operator) is a regression method that involves penalizing the absolute size of the regression coefficients. By penalizing (or equivalently constraining the sum of the absolute values of the estimates) you end up in a situation where some of the parameter estimates may be exactly zero. The larger the penalty applied, the further estimates are shrunk towards zero. This is convenient when we want some automatic feature/variable selection, or when dealing with highly correlated predictors, where standard regression will usually have regression coefficients that are 'too large'. https://web.stanford.edu/~hastie/ElemStatLearn/ (Free download) has a good description of the LASSO and related methods.
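As a quick, hedged illustration using the glmnet package (the simulated design below, with only three truly active predictors out of twenty, is made up for the example), increasing the penalty drives more and more coefficients exactly to zero; in practice the penalty is usually chosen by cross-validation, as in the last two lines:

library(glmnet)
set.seed(1)
n <- 100; p <- 20
X <- matrix(rnorm(n * p), n, p)
beta <- c(3, -2, 1.5, rep(0, p - 3))          # only the first three predictors matter
y <- drop(X %*% beta) + rnorm(n)
fit <- glmnet(X, y, alpha = 1)                # alpha = 1 gives the LASSO penalty
coef(fit, s = 0.1)                            # moderate penalty: a sparse coefficient vector
coef(fit, s = 2)                              # heavy penalty: almost everything shrunk to zero
cvfit <- cv.glmnet(X, y)                      # choose the penalty by cross-validation
coef(cvfit, s = "lambda.min")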
2,390
What is the lasso in regression analysis?
In "normal" regression (OLS) the goal is to minimize the residual sum of squares (RSS) in order to estimate the coefficients $$ \underset{\beta \in \mathbb{R}^p}{\operatorname{argmin}} \sum_{i=1}^{n} (Y_{i} - \sum_{j=1}^{p}X_{ij}\beta_{j})^{2} $$ In the case of LASSO regression you estimate the coefficients with a slightly different objective: $$ \underset{\beta \in \mathbb{R}^p}{\operatorname{argmin}} \sum_{i=1}^{n} (Y_{i} - \sum_{j=1}^{p}X_{ij}\beta_{j})^{2} \color{red}{+ \lambda \sum_{j=1}^{p}|\beta_{j}|} $$ The new part is highlighted in red: the sum of the absolute coefficient values, penalized by $\lambda$, so $\lambda$ controls the amount of (L1) regularization. Note that if $\lambda = 0$, this results in the same coefficients as ordinary least squares. The formula shows that in the case of the LASSO the $\operatorname{argmin}$ must keep both the RSS and the L1 regularization term (the new red part) small. If $\lambda = 1$, the red L1 penalty constrains the size of the coefficients so that a coefficient can only increase if this leads to the same amount of decrease in the RSS. More generally, the only way the coefficients can increase is if we get a comparable decrease in the residual sum of squares (RSS). Thus, the higher you set $\lambda$, the more penalty is applied to the coefficients, the smaller the coefficients become, and some may become exactly zero. That means the LASSO can result in parsimonious models by doing feature selection, and it prevents the model from overfitting. That said, you can use the LASSO if you have many features and your goal is to predict data rather than to interpret the coefficients of your model.
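One way to see why some coefficients become exactly zero is the orthonormal-design special case, where each LASSO coefficient is simply the OLS coefficient pushed towards zero by a fixed amount (soft-thresholding; the threshold is proportional to $\lambda$, with the exact constant depending on how the objective is scaled). The numbers below are hypothetical OLS coefficients, used only to show the effect:

soft_threshold <- function(b, gamma) sign(b) * pmax(abs(b) - gamma, 0)
b_ols <- c(-2.0, -0.3, 0.1, 0.8, 2.5)   # hypothetical OLS coefficients under an orthonormal design
soft_threshold(b_ols, gamma = 0.5)      # small coefficients become exactly 0, large ones shrink
soft_threshold(b_ols, gamma = 3.0)      # a large enough penalty zeroes everything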
2,391
What is the lasso in regression analysis?
LASSO regression is a type of regression analysis in which variable selection and regularization occur simultaneously. The method uses a penalty that affects the values of the regression coefficients: as the penalty increases, more coefficients are shrunk to exactly zero, and vice versa. It uses an L1 penalty, with a tuning parameter controlling the amount of shrinkage. As the tuning parameter increases, bias increases; as it decreases, variance increases. If the tuning parameter is zero, no coefficients are set to zero (we recover ordinary least squares), and as it tends to infinity, all the coefficients are shrunk to zero.
2,392
How to tell if data is "clustered" enough for clustering algorithms to produce meaningful results?
About k-means specifically, you can use the gap statistic. Basically, the idea is to compute a goodness-of-clustering measure based on the average dispersion compared to a reference distribution, for an increasing number of clusters. More information can be found in the original paper: Tibshirani, R., Walther, G., and Hastie, T. (2001). Estimating the number of clusters in a data set via the gap statistic. J. R. Statist. Soc. B, 63(2): 411-423. The answer that I provided to a related question highlights other general validity indices that might be used to check whether a given dataset exhibits some kind of structure. When you don't have any idea of what you would expect to find if there was noise only, a good approach is to use resampling and study cluster stability. In other words, resample your data (via the bootstrap or by adding small noise to it) and compute the "closeness" of the resulting partitions, as measured by Jaccard similarities. In short, this allows you to estimate the frequency with which similar clusters are recovered in the data. This method is readily available in the fpc R package as clusterboot(). It takes as input either raw data or a distance matrix, and allows you to apply a wide range of clustering methods (hierarchical, k-means, fuzzy methods). The method is discussed in the linked references: Hennig, C. (2007) Cluster-wise assessment of cluster stability. Computational Statistics and Data Analysis, 52, 258-271. Hennig, C. (2008) Dissolution point and isolation robustness: robustness criteria for general cluster analysis methods. Journal of Multivariate Analysis, 99, 1154-1176. Below is a small demonstration with the k-means algorithm.

sim.xy <- function(n, mean, sd) cbind(rnorm(n, mean[1], sd[1]),
                                      rnorm(n, mean[2], sd[2]))
xy <- rbind(sim.xy(100, c(0, 0),     c(.2, .2)),
            sim.xy(100, c(2.5, 0),   c(.4, .2)),
            sim.xy(100, c(1.25, .5), c(.3, .2)))
library(fpc)
km.boot <- clusterboot(xy, B = 20, bootmethod = "boot",
                       clustermethod = kmeansCBI,
                       krange = 3, seed = 15555)

The results are quite positive for this artificial (and well-structured) dataset, since none of the three clusters (krange) were dissolved across the samples, and the average clusterwise Jaccard similarity is > 0.95 for all clusters. Below are the results on the 20 bootstrap samples. As can be seen, statistical units tend to stay grouped in the same cluster, with few exceptions for those observations lying in between. You can extend this idea to any validity index, of course: choose a new series of observations by bootstrap (with replacement), compute your statistic (e.g., silhouette width, cophenetic correlation, Hubert's gamma, within sum of squares) for a range of cluster numbers (e.g., 2 to 10), repeat 100 or 500 times, and look at the boxplot of your statistic as a function of the number of clusters. Here is what I get with the same simulated dataset, but using Ward's hierarchical clustering and considering the cophenetic correlation (which assesses how well distance information is reproduced in the resulting partitions) and silhouette width (a combined measure assessing intra-cluster homogeneity and inter-cluster separation). The cophenetic correlation ranges from 0.6267 to 0.7511 with a median value of 0.7031 (500 bootstrap samples). Silhouette width appears to be maximal when we consider 3 clusters (median 0.8408, range 0.7371-0.8769).
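For completeness, the gap statistic mentioned at the start of this answer is available as clusGap() in the cluster package; applied to the same simulated xy data from above (with B kept small here purely to shorten the run), it would look roughly like this:

library(cluster)
gap <- clusGap(xy, FUN = kmeans, nstart = 20, K.max = 8, B = 60)
print(gap, method = "firstSEmax")   # reports the estimated number of clusters
plot(gap)                            # gap curve as a function of k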
2,393
How to tell if data is "clustered" enough for clustering algorithms to produce meaningful results?
When are results meaningful anyway? In particular k-means results? Fact is that k-means optimizes a certain mathematical statistic. There is no "meaningful" associated with this. In particular in high dimensional data, the first question should be: is the Euclidean distance still meaningful? If not, don't use k-means. Euclidean distance is meaningful in the physical world, but it quickly loses meaning when you have other data. In particular, when you artificially transform data into a vector space, is there any reason why it should be Euclidean? If you take the classic "old faithful" data set and run k-means on it without normalization, but with pure Euclidean distance, it already is no longer meaningful. EM, which in fact uses some form of "cluster local" Mahalanobis distance, will work a lot better. In particular, it adapts to the axes having very different scales. Btw, a key strength of k-means is that it will actually just always partition the data, no matter what it looks like. You can use k-means to partition uniform noise into k clusters. One can claim that obviously, k-means clusters are not meaningful. Or one can accept this as: the user wanted to partition the data to minimize squared Euclidean distances, without having a requirement of the clusters to be "meaningful".
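The point that k-means will happily partition structureless data is easy to verify yourself; in the sketch below the input is pure uniform noise (no clusters by construction), yet the algorithm still returns three tidy-looking groups:

set.seed(123)
noise <- matrix(runif(2000), ncol = 2)        # 1000 points of uniform noise in the unit square
km <- kmeans(noise, centers = 3, nstart = 20)
table(km$cluster)                             # three well-filled "clusters" of noise
plot(noise, col = km$cluster, pch = 20)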
2,394
How to tell if data is "clustered" enough for clustering algorithms to produce meaningful results?
One way to quickly visualize whether high dimensional data exhibits enough clustering is to use t-Distributed Stochastic Neighbor Embedding (t-SNE). It projects the data to some low dimensional space (e.g. 2D, 3D) and does a pretty good job at keeping cluster structure if any. E.g. MNIST data set: Olivetti faces data set:
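If you work in R, one implementation is the Rtsne package; a minimal sketch (the built-in iris measurements stand in for your own high-dimensional matrix, and duplicate rows are removed because t-SNE requires distinct points) is:

library(Rtsne)
set.seed(1)
X <- unique(as.matrix(iris[, 1:4]))        # any numeric matrix of observations; iris is just a stand-in
tsne <- Rtsne(X, dims = 2, perplexity = 30)
plot(tsne$Y, col = "grey30", pch = 20,
     main = "t-SNE embedding: look for visually separated groups")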
2,395
How to tell if data is "clustered" enough for clustering algorithms to produce meaningful results?
Surely, the ability to visually discern the clusters in a plottable number of dimensions is a doubtful criterion for the usefulness of a clustering algorithm, especially if this dimension reduction is done independently of the clustering itself (i.e., in a vain attempt to find out whether clustering will work). In fact, clustering methods have their highest value in finding clusters where the human eye/mind is unable to see them. The simple answer is: do the clustering, then find out whether it worked (with any of the criteria you are interested in; see also @Jeff's answer).
2,396
How to tell if data is "clustered" enough for clustering algorithms to produce meaningful results?
I have just started using clustering algorithms recently, so hopefully someone more knowledgeable can provide a more complete answer, but here are some thoughts: 'Meaningful', as I'm sure you're aware, is very subjective. So whether the clustering is good enough is completely dependent upon why you need to cluster in the first place. If you're trying to predict group membership, it's likely that any clustering will do better than chance (and no worse), so the results should be meaningful to some degree. If you want to know how reliable this clustering is, you need some metric to compare it to. If you have a set of entities with known memberships, you can use discriminant analysis to see how good the predictions were. If you don't have a set of entities with known memberships, you'll have to know what variance is typical of clusters in your field. Physical attributes of entities with rigid categories are likely to have much lower in-group variance than psychometric data on humans, but that doesn't necessarily make the clustering 'worse'. Your second question alludes to 'What value of k should I choose?' Again, there's no hard answer here. In the absence of any a priori set of categories, you probably want to minimize the number of clusters while also minimizing the average cluster variance. A simple approach might be to plot 'number of clusters' vs 'average cluster variance', and look for the 'elbow', where adding more clusters does not have a significant impact on your cluster variance. I wouldn't say the results from k-means are meaningless if they cannot be visualized, but clustering is certainly appealing when the clusters are visually apparent. This, again, just leads back to the question: why do you need to do clustering, and how reliable do you need to be? Ultimately, this is a question that you need to answer based on how you will use the data.
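A bare-bones version of the elbow plot described above, using total within-cluster sum of squares as a stand-in for 'average cluster variance' (the two-group simulated data are only a placeholder for your own), might look like:

set.seed(1)
X <- rbind(matrix(rnorm(200, 0), ncol = 2),   # two artificial groups, 100 points each
           matrix(rnorm(200, 4), ncol = 2))
wss <- sapply(1:10, function(k) kmeans(X, centers = k, nstart = 20)$tot.withinss)
plot(1:10, wss, type = "b", xlab = "number of clusters k",
     ylab = "total within-cluster sum of squares")
# look for the k after which adding clusters barely reduces the within-cluster variance (the "elbow")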
2,397
How to tell if data is "clustered" enough for clustering algorithms to produce meaningful results?
To tell whether a clustering is meaningful, you can run an algorithm to count the number of clusters, and see if it outputs something greater than 1. Like chl said, one cluster-counting algorithm is the gap statistic algorithm. Roughly, this computes the total cluster variance given your actual data, and compares it against the total cluster variance of data that should not have any clusters at all (e.g., a dataset formed by sampling uniformly within the same bounds as your actual data). The number of clusters $k$ is then chosen to be the $k$ that gives the largest "gap" between these two cluster variances. Another algorithm is the prediction strength algorithm (which is similar to the rest of chl's answer). Roughly, this performs a bunch of k-means clusterings, and computes the proportion of points that stay in the same cluster. $k$ is then chosen to be the smallest $k$ that gives a proportion higher than some threshold (e.g., a threshold of 0.8).
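The prediction strength algorithm is also implemented in the fpc package as prediction.strength(); the sketch below (with invented two-cluster data, and assuming the default 0.8 cutoff) shows roughly how it is called:

library(fpc)
set.seed(1)
dat <- rbind(matrix(rnorm(200, 0), ncol = 2),
             matrix(rnorm(200, 5), ncol = 2))
ps <- prediction.strength(dat, Gmin = 2, Gmax = 6, M = 50, clustermethod = kmeansCBI)
ps$mean.pred   # mean prediction strength for each number of clusters
ps$optimalk    # largest k whose prediction strength exceeds the cutoff (default 0.8)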
2,398
ImageNet: what is top-1 and top-5 error rate?
[...] where the top-5 error rate is the fraction of test images for which the correct label is not among the five labels considered most probable by the model. First, you make a prediction using the CNN and obtain the predicted class multinomial distribution ($\sum p_{class} = 1$). Now, in the case of the top-1 score, you check if the top class (the one with the highest probability) is the same as the target label. In the case of the top-5 score, you check if the target label is one of your top 5 predictions (the 5 with the highest probabilities). In both cases, the top score is computed as the number of times a predicted label matched the target label, divided by the number of data points evaluated. Finally, when 5 CNNs are used, you first average their predictions and then follow the same procedure for calculating the top-1 and top-5 scores.
2,399
ImageNet: what is top-1 and top-5 error rate?
Your classifier gives you a probability for each class. Let's say we had only "cat", "dog", "house", "mouse" as classes (in this order). Then the classifier gives something like 0.1; 0.2; 0.0; 0.7 as a result. The top-1 class is "mouse". The top-2 classes are {mouse, dog}. If the correct class was "dog", it would be counted as "correct" for the top-2 accuracy, but as wrong for the top-1 accuracy. Hence, in a classification problem with $k$ possible classes, every classifier has 100% top-$k$ accuracy. The "normal" accuracy is top-1.
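Continuing the toy example (classes ordered as cat, dog, house, mouse; the second image and all numbers are invented), a small R sketch of how top-$k$ accuracy could be computed from a matrix of predicted class probabilities:

classes <- c("cat", "dog", "house", "mouse")
probs <- rbind(c(0.1, 0.2, 0.0, 0.7),    # true label: dog
               c(0.6, 0.1, 0.2, 0.1))    # true label: cat
truth <- c("dog", "cat")
top_k_accuracy <- function(probs, truth, classes, k) {
  hits <- sapply(seq_len(nrow(probs)), function(i) {
    top <- classes[order(probs[i, ], decreasing = TRUE)[1:k]]
    truth[i] %in% top
  })
  mean(hits)
}
top_k_accuracy(probs, truth, classes, k = 1)   # 0.5: only the second image is correct at top-1
top_k_accuracy(probs, truth, classes, k = 2)   # 1.0: "dog" is within the top-2 for the first image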
2,400
What are disadvantages of using the lasso for variable selection for regression?
There is NO reason to do stepwise selection. It's just wrong. LASSO/LAR are the best automatic methods. But they are automatic methods. They let the analyst not think. In many analyses, some variables should be in the model REGARDLESS of any measure of significance. Sometimes they are necessary control variables. Other times, finding a small effect can be substantively important.