Why is a deterministic trend process not stationary?
A process $Y_t$ is stationary when for any vector of times $(t_1,...,t_n)$ and for every time interval $\tau$ the joint distribution of the vector $$ (Y_{t_1},...,Y_{t_n}) $$ coincides with the joint distribution of the vector $$ (Y_{t_1+\tau},...,Y_{t_n+\tau}) $$

In your example, the distribution of the "vector" $$ (Y_1) $$ is normal (I'm assuming that the shocks $\varepsilon_t$ are normal in your example) with mean $a+b$ and variance equal to the variance of $\varepsilon_t$. On the other hand, the distribution of the "vector" $$ (Y_2)=(Y_{1+\tau}), $$ where $\tau=1$, is normal with mean $a+2b$ and the same variance. Therefore the process cannot be stationary. In the same way you can prove that the second example you show is not stationary (its variance grows with $t$). From what we have said above you should see that a stationary process always has constant mean and variance. I think you are confusing stationary processes, e.g. AR(1), with processes with stationary increments, e.g. random walks.
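A quick R sketch of this argument (the values $a = 1$, $b = 0.5$ are chosen only for illustration): simulate many replications of the trend process $Y_t = a + bt + \varepsilon_t$ and compare the sample means of $Y_1$ and $Y_2$; they settle near $a+b$ and $a+2b$ rather than a common value.

```r
## Replications of Y_1 and Y_2 from the deterministic trend process
set.seed(1)
a <- 1; b <- 0.5; n_rep <- 10000
Y1 <- a + b * 1 + rnorm(n_rep)   # replications of Y_1
Y2 <- a + b * 2 + rnorm(n_rep)   # replications of Y_2
c(mean(Y1), mean(Y2))            # approx. a + b = 1.5 and a + 2b = 2.0
```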
SVM vs. artificial neural network
The no-free-lunch theorems suggest there is no classifier that is a priori superior to any other, and the choice of classifier depends on the nature of the particular data set. I wouldn't commit myself to a choice of classifier and would instead evaluate several methods. The classes are only mildly imbalanced, so I suspect that shouldn't be a key factor in the decision of which classifier to use. A more important question is whether you want a simple discrete classification or estimates of the probabilities of class membership, for example because you have unknown or variable mis-classification costs or relative class frequencies, or because it would be beneficial to have a "reject" option. In that case the SVM is not a good choice, as it is designed for discrete classification; rather than post-processing its output to get probabilities, it is better to use a method that was designed to provide a probabilistic output in the first place, such as kernel logistic regression.
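To see the difference in practice, here is a minimal R sketch on a toy binary subset of iris (it assumes the e1071 package; kernel logistic regression itself is not in base R): the SVM only yields class probabilities via Platt-scaling post-processing (probability = TRUE), whereas logistic regression is probabilistic by construction.

```r
library(e1071)

d <- iris[iris$Species != "setosa", ]      # toy binary problem
d$Species <- droplevels(d$Species)         # levels: versicolor, virginica

## SVM: probabilities come from Platt scaling fitted after the fact
svm_fit  <- svm(Species ~ Sepal.Length + Sepal.Width, data = d,
                probability = TRUE)
svm_prob <- attr(predict(svm_fit, d, probability = TRUE), "probabilities")

## Logistic regression: probabilistic by construction (P(virginica))
glm_fit  <- glm(Species ~ Sepal.Length + Sepal.Width, data = d,
                family = binomial)
glm_prob <- predict(glm_fit, type = "response")

head(cbind(svm = svm_prob[, "virginica"], logistic = glm_prob))
```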
SVM vs. artificial neural network
For specificity, in the following I'm going to assume that an ANN here means a feedforward multilayer neural network / perceptron as discussed in e.g. Bishop 1996, and an SVM is the vanilla version, e.g. from Hastie and Tibshirani. @Dikran Marsupial's points about the structure of the domain are important ones. In fact you might want to read DM's other answer about SVMs. The possibility of having a posterior over classes is important if you expect to apply a loss function or otherwise act on your level of classification certainty as well as the actual classification. If not: well, not. In addition, I can see four more ways to choose.

Loss function. One way to distinguish the two is to decide whose loss function you prefer. Classically, ANNs have smooth loss functions, e.g. cross-entropy for multi-class classification. SVMs tend to have some kind of 'hinge loss': zero up to a point, then increasing (sketched below). One of these may be a more natural fit to your problem.

Data size. Another consideration is data size and storage. You mention your category balance but not the total size of the data. SVMs by definition keep and use only the 'support vectors', a subset of observations that anchor the separating hyperplane(s). This can make for a small final classifier. Also, traditional ANN training can be slow - the space of functions as smooth as the implicit Gaussian process that your ANN is approximating with its finite number of hidden nodes is large...

Multiple classes. If you have multi-category data, SVMs have several ways to construct the necessary multi-class classifier out of individual two-class SVM models. At least three methods are available which, as @fabee points out, may not give the same answers. His reference looks like a useful one. The options are a lot clearer in ordinary smoothed statistical classification model territory, where your ANN belongs.

Interpretability. If you care about discerning the importance of different covariates, then ANNs give you hyperparameters to do so, although more traditional methods might be as or more efficient and straightforward at this, e.g. the Lasso (L1 regularisation) for linear regression models. If prediction success is your only goal then this aspect is, of course, irrelevant.
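For the loss-function point, here is a small base-R sketch of the two losses as functions of the margin $m = y f(x)$, with labels $y \in \{-1,+1\}$ (binary case for simplicity):

```r
hinge   <- function(m) pmax(0, 1 - m)      # SVM hinge loss
logloss <- function(m) log(1 + exp(-m))    # smooth logistic / cross-entropy loss

m <- seq(-2, 2, by = 0.01)
plot(m, hinge(m), type = "l", xlab = "margin", ylab = "loss")
lines(m, logloss(m), lty = 2)
legend("topright", legend = c("hinge", "logistic"), lty = 1:2)
```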
SVM vs. artificial neural network
This question cannot be answered generically. It even depends on the multi-class classification strategy that you are using (i.e. one-vs-one, one-vs-rest, ...). Personally, I would use an SVM and choose a multi-class strategy that fits my problem and my computational resources. A nice paper on how to do that is: "Reducing Multiclass to Binary: A Unifying Approach for Margin Classifiers", Erin L. Allwein, Robert E. Schapire, Yoram Singer, Journal of Machine Learning Research (2001). If you want every class of your dataset to be equally important, you can either use the quick and dirty hack of cloning data points in the smaller classes until each class has the same number of data points, or you can use an SVM implementation that allows you to set a different penalization constant C for each class (sketched below).
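As a rough illustration of the class-weighting option (assuming the e1071 package, whose svm() handles multi-class via one-vs-one internally and exposes a class.weights argument; the weights below are arbitrary and iris is used only as a stand-in dataset):

```r
library(e1071)

## Heavier penalty for errors on one class instead of cloning its points
w   <- c(setosa = 1, versicolor = 1, virginica = 5)
fit <- svm(Species ~ ., data = iris, class.weights = w)
table(predicted = predict(fit, iris), true = iris$Species)
```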
The disadvantage of using F-score in feature selection
Three years late, but it might help other people. I guess you refer to the F-score used in the paper of Chen and Lin (2006), "Combining SVMs with Various Feature Selection Strategies". They use an example to explain what you ask; I quote their words: "Both features of this data have low F-scores as the denominator (the sum of variances of the positive and negative sets) is much larger than the numerator." In other words, the F-score reveals the discriminative power of each feature independently of the others. One score is computed for the first feature, and another score is computed for the second feature. But it does not indicate anything about the combination of both features (mutual information). This is the main weakness of the F-score.
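For concreteness, here is that F-score written out as an R function, following the definition as I recall it from the paper: for feature $i$, the numerator is $(\bar{x}_i^{(+)}-\bar{x}_i)^2 + (\bar{x}_i^{(-)}-\bar{x}_i)^2$ and the denominator is the sum of the sample variances of the feature within the positive and the negative set.

```r
## x: numeric feature, y: factor with exactly two levels
f_score <- function(x, y) {
  pos <- x[y == levels(y)[1]]
  neg <- x[y == levels(y)[2]]
  num <- (mean(pos) - mean(x))^2 + (mean(neg) - mean(x))^2
  den <- var(pos) + var(neg)         # within-class variances (1/(n-1) form)
  num / den
}
```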
The disadvantage of using F-score in feature selection
In this reply, I assume that the F-score in question is the one described in the article pointed out by @Guillaume Sutra, which also gives its formal definition. Let us first look at the intuition behind the F-score for feature selection. For simplicity, consider a binary classification problem (each sample in the dataset has one of two classes), and assume we have a large dataset in the format:

x1    x2    x3    ...  class
0.3   0.5   0.1   ...  A
0.1   0.7   0.4   ...  B
0.1   0.1   0.2   ...  A
0.2   0.4   0.2   ...  A
0.5   0.7   0.8   ...  B
...   ...   ...   ...  ...

The F-score is a univariate feature selection method, which means that it scores each of the features (x1, x2, x3, ...) individually (a higher score is better), without considering that a feature may become more useful in combination with another feature. For example, suppose we want to score the feature x2, and that, with respect to x2, the values for class A and class B form two roughly normal distributions that partly overlap (in the original post this is illustrated with a quick Microsoft Paint drawing of the two curves). If we want to predict the class of a data point based only on x2, that overlap makes it harder to build a good predictive model: in the extreme case of near-complete separation, prediction is easy; in the extreme case of near-complete overlap, it is hopeless. Notice that the overlap is reduced when (1) the means of A and B are more separated (the numerator in the F-score definition) and (2) the variances of A and B are small (the denominator in the F-score definition). The F-score captures exactly these two properties, so a high F-score reflects a small overlap. Regarding your question about the F-score not revealing mutual information, consider an example inspired by the article: each of the features x1 and x2 has a low F-score because of its high variance, yet by combining the two features you can perfectly separate the two classes A and B. Unfortunately, the F-score does not consider this.
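To make the last point concrete, here is a small simulated R example (toy numbers; it reuses the f_score helper sketched in the previous answer): each feature has large within-class variance and hence a low F-score, yet the difference x2 - x1 separates the two classes almost perfectly.

```r
set.seed(42)
n  <- 200
t  <- rnorm(n, sd = 3)                         # large spread along the diagonal
x1 <- t + rnorm(n, sd = 0.2)
x2 <- t + ifelse(seq_len(n) <= n/2, 1, -1) + rnorm(n, sd = 0.2)
class <- factor(rep(c("A", "B"), each = n/2))

c(f_score(x1, class), f_score(x2, class))      # both low individually
plot(x1, x2, col = class)                      # yet jointly the classes separate
```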
The disadvantage of using F-score in feature selection
The F-score is a ratio of two quantities: F = F1/F2, where F1 is the variability between groups and F2 is the variability within the groups. In other words, a high F value (leading to a significant p-value, depending on your alpha) means that at least one of your groups is significantly different from the rest, but it doesn't tell you which group. Typically you select the features that return high F-values and use those for further analysis.
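As a sketch of how this is used in practice, per-feature F-values can be read off a one-way ANOVA of each feature on the class label (shown here on the built-in iris data purely for illustration):

```r
## F-value of each feature for separating the classes
f_vals <- sapply(iris[, 1:4], function(x) {
  anova(lm(x ~ iris$Species))[1, "F value"]
})
sort(f_vals, decreasing = TRUE)   # rank features by F-value
```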
How to set the optimal number of simulations
If you're using the simulations to try to estimate something, then you'd seek a sufficient number of replicates to achieve the desired precision. It's hard to see how one could make any general statement about the minimal number. It depends on the variation among replicates. There are cases where 5 might be sufficient, and others where 100,000 are necessary.
How to set the optimal number of simulations
You can use Wald's sequential probability ratio test (SPRT). Suppose that your simulation is testing a null hypothesis---the primary null hypothesis. In testing the primary null, we use a level $p_0=0.05$ test. The interesting part is recognizing that the p-value given by your simulations is itself random. So we want to form a test that this p-value is really less than the level of our test. This leads to the ancillary null hypothesis that the p-value of the primary test is greater than the desired level of that test---that is, we evaluate the null hypothesis that the true p-value of your simulations is greater than the level of your test, based upon the sample of simulations that you have done. (The primary and ancillary language is, as far as I know, my own; I don't know that there are standard names for these things.)

The level for this ancillary test is chosen to be $\alpha=0.001$. That is, we want to incorrectly conclude that the p-value of the simulations is small only 1 time in 1,000. We choose a Type II error rate of $\beta=0.01$ (i.e., power $1-\beta=0.99$) in detecting a p-value of $p_1=0.045$ for the main hypothesis. That is, when the true p-value is 0.045, we want to incorrectly accept the null only 1 time in 100. (Of course, all these probabilities are suggestions for concreteness; you need to think about what values would be appropriate for your context.)

In the sequential procedure, we obtain a single observation and determine whether it is a "success" or a "failure." It is a success if it is more extreme than our critical value for hypothesis testing---it is a success if it is evidence against the null hypothesis. We count the number of successes after $m$ observations, thereby defining $T_m = \sum_{i=1}^m{X_i}$. Using this count, we calculate the probability ratio $$\begin{equation*} \frac{p_{1m}}{p_{0m}} \equiv \frac{p_1^{T_m}(1-p_1)^{m-T_m}}{p_0^{T_m}(1-p_0)^{m-T_m}}. \end{equation*}$$ This is the likelihood ratio of the p-value that we want power against and the level of the test of our main hypothesis.

Wald provides a stopping rule based upon this probability ratio. We conclude that the true p-value is below the level of our test ($p < p_0$) and reject our main null hypothesis if $$\begin{equation*} \frac{p_{1m}}{p_{0m}} \geq \frac{1-\beta}{\alpha}. \end{equation*}$$ We fail to reject our primary null hypothesis and conclude that $p > p_0$ if $$\begin{equation*} \frac{p_{1m}}{p_{0m}} \leq \frac{\beta}{1-\alpha}. \end{equation*}$$ Otherwise, we collect an additional observation and recalculate these ratios.

Hence, this test gives a stopping rule when performing simulations that depends upon the level of your test ($p_0$), the p-value that you want power against ($p_1$), the amount of power you want against that alternative (via $\beta$), and a measure of how certain you want to be in drawing conclusions from your simulations ($\alpha$). The number of simulations required depends upon the true p-value of your simulations; the further this is from the level of your test, the fewer simulations you'll need to perform. In simulations that I've done of this procedure, it took about 16,000 simulations to reach a conclusion when the true p-value was 0.05, 1,700 when the p-value was 0.01, and 800 when the true p-value was 0.10, using the values of the parameters that I gave above. If this is more than you can handle, you can change the values of the parameters that I give ($\alpha$, for example). Lastly, I'll just note that this is a non-parametric approach to finding the stopping point.
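A minimal R sketch of the stopping rule with the parameter values used above (succ is the number of "successes" among the m simulations run so far; the example call at the end uses made-up counts):

```r
sprt_decision <- function(succ, m, p0 = 0.05, p1 = 0.045,
                          alpha = 0.001, beta = 0.01) {
  ## log of the likelihood ratio p1^succ (1-p1)^(m-succ) / (p0^succ (1-p0)^(m-succ))
  llr <- succ * log(p1 / p0) + (m - succ) * log((1 - p1) / (1 - p0))
  if (llr >= log((1 - beta) / alpha)) {
    "reject the primary null (p < p0)"
  } else if (llr <= log(beta / (1 - alpha))) {
    "fail to reject the primary null (p > p0)"
  } else {
    "keep simulating"
  }
}

sprt_decision(succ = 40, m = 1000)
```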
How to set the optimal number of simulations
It very much depends on what you're trying to simulate. We'd need more details about your simulation. My answer, when it comes down to it, is "as many as your computer can handle in the time you have". That admittedly isn't a great criterion. If you're trying to simulate a distribution or obtain an empirical confidence interval, my instinct is at least 10,000. For other questions, it's very different. In terms of diagnostics for "have I done enough", I generally wait until two things have occurred: (1) the distribution of the random variable being simulated begins to resemble the distribution it's being drawn from---until that happens, you haven't really had a chance to fully explore the potential of the simulation; and (2) as you mention in your comment, the change between each new realization drops to 0. However, I don't do this by asking "if I add another, what happens", but instead grossly overshoot my gut feeling, then trim backwards if it's clear it was unneeded, in future simulations. I don't want to say "adding 1 didn't make a difference" in case it was drawn from near the mean anyway (a very likely scenario). I'm far more comfortable saying "adding 1,000 didn't make a difference".
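One way to make the "overshoot and check" idea concrete is to track the running estimate and see whether another large block of draws still moves it; a toy R sketch (the normal draws stand in for whatever quantity your simulation produces):

```r
set.seed(1)
draws   <- rnorm(50000, mean = 2, sd = 5)     # stand-in for simulation output
running <- cumsum(draws) / seq_along(draws)   # estimate after each replication
running[c(1000, 10000, 20000, 50000)]         # does another block still move it?
```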
How to set the optimal number of simulations
People finding this question may find this article useful. It goes into some detail on how to calculate the number of simulations, as well as other facets of setting up a simulation study. It mentions that you can calculate the size $B$ with: $$B = \left(\frac{Z_{1-\alpha/2}\,\sigma}{\delta}\right)^2$$ where $Z_{1-\alpha/2}$ is the $1-\alpha/2$ quantile of the standard normal distribution, $\sigma^2$ is the variance of the parameter of interest, and $\delta$ is the specified level of accuracy that one wants to achieve. As a whole, the article is very useful for reviewing the quality of your design. Burton, A., Altman, D. G., Royston, P. and Holder, R. L. (2006), The design of simulation studies in medical statistics. Statist. Med., 25: 4279-4292. doi:10.1002/sim.2673 https://onlinelibrary.wiley.com/doi/abs/10.1002/sim.2673
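A small R sketch of that calculation (the values of $\alpha$, $\sigma$ and $\delta$ below are made up for illustration):

```r
alpha <- 0.05; sigma <- 1.2; delta <- 0.05
B <- ceiling((qnorm(1 - alpha/2) * sigma / delta)^2)   # required replications
B
```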
Can I delete excessive number of multivariate outliers, like over 10% in sample?
It's hard to see how 10% of the data could be called outlying. There's nothing that says you can't omit them, as long as you say clearly exactly what you did. But this particular instance seems a bit extreme. When it comes to outliers, I first ask: are they errors? If they're errors, I'd want to fix them; if I couldn't fix them, I'd be reasonably comfortable omitting them (though I'd worry about bias). If they seem not to be errors (or there's no way to tell), I'd ask: do they affect the results? If omitting them gives the same answer as not omitting them, I'd be happy and move on. If it does matter, I would look for a more robust method of analysis. I would also look more closely at your method for identifying outliers: is it making some sort of assumption that is clearly wrong? Most importantly, I'd look at lots and lots of different plots of the data, to see what it is that is leading those 10% of points to be called outliers, and whether that seems at all reasonable (though I can't see how it could be).
Can I delete excessive number of multivariate outliers, like over 10% in sample?
In addition to @Karl Broman's excellent point, I'm curious how many variables there are. You could be running into the "curse of dimensionality". Also, I would NOT delete outliers just because of some arbitrary threshold. You haven't said what it is you are studying, but often the outliers are where the interest is. And I strongly agree with @Karl's point about looking at graphs first - LOTS of graphs.
Can I delete excessive number of multivariate outliers, like over 10% in sample?
While the above topics are interesting, with 171 items I think validity is going to be a concern that overrides statistical ones. There's a real risk that people are going to answer mechanically, resulting in straightlining or in a very large initial factor that represents a halo or horn effect. I think your team should be able to use non-statistical criteria to trim down the survey to a more manageable level that will make it more worthy of the statistical analyses you want to do.
Working with correlation coefficients
"Can I say .8978 is the strongest relationship between shopping habits and weight gain?" Descriptively, you can say that it is the strongest relationship. Whether it is significantly stronger than the other two depends on your sample size. There's an online calculator for that. "Based on the difference in the coefficients, can I say that there is a difference in the shopping habits and weight gain of the three age groups?" That's the same statistical question as above. Test each pair of correlations for the significance of the difference. As you perform three tests, you might want to think about a correction of the $\alpha$ level. Another possibility, elaborated here, would be to add age group as a dummy-coded variable into a regression analysis. "Finally, can I just add the three coefficients and divide by three to come up with an average?" No. To get an average correlation you have to do an $r$-to-$Z$ transformation (Fisher's $Z$), average these transformed values, and back-transform the average $Z$ to an $r$ again. For the transformation, there are several online calculators.
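A short R sketch of the last point (the two extra correlations below are placeholders; atanh is the $r$-to-$Z$ transform and tanh the back-transform):

```r
r <- c(0.8978, 0.65, 0.72)     # the three age-group correlations (last two made up)
tanh(mean(atanh(r)))           # back-transformed average correlation
```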
Working with correlation coefficients
Averaging correlation coefficients is a meaningless operation. Correlation is $$\rho = \frac{\mbox{Cov}[X,Y]}{\sqrt{\mbox{Var}[X]\mbox{Var}[Y]}}.$$ You cannot even average the components of it (the covariance and two variances), unless the means of all groups on both variables are the same. If they are not, your population variance/covariance will be larger than/different from the (weighted) sum of variances/covariances due to between-group differences.
Relationships between two variables
Normality seems to be strongly violated, at least by your y variable. I would log-transform y to see if that cleans things up a bit. Then fit a regression to log(y) ~ x. The formula the regression returns will be of the form $\log(y) = \alpha + \beta x$, which you can transform back to the original scale by $y = \exp(\alpha + \beta x)$.
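A minimal R sketch with simulated data (the true coefficients 0.5 and 0.2 are arbitrary):

```r
set.seed(1)
dat   <- data.frame(x = runif(100, 0, 10))
dat$y <- exp(0.5 + 0.2 * dat$x + rnorm(100, sd = 0.3))   # skewed toy response

fit  <- lm(log(y) ~ x, data = dat)
coef(fit)                      # estimates of alpha and beta
yhat <- exp(predict(fit))      # back-transform fitted values to the original scale
```

Note that with normal errors on the log scale, exponentiating the fitted log-scale mean targets the conditional median of y rather than its mean.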
Relationships between two variables
Another solution to your problem (without transforming the variables) is regression with an error distribution other than Gaussian, for example the Gamma or the skewed Student-t. The Gamma is in the GLM family, so there is a lot of software to fit a model with this error distribution (a short sketch follows).
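A minimal R sketch of a Gamma GLM with a log link on simulated data (the coefficients 0.5 and 0.2 are arbitrary):

```r
set.seed(1)
x <- runif(200, 0, 10)
y <- rgamma(200, shape = 2, rate = 2 / exp(0.5 + 0.2 * x))   # mean = exp(0.5 + 0.2 x)

gfit <- glm(y ~ x, family = Gamma(link = "log"))
coef(gfit)    # intercept and slope recovered on the log-mean scale
```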
Relationships between two variables
What you are looking for is called regression; there are a lot of methods for doing it, both statistical and machine-learning ones. If you want to find f, you must use statistics; in that case you must first assume that f has some form, like $f: y = a x + b$, and then use some regression method to fit the parameters. The plot suggests there are a lot of outliers (elements that do not follow f(x)); you may need robust regression to limit their influence.
Relationships between two variables
And just eyeballing the data, you are probably going to want to transform it, as (at least to me) it looks skewed. Looking at histograms of the two variables should suggest which transforms may be beneficial.
Relationships between two variables
I agree with the suggestions about running a regression, possibly with log(y) as the outcome variable or some other suitable transformation. I just wanted to add one comment: if you are reporting the bivariate association, you might prefer (a) to correlate log(x) and log(y), or (b) Spearman's rho, which correlates the ranks of the two variables.
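In R both options are one-liners; a toy sketch with made-up positive data:

```r
set.seed(1)
x <- exp(rnorm(100)); y <- x^1.5 * exp(rnorm(100, sd = 0.5))   # toy positive data

cor(log(x), log(y))               # (a) Pearson correlation on the log scale
cor(x, y, method = "spearman")    # (b) Spearman's rho (rank-based)
```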
Relationships between two variables
Try a bivariate robust regression (see http://cran.r-project.org/web/packages/rrcov/vignettes/rrcov.pdf for an intro). If your data points are all positive, you might want to try to regress log(y) on log(x). Note that log() is not a substitute for a robust regression, but it sometimes makes the results more interpretable.
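One readily available robust option in R is an M-estimator via MASS::rlm (the rrcov package linked above focuses more on robust location/scatter estimators); a toy sketch with deliberately contaminated data:

```r
library(MASS)
set.seed(1)
x <- runif(200, 1, 10)
y <- exp(0.5 + 0.2 * x + rnorm(200, sd = 0.3))
y[1:20] <- y[1:20] * 30                   # contaminate with some wild outliers

coef(rlm(log(y) ~ log(x)))                # robust fit: outliers are downweighted
coef(lm(log(y) ~ log(x)))                 # ordinary least squares, for comparison
```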
Relationships between two variables
Many have already made excellent suggestions regarding transforming the variables and using robust regression methods. But when looking at the scatter plot, I observe two separate data sets. One set has a very strong linear relationship where the correlation is a lot higher than the overall 0.6, and visually it looks like Y = 0.13X; so when X = 15,000, Y is around 2,000 or so. Thus, a regression line with a similar slope would fit the vast majority of the data points really well. Then you have a second data set of 300 data points that are wild, essentially random outliers. I would focus on those 300 outliers. Can you explain them? Are there reasons why they are so far off the regression line? Are those data points only a tiny fraction of your whole data set? Are they material events you need to keep for your study? Or can you afford to take them out? If you can take them out, you may have a pretty strong regression with a high R-squared. You just have to accept that a small percentage of the time things go wild and your regression model will be off. But that's the truth of any model you build. If you have to keep those 300 outliers in your overall data set, they will materially affect your regression. You will end up with a regression model that does not fit the majority of your data points well. And it won't fit the outliers either, because they are random and won't fit any regression line.
Relationships between two variables
Like the others have said, some sort of transformation is recommended. Your data seem highly clustered, and could be roughly linear, but it's difficult to tell with all the other points around. Others have suggested trying a log transformation, but it might also be a good idea to try a Box-Cox transformation, which estimates a power parameter $\lambda$ for the response; if the estimated $\lambda$ is close to 0, then the log transform is the best choice. All software packages that I know of allow you to do Box-Cox. In R, it's in the MASS package (see the sketch below). Here's some information about that: Doing Box-Cox Transformations in R. That's not going to give you a perfectly linear fit, but it'll probably make the interpretation of your data a little easier.
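A minimal sketch of that workflow with MASS::boxcox on simulated data (the data-generating coefficients are arbitrary; the response is generated on the log scale, so the profile should peak near $\lambda = 0$):

```r
library(MASS)
set.seed(1)
x <- runif(100, 0, 10)
y <- exp(0.5 + 0.2 * x + rnorm(100, sd = 0.3))

bc <- boxcox(lm(y ~ x), lambda = seq(-1, 1, 0.05))   # profiles and plots lambda
bc$x[which.max(bc$y)]                                # lambda near 0 -> use log(y)
```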
Relationships between two variables
Like the others have said, some sort of transformation is recommended. Your data seems highly clustered, and could be roughly linear, but it's difficult to tell with all the other points around it. Ot
Relationships between two variables Like the others have said, some sort of transformation is recommended. Your data seems highly clustered, and could be roughly linear, but it's difficult to tell with all the other points around it. Others have suggested trying a log transformation, but it might also be a good idea to try a Box-Cox Transformation. If the resulting exponent it tells you to multiply by is 0, then a log transform is the best. All software packages that I know of allow you to do Box-Cox. In R, it's in the MASS package. Here's some information about that: Doing Box-Cox Transformations in R That's not going to give a you a perfectly linear fit, but it'll probably make the interpretation of your data a little easier.
Relationships between two variables Like the others have said, some sort of transformation is recommended. Your data seems highly clustered, and could be roughly linear, but it's difficult to tell with all the other points around it. Ot
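The answer points to MASS in R; as a hedged, language-agnostic sketch, here is the same check in Python using scipy.stats.boxcox on made-up, strictly positive data. An estimated lambda near 0 suggests a plain log transform; near 1 suggests no transform is needed.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
y = np.exp(rng.normal(loc=2.0, scale=0.5, size=500))   # skewed, strictly positive data

y_transformed, lam = stats.boxcox(y)                    # returns transformed data and lambda
print("estimated Box-Cox lambda: %.3f" % lam)
if abs(lam) < 0.1:
    y_alt = np.log(y)                                   # lambda near 0 -> use the log transform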
51,725
Gradient Descent and Learning Rate
Consider the convex function $f(x) = |x|$. Gradient descent can bounce back and forth forever: the gradient is always either $+1$ or $-1$ (away from $0$), so with learning rate $\lambda$, once the current iterate satisfies $x\in(0, \lambda)$, the iterates alternate between $x$ and $x-\lambda$ indefinitely. If, as another example, $f(x) = x^2$, then gradient descent with a small enough learning rate converges towards the minimum without ever attaining it exactly, though you would not observe this because of the finite resolution of floating-point numbers. Note that there are various extensions of ordinary gradient descent, from basic ones with decreasing learning rates up to more sophisticated ones like Adam or Nadam. If there are several local minima, gradient descent can also get caught in a local minimum without reaching the global one, but then your presumption of convexity would be violated.
Gradient Descent and Learning Rate
Consider the convex function $f(x) = |x|$. Then gradient descent would bounce back and forth forever: since the gradient is always either $+1$ or $-1$, with learning rate $\lambda$ and if $x\in(0, \la
Gradient Descent and Learning Rate Consider the convex function $f(x) = |x|$. Then gradient descent would bounce back and forth forever: since the gradient is always either $+1$ or $-1$, with learning rate $\lambda$ and if $x\in(0, \lambda)$, then $x$ would alternate between $x$ and $x-\lambda$. If, as another example, $f(x) = x^2$, then gradient descent would converge to the minimum without attaining it. But you would not observe it because of the finite resolution of floats. Note that there are various extensions of ordinary gradient descent, from more basic ones with decreasing learning rates up to more sophisticated ones like Adam or Nadam. If there are several local minima, gradient descent can be caught in local minima without reaching the global one. But then your presumption of convexity would be violated.
Gradient Descent and Learning Rate Consider the convex function $f(x) = |x|$. Then gradient descent would bounce back and forth forever: since the gradient is always either $+1$ or $-1$, with learning rate $\lambda$ and if $x\in(0, \la
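A small numeric sketch of the two cases described above, with an arbitrary learning rate of 0.3: fixed-step gradient descent oscillates on f(x) = |x| once the iterate falls inside (0, lambda), but contracts steadily toward 0 on f(x) = x^2.

import numpy as np

def gd(grad, x0, lr, steps):
    # plain gradient descent with a fixed learning rate
    x = x0
    path = [x]
    for _ in range(steps):
        x = x - lr * grad(x)
        path.append(x)
    return np.array(path)

lr = 0.3
abs_path = gd(lambda x: np.sign(x), x0=1.0, lr=lr, steps=10)
sq_path = gd(lambda x: 2 * x, x0=1.0, lr=lr, steps=10)

print("f(x)=|x| :", np.round(abs_path, 3))   # ends up bouncing between 0.1 and -0.2
print("f(x)=x^2 :", np.round(sq_path, 3))    # shrinks by a factor (1 - 2*lr) each step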
51,726
Gradient Descent and Learning Rate
With arbitrary precision numbers, we can't expect that the $k$th iterate of gradient descent will ever exactly equal the optimal value $f(x^*)$. In practice on a computer with floating-point numbers, we may actually get exact convergence. In terms of the rate of convergence, there are two theorems that can be generally applied in common cases.
Fixed step size and convex, $L$-Lipschitz gradient function
The first theorem says that if we have a convex function with $L$-Lipschitz-continuous gradient, we get convergence if the step size is at most $1/L$, and the rate of convergence is $\mathcal{O}(1/k)$ for $k$ iterates. Formally, given: A function $f: \mathbb{R}^n \to \mathbb{R}$ which is convex, differentiable and with $L$-Lipschitz-continuous gradient, so $\|\nabla f(x)-\nabla f(y)\|_{2} \leq L\|x-y\|_{2}$ for all $x$, $y$. We do $k$ steps of gradient descent with fixed step size $t \leq 1/L$ to obtain a point $x^k$. Then the optimality gap satisfies $$f\left(x^{k}\right)-f\left(x^{*}\right) \leq \frac{\left\|x^{(0)}-x^{*}\right\|_{2}^{2}}{2 t k}$$ As an example, the quadratic function $f(x) = x^2$ has a 2-Lipschitz gradient, so a fixed step size $t \leq \frac12$ will converge.
Convex $L$-Lipschitz non-differentiable function
Gradient descent is not a well-defined algorithm on a non-differentiable function, since it's not clear what to do if we have an iterate where $f$ doesn't have a gradient. A useful extension of gradient descent is subgradient descent, where we pick a subgradient if the function doesn't have a gradient at that point. The second theorem says that for a convex but non-differentiable function $f$ with the function itself having Lipschitz constant $G$, i.e. $\|f(x)-f(y)\|_{2} \leq G\|x-y\|_{2}$ for all $x$, $y$, then doing subgradient descent with fixed step size $t$ will result in convergence to a final iterate with $$\lim _{k \rightarrow \infty} f\left(x^{k}\right) \leq f\left(x^{*}\right)+G^{2} \frac{t}{2}$$ In other words, a fixed step size will not necessarily result in convergence in the limit. However, if we pick a step size $t_i$ that decreases at the right rate, so that $\sum_{i=1}^\infty t_i^2 < \infty$ and $\sum_{i=1}^\infty t_i = \infty$, we do indeed still get convergence with $$\lim _{k \rightarrow+\infty} f\left(x^{k}\right)=f\left(x^{*}\right)$$
Gradient Descent and Learning Rate
With arbitrary precision numbers, we can't expect that the $k$th iterate of gradient descent will ever exactly equal the optimal value $f(x^*)$. In practice on a computer with floating-point numbers,
Gradient Descent and Learning Rate With arbitrary precision numbers, we can't expect that the $k$th iterate of gradient descent will ever exactly equal the optimal value $f(x^*)$. In practice on a computer with floating-point numbers, we may actually get exact convergence. In terms of the rate of convergence, there are two theorems that can be generally applied in common cases. Fixed step size and convex, $L$-Lipschitz gradient function The first theorem says that if we have a convex function with $L$-Lipschitz-continuous gradient, we get convergence if the step size is less than $1/L$, and the rate of convergence is $\mathcal{O}(1/k)$ for $k$ iterates. Formally, given: A function $f: \mathbb{R}^n \to \mathbb{R}$ which is convex, differentiable and with $L$-Lipschitz-continuous gradient, so $\|\nabla f(x)-\nabla f(y)\|_{2} \leq L\|x-y\|_{2}$ for all $x$, $y$. We do $k$ steps of gradient descent with fixed step size $t \leq 1/L$ to obtain a point $x^k$. Then the optimality gap satisfies $$f\left(x^{k}\right)-f\left(x^{*}\right) \leq \frac{\left\|x^{(0)}-x^{*}\right\|_{2}^{2}}{2 t k}$$ As an example, the quadratic function $f(x) = x^2$ has a 2-Lipschitz gradient, so a fixed step size $t \leq \frac12$ will converge. Convex $L$-Lipschitz non-differentiable function Gradient descent is not a well-defined algorithm on a non-differentiable function, since it's not clear what to do if we have an iterate where $f$ doesn't have a gradient. A useful extension of gradient descent is subgradient descent, where we pick a subgradient if the function doesn't have a gradient at that point. The second theorem says that for a convex but non-differentiable function $f$ with the function itself having Lipschitz constant $G$, i.e. $\|f(x)-f(y)\|_{2} \leq G\|x-y\|_{2}$ for all $x$, $y$, then doing subgradient descent with fixed step size $t$ will result in convergence to a final iterate with $$\lim _{k \rightarrow \infty} f\left(x^{k}\right) \leq f\left(x^{*}\right)+G^{2} \frac{t}{2}$$ In other words, a fixed step size will not necessarily result in convergence in the limit. However, if we pick a step size $t_i$ that decreases at the right rate, so that $\sum_{k=1}^\infty t_i^2 \leq \infty$ and $\sum_{k=1}^\infty t_i = \infty$, we do indeed still get convergence with $$\lim _{k \rightarrow+\infty} f\left(x^{k}\right)=f\left(x^{*}\right)$$
Gradient Descent and Learning Rate With arbitrary precision numbers, we can't expect that the $k$th iterate of gradient descent will ever exactly equal the optimal value $f(x^*)$. In practice on a computer with floating-point numbers,
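A quick numerical check of the first bound above, under the assumption f(x) = x^2 (so L = 2) and an arbitrary fixed step t = 0.25 <= 1/L: the optimality gap after k steps should stay below ||x0 - x*||^2 / (2 t k).

import numpy as np

t = 0.25                # fixed step size, t <= 1/L with L = 2 for f(x) = x^2
x0, x_star = 5.0, 0.0

x = x0
for k in range(1, 21):
    x = x - t * 2 * x                        # gradient step for f(x) = x^2
    gap = x**2 - x_star**2                   # f(x_k) - f(x*)
    bound = (x0 - x_star)**2 / (2 * t * k)   # the O(1/k) bound from the theorem
    assert gap <= bound + 1e-12
print("bound holds for all 20 iterations")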
51,727
Why are my test data R-squared's identical despite using different training data?
This is a simple linear regression, so the predictions are a linear function of the x values. The correlation of y with a linear function of x is the same as the correlation of y with x; the coefficients of the function don't matter. Exceptions to this rule are slopes of zero (where correlation doesn't exist, because the sd of the predictions is zero), and negative slopes, where the correlation will change sign. But you're looking at squared correlation so the sign doesn't matter, and it's extremely unlikely to get a fitted slope that is exactly zero.
Why are my test data R-squared's identical despite using different training data?
This is a simple linear regression, so the predictions are a linear function of the x values. The correlation of y with a linear function of x is the same as the correlation of y with x; the coeffici
Why are my test data R-squared's identical despite using different training data? This is a simple linear regression, so the predictions are a linear function of the x values. The correlation of y with a linear function of x is the same as the correlation of y with x; the coefficients of the function don't matter. Exceptions to this rule are slopes of zero (where correlation doesn't exist, because the sd of the predictions is zero), and negative slopes, where the correlation will change sign. But you're looking at squared correlation so the sign doesn't matter, and it's extremely unlikely to get a fitted slope that is exactly zero.
Why are my test data R-squared's identical despite using different training data? This is a simple linear regression, so the predictions are a linear function of the x values. The correlation of y with a linear function of x is the same as the correlation of y with x; the coeffici
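A short sketch of the point above, using arbitrary made-up coefficients: the squared correlation between y and any non-degenerate linear function a + b*x equals the squared correlation between y and x, so a test-set R-squared computed as a squared correlation does not depend on which training set produced the fitted slope and intercept.

import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=200)
y = 2 * x + rng.normal(size=200)

for a, b in [(0.0, 1.0), (3.0, -0.5), (-10.0, 7.2)]:    # arbitrary "fitted" lines
    pred = a + b * x
    r2 = np.corrcoef(y, pred)[0, 1] ** 2
    print(a, b, round(r2, 6))                           # same value every time
print("cor(y, x)^2 =", round(np.corrcoef(y, x)[0, 1] ** 2, 6))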
51,728
Why are my test data R-squared's identical despite using different training data?
This is to help you understand what user2554330 means. Let $x$ and $y$ be test data, and a predicted line be $\hat{y} = \hat{a} + \hat{b}x$. Then \begin{equation} \begin{split} \textrm{cor}(y, \hat{y}) &= \frac{\textrm{cov}(y, \hat{y})}{\sqrt{\textrm{var}(y)}\sqrt{\textrm{var}(\hat{y})}}\\ &= \frac{\textrm{cov}(y, \hat{a} + \hat{b}x)}{\sqrt{\textrm{var}(y)}\sqrt{\textrm{var}(\hat{a} + \hat{b}x)}}\\ &= \frac{\hat{b}\textrm{cov}(y, x)}{\sqrt{\textrm{var}(y)}\sqrt{\textrm{var}(x)}|\hat{b}|}\\ &= \frac{\hat{b}}{|\hat{b}|}\textrm{cor}(y, x) \end{split} \end{equation} As a result, $R^2 = [\textrm{cor}(y, \hat{y})]^2 = \frac{\hat{b}^2}{|\hat{b}|^2}[\textrm{cor}(y, x)]^2 = [\textrm{cor}(y, x)]^2$. Note that the R-squared on the test data is independent of estimate of intercept and slope. This only holds for simple linear regression. As soon as your model becomes $y = a + b_1x_1 + b_2x_2$, the R-squared will depend on estimated coefficients. Anyway, as I warned elsewhere, R-squared is not always appropriate for assessing out-of-sample prediction. You really want to compare mean prediction squared error, i.e., mean((pred_small - xte$y) ^ 2) and mean((pred_big - xte$y) ^ 2).
Why are my test data R-squared's identical despite using different training data?
This is to help you understand what user2554330 means. Let $x$ and $y$ be test data, and a predicted line be $\hat{y} = \hat{a} + \hat{b}x$. Then \begin{equation} \begin{split} \textrm{cor}(y, \hat{y}
Why are my test data R-squared's identical despite using different training data? This is to help you understand what user2554330 means. Let $x$ and $y$ be test data, and a predicted line be $\hat{y} = \hat{a} + \hat{b}x$. Then \begin{equation} \begin{split} \textrm{cor}(y, \hat{y}) &= \frac{\textrm{cov}(y, \hat{y})}{\sqrt{\textrm{var}(y)}\sqrt{\textrm{var}(\hat{y})}}\\ &= \frac{\textrm{cov}(y, \hat{a} + \hat{b}x)}{\sqrt{\textrm{var}(y)}\sqrt{\textrm{var}(\hat{a} + \hat{b}x)}}\\ &= \frac{\hat{b}\textrm{cov}(y, x)}{\sqrt{\textrm{var}(y)}\sqrt{\textrm{var}(x)}|\hat{b}|}\\ &= \frac{\hat{b}}{|\hat{b}|}\textrm{cor}(y, x) \end{split} \end{equation} As a result, $R^2 = [\textrm{cor}(y, \hat{y})]^2 = \frac{\hat{b}^2}{|\hat{b}|^2}[\textrm{cor}(y, x)]^2 = [\textrm{cor}(y, x)]^2$. Note that the R-squared on the test data is independent of estimate of intercept and slope. This only holds for simple linear regression. As soon as your model becomes $y = a + b_1x_1 + b_2x_2$, the R-squared will depend on estimated coefficients. Anyway, as I warned elsewhere, R-squared is not always appropriate for assessing out-of-sample prediction. You really want to compare mean prediction squared error, i.e., mean((pred_small - xte$y) ^ 2) and mean((pred_big - xte$y) ^ 2).
Why are my test data R-squared's identical despite using different training data? This is to help you understand what user2554330 means. Let $x$ and $y$ be test data, and a predicted line be $\hat{y} = \hat{a} + \hat{b}x$. Then \begin{equation} \begin{split} \textrm{cor}(y, \hat{y}
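Following the closing advice above, here is a hypothetical Python sketch of comparing two fitted simple regressions on held-out data by mean squared prediction error rather than by squared correlation; the intercepts and slopes below are invented stand-ins for models fit on two different training sets.

import numpy as np

rng = np.random.default_rng(7)
x_te = rng.normal(size=100)
y_te = 1.5 * x_te + rng.normal(size=100)

# two hypothetical fitted lines from two different training sets
pred_small = 0.2 + 1.1 * x_te
pred_big = -0.1 + 1.6 * x_te

def r2(p):
    return np.corrcoef(y_te, p)[0, 1] ** 2

def mse(p):
    return np.mean((p - y_te) ** 2)

print("R^2:", round(r2(pred_small), 4), round(r2(pred_big), 4))    # identical
print("MSE:", round(mse(pred_small), 4), round(mse(pred_big), 4))  # differ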
51,729
Are the RandomForest and Ranger libraries the same?
They are different implementations of the same algorithm. As random forest utilises bagging and bagging is inherently stochastic, we cannot guarantee that they will give exactly the same result. That said, if one downright errs, this is a coding issue rather than a statistical one.
Are the RandomForest and Ranger libraries the same?
They are different implementations of the same algorithm. As random forest utilises bagging and bagging is inherently stochastic, we cannot guarantee that they will give exactly the same result. That
Are the RandomForest and Ranger libraries the same? They are different implementations of the same algorithm. As random forest utilises bagging and bagging is inherently stochastic, we cannot guarantee that they will give exactly the same result. That said, if one downright errs, this is a coding issue rather than a statistical one.
Are the RandomForest and Ranger libraries the same? They are different implementations of the same algorithm. As random forest utilises bagging and bagging is inherently stochastic, we cannot guarantee that they will give exactly the same result. That
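A small illustration of the "inherently stochastic" point, using scikit-learn's RandomForestClassifier purely as a stand-in (the same reasoning applies to randomForest and ranger): refitting with different seeds gives slightly different holdout results, while fixing the seed makes a given implementation reproducible.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for seed in [1, 2, 3]:
    rf = RandomForestClassifier(n_estimators=100, random_state=seed)
    rf.fit(X_tr, y_tr)
    print("seed", seed, "holdout accuracy:", round(rf.score(X_te, y_te), 3))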
51,730
Are the RandomForest and Ranger libraries the same?
The title of the paper introducing it is literally ranger: A Fast Implementation of Random Forests for High Dimensional Data in C++ and R (highlighting by myself). But as said by usεr11852, random forest is randomized and there may be implementational differences, so exactly the same results are not guaranteed.
Are the RandomForest and Ranger libraries the same?
The title of the paper introducing it is literally ranger: A Fast Implementation of Random Forests for High Dimensional Data in C++ and R (highlighting by myself). But as said by usεr11852, random f
Are the RandomForest and Ranger libraries the same? The title of the paper introducing it is literally ranger: A Fast Implementation of Random Forests for High Dimensional Data in C++ and R (highlighting by myself). But as said by usεr11852, random forest is randomized and there may be implementational differences, so exactly the same results are not guaranteed.
Are the RandomForest and Ranger libraries the same? The title of the paper introducing it is literally ranger: A Fast Implementation of Random Forests for High Dimensional Data in C++ and R (highlighting by myself). But as said by usεr11852, random f
51,731
Validity of AUC for binary categorical variables
The ROC curve is a statistic of ranks, so it's valid as long as the way you're sorting the data is meaningful. In its most common application, we're sorting according to the predicted probabilities produced by a model. This is meaningful, in the sense that we have the most likely events at one extreme and the least likely events at the other extreme. This is useful because each operating point on the curve tells you (1) how much of your outcome you capture at each threshold using the decision rule "alert if $\hat{p} > \text{threshold}$" and (2) how many false positives you capture with that same rule. The ROC AUC is the probability a randomly-chosen positive example is ranked more highly than a randomly-chosen negative example. When we're using ROC AUC to assess a machine learning model, we always want a higher AUC value, because we want our model to give positives a higher rank. On the other hand, if we built a model that had an out-of-sample AUC well below 0.5, we'd know that the model was garbage. In OP's example, OP demonstrated that the arbitrary choice of how they encoded the categorical data can reverse the meaning of AUC. In the initial post, OP wrote: AUC for sex to predict survived: 0.2331 but then edited to reverse how genders were sorted and found Edit: I have reversed the coding of sex to 0 and 1, so that the AUC now is 0.7669. The results are completely opposite. In the first case, we had an AUC of $c$, but in the second case, we had an AUC of $1-c$. This is an effective demonstration of why the choice of how you sort the categorical data is crucial! For this reason, I wouldn't recommend using AUC to interpret unordered data. This is usually where people will point out that you can reverse really bad predictions to get a really high AUC. This is true as far as it goes, but "Let's run 2 tests, fiddle with our data, and report the most favorable result" is not sound statistical practice. Your suggested procedure of reporting the larger of AUC and 1-AUC gives you a massive optimism bias. If your data has 3 or more categories and you impose an arbitrary order on them, you might need to test all permutations to get the highest AUC, not just reverse the ordering (reporting 1 - AUC is equivalent to reversing the ordering). An example is that the categories are "red," "green," and "blue" instead of "male" and "female." There's more than 2 ways to sort them, so simply reversing the order doesn't cover all possible permutations. In extrema, you may encounter categorical variables that uniquely identify each observational unit (e.g. national ID numbers, telephone numbers, geolocation coordinates, or similar information). The optimal sorting of these unique identifiers will have an AUC of 1 (put all the positives at the lowest rank), but it won't generalize because you won't know where new unique identifiers should be placed. If you’ve badly overfit a classifier, this method cheerfully reports a much higher AUC than you have in reality. Hypothesis tests will be bogus, because you’re choosing the most favorable statistic. On the other hand, a chi-squared-test does not give a different statistic if you change how you order your categories. It also works when you have 3 or more categories.
Validity of AUC for binary categorical variables
The ROC curve is a statistic of ranks, so it's valid as long as the way you're sorting the data is meaningful. In its most common application, we're sorting according to the predicted probabilities pr
Validity of AUC for binary categorical variables The ROC curve is a statistic of ranks, so it's valid as long as the way you're sorting the data is meaningful. In its most common application, we're sorting according to the predicted probabilities produced by a model. This is meaningful, in the sense that we have the most likely events at one extreme and the least likely events at the other extreme. This is useful because each operating point on the curve tells you (1) how much of your outcome you capture at each threshold using the decision rule "alert if $\hat{p} > \text{threshold}$" and (2) how many false positives you capture with that same rule. The ROC AUC is the probability a randomly-chosen positive example is ranked more highly than a randomly-chosen negative example. When we're using ROC AUC to assess a machine learning model, we always want a higher AUC value, because we want our model to give positives a higher rank. On the other hand, if we built a model that had an out-of-sample AUC well below 0.5, we'd know that the model was garbage. In OP's example, OP demonstrated that the arbitrary choice of how they encoded the categorical data can reverse the meaning of AUC. In the initial post, OP wrote: AUC for sex to predict survived: 0.2331 but then edited to reverse how genders were sorted and found Edit: I have reversed the coding of sex to 0 and 1, so that the AUC now is 0.7669. The results are completely opposite. In the first case, we had an AUC of $c$, but in the second case, we had an AUC of $1-c$. This is an effective demonstration of why the choice of how you sort the categorical data is crucial! For this reason, I wouldn't recommend using AUC to interpret unordered data. This is usually where people will point out that you can reverse really bad predictions to get a really high AUC. This is true as far as it goes, but "Let's run 2 tests, fiddle with our data, and report the most favorable result" is not sound statistical practice. Your suggested procedure of reporting the larger of AUC and 1-AUC gives you a massive optimism bias. If your data has 3 or more categories and you impose an arbitrary order on them, you might need to test all permutations to get the highest AUC, not just reverse the ordering (reporting 1 - AUC is equivalent to reversing the ordering). An example is that the categories are "red," "green," and "blue" instead of "male" and "female." There's more than 2 ways to sort them, so simply reversing the order doesn't cover all possible permutations. In extrema, you may encounter categorical variables that uniquely identify each observational unit (e.g. national ID numbers, telephone numbers, geolocation coordinates, or similar information). The optimal sorting of these unique identifiers will have an AUC of 1 (put all the positives at the lowest rank), but it won't generalize because you won't know where new unique identifiers should be placed. If you’ve badly overfit a classifier, this method cheerfully reports a much higher AUC than you have in reality. Hypothesis tests will be bogus, because you’re choosing the most favorable statistic. On the other hand, a chi-squared-test does not give a different statistic if you change how you order your categories. It also works when you have 3 or more categories.
Validity of AUC for binary categorical variables The ROC curve is a statistic of ranks, so it's valid as long as the way you're sorting the data is meaningful. In its most common application, we're sorting according to the predicted probabilities pr
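A sketch of the AUC-reversal point above on made-up data: flipping the arbitrary 0/1 encoding of a binary predictor turns an AUC of c into 1 - c, so the two always sum to 1.

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
sex = rng.integers(0, 2, size=1000)                          # arbitrary binary encoding
survived = rng.binomial(1, np.where(sex == 1, 0.74, 0.19))   # outcome depends on the predictor

auc = roc_auc_score(survived, sex)
auc_flipped = roc_auc_score(survived, 1 - sex)               # same data, reversed encoding
print(round(auc, 4), round(auc_flipped, 4), round(auc + auc_flipped, 4))  # last value is 1.0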
51,732
Validity of AUC for binary categorical variables
It's helpful to see that the ROC curve here isn't really a curve. Instead, you're effectively producing a model that says P(Survive|Male) = .18 and P(Survive|Female) = .74 (the averages in the data), and making predictions using a range of thresholds, e.g. prediction = 1 if p_survive > threshold, or 0 otherwise. You end up predicting everyone will survive for any threshold < .18, that all females and no males will survive for thresholds between .18 and .74, and that no one will survive with a threshold > .74. This should hopefully make it clear that calculating the AUC or drawing the ROC doesn't really provide any extra information here, since changing the threshold doesn't affect the predictions unless you set it to a daft value. However, it also shows that the AUC score you obtain is still a valid one.
           true_positives  false_positives
threshold
0.0                  1.00             1.00
0.1                  1.00             1.00
0.2                  0.68             0.15
0.3                  0.68             0.15
0.4                  0.68             0.15
0.5                  0.68             0.15
0.6                  0.68             0.15
0.7                  0.68             0.15
0.8                  0.00             0.00
0.9                  0.00             0.00
1.0                  0.00             0.00
Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# tdf is the Titanic data frame from the question, with 'sex' and 'survived' columns
p_male, p_female = [tdf.loc[tdf['sex'] == sex, 'survived'].mean() for sex in ['male', 'female']]
tdf['p_survived'] = np.where(tdf['sex'] == 'male', p_male, p_female)

thresholds = np.linspace(0, 1, 11)

def check_calibration(threshold, predicted_probs, outcome):
    prediction = 1 * (predicted_probs > threshold)
    return {
        'true_positives': prediction[outcome == 1].mean(),
        'false_positives': prediction[outcome == 0].mean()
    }

calibration = pd.DataFrame([
    check_calibration(thresh, tdf['p_survived'], tdf['survived'])
    for thresh in thresholds
]).fillna(0)
calibration.index = pd.Index(thresholds, name='threshold')
print(calibration.round(2))

calibration.plot()
plt.xlabel('Threshold (Predict "Survived" if P(Survived) > Threshold)')
plt.ylabel('True/False Positive Rate')
plt.title('Calibration')

plt.figure(figsize=(5, 5))
plt.plot(calibration['false_positives'], calibration['true_positives'])
plt.scatter(calibration['false_positives'], calibration['true_positives'])
plt.plot([0, 1], [0, 1], linestyle='dashed', color='k')
plt.xlabel('False Positives')
plt.ylabel('True Positives')
plt.title('ROC Curve')
Validity of AUC for binary categorical variables
It's helpful to see that the ROC curve here isn't really a curve. Instead, you're effectively producing a model that says P(Survive|Male) = .18 and P(Survive|Female) = .74 (the averages in the data),
Validity of AUC for binary categorical variables It's helpful to see that the ROC curve here isn't really a curve. Instead, you're effectively producing a model that says P(Survive|Male) = .18 and P(Survive|Female) = .74 (the averages in the data), and making predictions using a range of thresholds, e.g. prediction = 1 if p_survive > threshold, or 0 otherwise. You end up predicting everyone will survive for any threshold < .18, that all females and no males will survive for thresholds between .18 and .74, and that no one will survive with a threshold > .74. This should hopefully make it clear that calculating the AUC or drawing the ROC doesn't really provide any extra information here, since changing the threshold doesn't affect the predictions unless you set it to a daft value. However, it also shows that the AUC score you obtain is still a valid one. true_positives false_positives threshold 0.0 1.00 1.00 0.1 1.00 1.00 0.2 0.68 0.15 0.3 0.68 0.15 0.4 0.68 0.15 0.5 0.68 0.15 0.6 0.68 0.15 0.7 0.68 0.15 0.8 0.00 0.00 0.9 0.00 0.00 1.0 0.00 0.00 Code p_male, p_female = [tdf.loc[tdf['sex'] == sex, 'survived'].mean() for sex in ['male', 'female']] tdf['p_survived'] = np.where(tdf['sex'] == 'male', p_male, p_female) thresholds = np.linspace(0, 1, 11) def check_calibration(threshold, predicted_probs, outcome): prediction = 1 * (predicted_probs > threshold) return { 'true_positives' : prediction[outcome == 1].mean(), 'false_positives' : prediction[outcome == 0].mean() } calibration = pd.DataFrame([ check_calibration(thresh, tdf['p_survived'], tdf['survived']) for thresh in thresholds ]).fillna(0) calibration.index = pd.Index(thresholds, name = 'threshold') print(calibration.round(2)) accuracies.plot() plt.xlabel('Threshold (Predict "Survived" if P(Survived > Threshold))') plt.ylabel('True/False Positive Rate') plt.title('Calibration') plt.figure(figsize=(5,5)) plt.plot(accuracies['false_positives'], accuracies['true_positives']) plt.scatter(accuracies['false_positives'], accuracies['true_positives']) plt.plot([0,1], [0,1], linestyle = 'dashed', color = 'k') plt.xlabel('False Positives') plt.ylabel('True Positives') plt.title('ROC Curve')
Validity of AUC for binary categorical variables It's helpful to see that the ROC curve here isn't really a curve. Instead, you're effectively producing a model that says P(Survive|Male) = .18 and P(Survive|Female) = .74 (the averages in the data),
51,733
Validity of AUC for binary categorical variables
This approach isn't wrong, but it's not a very useful application of the ROC. The purpose of an ROC curve is to show model performance over a range of classification thresholds, and the AUC summarizes the quality of the model over all possible thresholds. With a two-class categorical predictor variable, you have only three possible choices, two of which are degenerate one-class models - you can classify everything as one class, or classify everything as the other class, or actually use the predictor variable to predict the outcome. The ROC curve consists of only three points: one at (0,0) (classify everything as the negative class), one at (1,1) (classify everything as the positive class), and one at the particular sensitivity/specificity of the actual useful model. Since you really only have one reasonable choice of "threshold", you can more directly summarize the model using sensitivity and specificity, rather than using AUC. Note that in this particular example, you've set the categories backwards. The AUC of a random classifier is 0.5, so if you find an AUC of less than 0.5, you're doing worse than random. This usually means that you should flip the ordering of the classes. You've built a model that's good at getting the wrong answer, so you should actually classify as the opposite of whatever it says.
Validity of AUC for binary categorical variables
This approach isn't wrong, but it's not a very useful application of the ROC. The purpose of an ROC curve is to show model performance over a range of classification thresholds, and the AUC summarizes
Validity of AUC for binary categorical variables This approach isn't wrong, but it's not a very useful application of the ROC. The purpose of an ROC curve is to show model performance over a range of classification thresholds, and the AUC summarizes the quality of the model over all possible thresholds. With a two-class categorical predictor variable, you have only three possible choices, two of which are degenerate one-class models - you can classify everything as one class, or classify everything as the other class, or actually use the predictor variable to predict outcome. The ROC curve consists of only three points, one at (0,1), one at (1,0), and one at the particular sensitivity/specificity of the actual useful model. Since you really only have one reasonable choice of "threshold", you can more directly summarize the model using sensitivity and specificity, rather than using AUC. Note that in this particular example, you've set the categories backwards. The AUC of a random classifier is 0.5, so if you find an AUC of less than 0.5, you're doing worse than random. This usually means that you should flip the ordering of the classes. You've built a model that's good at getting the wrong answer, so you should actually classify as the opposite of whatever it says.
Validity of AUC for binary categorical variables This approach isn't wrong, but it's not a very useful application of the ROC. The purpose of an ROC curve is to show model performance over a range of classification thresholds, and the AUC summarizes
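To make the "three-point ROC" concrete: for a binary predictor, the trapezoidal AUC works out to (sensitivity + specificity) / 2. The sketch below checks that identity on simulated data against a library AUC computation; the probabilities used to generate the data are arbitrary.

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
pred = rng.integers(0, 2, size=2000)                   # a binary "classifier"
y = rng.binomial(1, np.where(pred == 1, 0.7, 0.2))     # outcome related to the predictor

sens = pred[y == 1].mean()        # true positive rate at the only useful threshold
spec = 1 - pred[y == 0].mean()    # true negative rate
print("(sens + spec) / 2 =", round((sens + spec) / 2, 4))
print("roc_auc_score     =", round(roc_auc_score(y, pred), 4))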
51,734
Validity of AUC for binary categorical variables
Just to clarify, the ROC curve plots the true positive rate against the false positive rate. Whether the target label is numerical or categorical is a matter of implementation, but it does not change the validity of the principles: you are still assessing how "good" (AUC) your model is at discriminating between two distributions. The higher the AUC, the better the trade-off between true positives and false positives you can get by adjusting the threshold. This is how AUC is interpreted as a measure of model performance; to my knowledge, AUC does not quantify the relationship between two variables.
Validity of AUC for binary categorical variables
Just to clarify, ROC curve means plotting how much True Positives you get compared to False Positives. Whether the target label is numerical or categorical is a matter of implementation but it does no
Validity of AUC for binary categorical variables Just to clarify, ROC curve means plotting how much True Positives you get compared to False Positives. Whether the target label is numerical or categorical is a matter of implementation but it does not change the validity of the principles, you are still assessing how "good" (AUC) your model is at discriminating between two distributions. The higher the AUC the higher the TP to FP ratio you can get by adjusting the threshold. This is how AUC is interpreted as a measure of model performance, to my knowledge AUC does not quantify the relationship between two variables.
Validity of AUC for binary categorical variables Just to clarify, ROC curve means plotting how much True Positives you get compared to False Positives. Whether the target label is numerical or categorical is a matter of implementation but it does no
51,735
What does $X=x|Y=y$ actually mean by itself?
Example: Say you have a group of men and women and know their handedness (left/right), as depicted in the table below $$\begin{array}{r|c|c | c} &\text{men}&\text{women} &\text{total}\\ \hline \text{left handed}&9&4&13\\\hline \text{right handed}&43&44&87\\\hline \text{total}&52&48&100 \end{array}$$ Say you pick a person at random out of this group; then there is a $13\%$ probability that they are left handed. But if you know that the person is a woman, then the probability is $4/48 \approx 8 \%$. To express this latter case, the probability of an event given another event or condition, one uses the vertical bar symbol $\vert$. $$P(X\vert Y) = \text{probability of event $X$ given/conditional on event $Y$}$$ So it is about both events $X$ and $Y$ happening. But, this is different from $P(X,Y)$, the probability that both $X$ and $Y$ happen. The probability of left handedness given that a person is a woman is not equal to $4 \%$, the probability that someone is both a woman and left handed. The expression $X\vert Y$ occurs within the probability operator $P()$. But you should not read all of the contents as a single event. That is, this is not how you must interpret it: "$P(\dots)$ is the probability of the event on the dots, so $P(X\vert Y)$ is the probability of the event $X\vert Y$." This $X\vert Y$ is not an event (as Henry noted in the comments). The vertical bar $\vert$ adds additional parameters to the probability operator and refers to conditions.
What does $X=x|Y=y$ actually mean by itself?
Example: Say you have a group of men and women and know their handedness (left/right). It is like depicted in the table below $$\begin{array}{r|c|c | c} &\text{men}&\text{women} &\text{total}\\ \hline
What does $X=x|Y=y$ actually mean by itself? Example: Say you have a group of men and women and know their handedness (left/right). It is like depicted in the table below $$\begin{array}{r|c|c | c} &\text{men}&\text{women} &\text{total}\\ \hline \text{left handed}&9&4&13\\\hline \text{right handed}&43&44&87\\\hline \text{total}&52&48&100 \end{array}$$ Say you pick randomly a person out of this group then it is $13\%$ probability that they are left handed. But if you know that the person is a woman, then the probability is $4/48 \approx 9 \%$. To express this latter case, the probability of an event, given another event or condition, one uses the vertical bar symbol $\vert$. $$P(X\vert Y) = \text{probability of event $X$ given/conditional on event $Y$}$$ So it is about both events $X$ and $Y$ happening. But, this is different from $P(X,Y)$, the probability that both $X$ and $Y$ are happening. The probability for left handedness given that a person is a woman, is not equal to $4 \%$ the probability that someone is a woman and left handed. The expression $X\vert Y$ occurs within the probability operator $P()$. But you should not read all the contents as a single event. So this is not how you must interpret it: "$P(\dots)$ is probability of the event on the dots. So $P(X\vert Y)$ is the probability of the event $X\vert Y$." This $X\vert Y$ is not an event (as Henry noted in the comments). The vertical bar $\vert$ adds additional parameters to the probability operator and refers to conditions.
What does $X=x|Y=y$ actually mean by itself? Example: Say you have a group of men and women and know their handedness (left/right). It is like depicted in the table below $$\begin{array}{r|c|c | c} &\text{men}&\text{women} &\text{total}\\ \hline
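The table above in a few lines of Python, to show the difference between P(left-handed | woman), P(left-handed, woman), and the marginal P(left-handed).

counts = {("left", "men"): 9, ("left", "women"): 4,
          ("right", "men"): 43, ("right", "women"): 44}
total = sum(counts.values())                               # 100 people in the group

p_left = (counts[("left", "men")] + counts[("left", "women")]) / total
p_left_and_woman = counts[("left", "women")] / total
p_woman = (counts[("left", "women")] + counts[("right", "women")]) / total
p_left_given_woman = p_left_and_woman / p_woman            # 4/48

print(p_left)                         # 0.13  (marginal)
print(p_left_and_woman)               # 0.04  (joint)
print(round(p_left_given_woman, 3))   # about 0.083 (conditional)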
51,736
What does $X=x|Y=y$ actually mean by itself?
The $|$ symbol in probability theory stands for “given”. You would most commonly see it used for conditional probability $P(Y|X)$, the probability of $Y$ given $X$. While it’s a slight abuse of notation, you could see something like $$ Y|X \sim \mathcal{N}(\mu, \sigma) $$ for $Y$ conditional on $X$ following a normal distribution. You would also see it used to show some properties of distributions, like conditional expectations $E[Y|X]$, or variance $\operatorname{Var}(Y|X)$, etc. Notice that something like $X|Y$ alone doesn’t make much sense. What would it be? “A random variable conditional on another random variable”? Conditioning is about the perspective you take when looking at the variable, not a property of the variable. You can “transform” conditional probability to joint, or marginal, or reverse it (Bayes' theorem) with just simple mathematical manipulations on the distributions.
What does $X=x|Y=y$ actually mean by itself?
The $|$ symbol in probability theory stands for “given”. You would most commonly see it used for conditional probability $P(Y|X)$, the probability of $X$ given $Y$. While it’s a slight abuse of notati
What does $X=x|Y=y$ actually mean by itself? The $|$ symbol in probability theory stands for “given”. You would most commonly see it used for conditional probability $P(Y|X)$, the probability of $X$ given $Y$. While it’s a slight abuse of notation, you could see something like $$ Y|X \sim \mathcal{N}(\mu, \sigma) $$ for $Y$ conditionally on $X$ following normal distribution. You would also see it to show some properties of distributions, like conditional expectations $E[Y|X]$, or variance $\operatorname{Var}(Y|X)$, etc. Notice that something like $X|Y$ alone doesn’t make much sense. What would it be? “A random variable conditional on another random variable”? Conditioning is about the perspective you take when looking at the variable, not a property of the variable. You can “transform” conditional probability to joint, or marginal, or reverse it (Bayes theorem) with just simple mathematical manipulations on the distributions.
What does $X=x|Y=y$ actually mean by itself? The $|$ symbol in probability theory stands for “given”. You would most commonly see it used for conditional probability $P(Y|X)$, the probability of $X$ given $Y$. While it’s a slight abuse of notati
51,737
What does $X=x|Y=y$ actually mean by itself?
This can get quite philosophical fast. But Judea Pearl's book Causal Inference in Statistics: A Primer, Section 1.3.3, provides a nice intuition: in the frequentist interpretation, the operator $|$ implies a filtering of the data. An intuitive example would be a probability involving bounds on two variables, $P(X>a|Y<b)$: conditioning implies filtering the data, i.e., first removing the parts of the data where the condition $Y<b$ doesn't hold. Regarding whether $X|Y$ is an event or not: it isn't an event in its plain form, but the resulting filtering operation leads to an event. The philosophical part is then what the Bayesian interpretation would be, and whether a conditional probability can even exist in isolation.
What does $X=x|Y=y$ actually mean by itself?
This can get quite philosophical fast. But, Judea Pearl's book Causal Inference for Statistics Section 1.3.3 provides a nice intuition, the operator $|$ implies a filtering of the data in the freque
What does $X=x|Y=y$ actually mean by itself? This can get quite philosophical fast. But, Judea Pearl's book Causal Inference for Statistics Section 1.3.3 provides a nice intuition, the operator $|$ implies a filtering of the data in the frequentist interpretation. An intuitive example would be two variables having bounds $P(X>a|Y<b)$, so conditioning implies filtering the data, i.e., removing parts of the data where condition $Y<b$ doesn't hold first. Regarding if $X|Y$ is an event or not. It isn't an event in plain form but the resulting filtering operation leads to an event. Philosophical part of this then, what would be the Bayesian interpretation and even conditional probability can exist in isolation.
What does $X=x|Y=y$ actually mean by itself? This can get quite philosophical fast. But, Judea Pearl's book Causal Inference for Statistics Section 1.3.3 provides a nice intuition, the operator $|$ implies a filtering of the data in the freque
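A sketch of the "conditioning as filtering" reading: estimate P(X > a | Y < b) from simulated draws by first keeping only the rows where Y < b. The joint distribution and the thresholds a and b below are arbitrary choices for illustration.

import numpy as np

rng = np.random.default_rng(11)
n = 100_000
y = rng.normal(size=n)
x = 0.8 * y + rng.normal(scale=0.6, size=n)   # X and Y correlated

a, b = 0.5, 0.0
subset = x[y < b]                             # "filter" on the condition Y < b
p_cond = (subset > a).mean()                  # estimate of P(X > a | Y < b)
p_marg = (x > a).mean()                       # unconditional P(X > a), for comparison
print(round(p_cond, 3), round(p_marg, 3))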
51,738
Why does the t-test produce non significant p-values when there are outliers?
When you move an observation up you impact that group's sd as well as the mean. With the Welch test you also generally pull down the df. For two samples, of sizes 10 and 11, initially with the same standard deviation and half a standard deviation apart, here's the effect on the difference in means of moving one observation from the mean of the larger group up higher and higher, as well as on the standard error of the difference in means (top left), on the t-statistic (top right), on the d.f. in the Welch test (bottom left) and on the p-value (bottom right): (These are actually empirical influence functions for data values that are based off expected normal scores for n=10 and 10, shifting the second group up a little and then adding an additional observation to the second group and moving it up in multiple stages.) As you pull an observation up higher, the t-statistic increases for a little while, but then starts to come back down and then approaches an asymptote (as indicated at 1 in the plot above). The df decreases toward the df of the (now) smaller-variance group and the p-value initially decreases but then climbs again before levelling off. While this situation is not identical to that in your data, the basic pattern (t increases and then decreases, p decreases then increases) is fairly general.
Why does the t-test produce non significant p-values when there are outliers?
When you move an observation up you impact that group's sd as well as the mean. With the Welch test you also generally pull down the df. For two samples, of sizes 10 and 11, initially with the same st
Why does the t-test produce non significant p-values when there are outliers? When you move an observation up you impact that group's sd as well as the mean. With the Welch test you also generally pull down the df. For two samples, of sizes 10 and 11, initially with the same standard deviation and half a standard deviation apart, here's the effect on the difference in means of moving one observation from the mean of the larger group up higher and higher, as well as on the standard error of the difference in means (top left), on the t-statistic (top right), on the d.f. in the Welch test (bottom left) and on the p-value (bottom right): (These are actually empirical influence functions for data values that are based off expected normal scores for n=10 and 10, shifting the second group up a little and then adding an additional observation to the second group and moving it up in multiple stages.) As you pull an observation up higher, the t-statistic increases for a little while, but then starts to come back down and then approaches an asymptote (as indicated at 1 in the plot above). The df decreases toward the df of the (now) smaller-variance group and the p-value initially decreases but then climbs again before levelling off. While this situation is not identical to that in your data, the basic pattern (t increases and then decreases, p decreases then increases) is fairly general.
Why does the t-test produce non significant p-values when there are outliers? When you move an observation up you impact that group's sd as well as the mean. With the Welch test you also generally pull down the df. For two samples, of sizes 10 and 11, initially with the same st
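A sketch reproducing the qualitative pattern above with scipy, on simulated samples (group sizes, means, and the shifts below are arbitrary): drag one observation in the second group further and further up and watch the Welch t statistic, its degrees of freedom, and the p-value.

import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
g1 = rng.normal(0.0, 1.0, size=10)
g2 = rng.normal(0.5, 1.0, size=10)

def welch_df(a, b):
    # Welch-Satterthwaite degrees of freedom
    va, vb = a.var(ddof=1) / a.size, b.var(ddof=1) / b.size
    return (va + vb) ** 2 / (va ** 2 / (a.size - 1) + vb ** 2 / (b.size - 1))

for shift in [0, 2, 5, 10, 50, 500]:
    g2_mod = np.append(g2, g2.mean() + shift)           # add one increasingly extreme point
    t, p = stats.ttest_ind(g2_mod, g1, equal_var=False)  # Welch two-sample t-test
    print(f"shift={shift:5d}  t={t:6.2f}  df={welch_df(g2_mod, g1):5.1f}  p={p:.4f}")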
51,739
Why does the t-test produce non significant p-values when there are outliers?
You’re expecting to drag up the mean of that group by having a gigantic number, right? You’ll be successful in dragging up that mean by doing that. It also expands the variance, and that’s why you’re not getting a low p-value, despite the considerably different means. The t-test is hard to trick with these kinds of extreme points (so-called “outliers”, though most of us on here don’t like that term).
Why does the t-test produce non significant p-values when there are outliers?
You’re expecting to drag up the mean of that group by having a gigantic number, right? You’ll be successful in dragging up that mean by doing that. It also expands the variance, and that’s why you’re
Why does the t-test produce non significant p-values when there are outliers? You’re expecting to drag up the mean of that group by having a gigantic number, right? You’ll be successful in dragging up that mean by doing that. It also expands the variance, and that’s why you’re not getting a low p-value, despite the considerably different means. The t-test is hard to trick with these kinds of extreme points (so-called “outliers”, though most of us on here don’t like that term).
Why does the t-test produce non significant p-values when there are outliers? You’re expecting to drag up the mean of that group by having a gigantic number, right? You’ll be successful in dragging up that mean by doing that. It also expands the variance, and that’s why you’re
51,740
Why does the t-test produce non significant p-values when there are outliers?
You get a small p-value from a t-test (and many other types of test as well) when the difference between the sample means is large, as you probably expected. However, the test is looking for 'large' relative to the variability in the samples, and your introduction of an 'outlier' has inflated the variability, so the scaled difference is not large. Technically the scaling of the mean difference is relative to the standard error of the mean, which is the standard deviation divided by the square root of the number of observations.
Why does the t-test produce non significant p-values when there are outliers?
You get a small p-value from a t-test (and many other types of test as well) when the mean difference between the sample means is large, as you probably expected. However, the test is looking for 'lar
Why does the t-test produce non significant p-values when there are outliers? You get a small p-value from a t-test (and many other types of test as well) when the mean difference between the sample means is large, as you probably expected. However, the test is looking for 'large' relative the the variability in the samples and your introduction of an 'outlier' has inflated the variability and so the scaled difference is not large. Technically the scaling of the mean difference is relative to the standard error of the mean, which is the standard deviation divided by the square root of the number of observations.
Why does the t-test produce non significant p-values when there are outliers? You get a small p-value from a t-test (and many other types of test as well) when the mean difference between the sample means is large, as you probably expected. However, the test is looking for 'lar
51,741
Interpretation of coefficients in a poorly performing GLM
The statistical interpretation of the coefficients doesn't depend on how the model was fit. I could make completely random guesses of the coefficients and they would have the same interpretation as they would had I estimated them with maximum likelihood. For two units identical on all measured variables except that they differed on $X_1$ by one unit, the difference in the log odds of success is $\beta_1$. That interpretation comes directly from simply writing down the regression equation and has nothing to do with the fitting process. To interpret the coefficients as consistent estimates of some "true" association, or as total effects rather than direct effects, or as causal effects rather than mere conditional assocations, requires more assumptions, far more than whether the model fit well in your sample. For example, let's say the true data-generating (i.e., structural causal) model was $$P(Y=1|X_1,X_2) = expit(\gamma_0 + \gamma_1 X_1 + \gamma_2 X_2)$$ Let's say I'm considering the model $$P(Y=1|X_1) = expit(\beta_0 + \beta_1 X_1)$$ which excludes $X_2$. $\beta_1$ doesn't have a causal interpretation, but it's the regression slope you would get if you were to fit that model to the population data (i.e., so there is no sampling error). The interpretation of $\beta_1$ in this model is: For two units that differed on $X_1$ by one unit, the difference in the log odds of success is $\beta_1$. Let's say I collect a sample and then pull an estimate of $\beta_1$ out of a hat and call it $\hat \beta_1^{guess}$. Even though that value is completely unconnected to the sample, it still has the same interpretation as any other estimate of $\beta_1$, which is as an estimate of the difference in the log odds of success for two units that differed on $X_1$ by one unit. It's not a valid or consistent estimate, but it's an estimate of a quantity that has a clear interpretation. The quantity ($\beta_1$) does not have a causal interpretation, but it's still meaningfully interpretable as an associational quantity. If I were to estimate $\beta_1$ with maximum likelihood, and call the estimate $\hat \beta_1^{MLE}$, it has the same interpretation as $\hat \beta_1^{guess}$, which is that it is an estimate of $\beta_1$, which, again, has a clear interpretation. $\hat \beta_1^{MLE}$ is a consistent estimate of $\beta_1$, so if I were to want to know what $\beta_1$ was I would be inclined to say it's closer to $\hat \beta_1^{MLE}$ than it is to $\hat \beta_1^{guess}$. $\hat \beta_1^{MLE}$ could result from a terribly fitting model, and that would say nothing of its interpretation. A terribly fitting model might result because we failed to include $X_2$ in it. That doesn't change how $\beta_1$, and thus how $\hat \beta_1^{MLE}$ and $\hat \beta_1^{guess}$, are interpreted. If you wanted to interpret a regression coefficient as causal, then you want to estimate $\gamma_1$, not $\beta_1$. The interpretation of $\gamma_1$ is the change in the log odds of success caused by intervening on $X_1$ by one unit while holding $X_2$ constant. Any estimate of $\gamma_1$, regardless of how it came to be, could be interpreted as an estimate of the change in the log odds of success caused by intervening on $X_1$ by one unit while holding $X_2$ constant. You could even use $\hat \beta_1^{guess}$ as an estimate of $\gamma_1$ and it would still have this interpretation. It would likely be a bad estimate that you shouldn't trust, but that doesn't change its interpretation. 
Even if you estimated $\gamma_1$ using maximum likelihood estimation of a model that included both $X_1$ and $X_2$, its interpretation would be the same; it would likely just be a better estimate (but it doesn't mean it's a good estimate!). All this is to say that the interpretation of coefficients comes from the model as it is written, not the way they are estimated or how well the estimated model fits. These may serve as indicators as to whether the estimated coefficients might be close to the population versions they are trying to approximate, but not how they should be interpreted. For example, a poorly fitting model resulting from regressing $Y$ on $X_1$ may indicate that $\hat \beta_1$ is a poor estimate of $\gamma_1$, but it may be a good estimate of $\beta_1$. The interpretations of $\beta_1$ and $\gamma_1$ are unrelated to how the estimates were generated, and the interpretation of the estimates is simply as estimates of those quantities.
Interpretation of coefficients in a poorly performing GLM
The statistical interpretation of the coefficients doesn't depend on how the model was fit. I could make completely random guesses of the coefficients and they would have the same interpretation as th
Interpretation of coefficients in a poorly performing GLM The statistical interpretation of the coefficients doesn't depend on how the model was fit. I could make completely random guesses of the coefficients and they would have the same interpretation as they would had I estimated them with maximum likelihood. For two units identical on all measured variables except that they differed on $X_1$ by one unit, the difference in the log odds of success is $\beta_1$. That interpretation comes directly from simply writing down the regression equation and has nothing to do with the fitting process. To interpret the coefficients as consistent estimates of some "true" association, or as total effects rather than direct effects, or as causal effects rather than mere conditional assocations, requires more assumptions, far more than whether the model fit well in your sample. For example, let's say the true data-generating (i.e., structural causal) model was $$P(Y=1|X_1,X_2) = expit(\gamma_0 + \gamma_1 X_1 + \gamma_2 X_2)$$ Let's say I'm considering the model $$P(Y=1|X_1) = expit(\beta_0 + \beta_1 X_1)$$ which excludes $X_2$. $\beta_1$ doesn't have a causal interpretation, but it's the regression slope you would get if you were to fit that model to the population data (i.e., so there is no sampling error). The interpretation of $\beta_1$ in this model is: For two units that differed on $X_1$ by one unit, the difference in the log odds of success is $\beta_1$. Let's say I collect a sample and then pull an estimate of $\beta_1$ out of a hat and call it $\hat \beta_1^{guess}$. Even though that value is completely unconnected to the sample, it still has the same interpretation as any other estimate of $\beta_1$, which is as an estimate of the difference in the log odds of success for two units that differed on $X_1$ by one unit. It's not a valid or consistent estimate, but it's an estimate of a quantity that has a clear interpretation. The quantity ($\beta_1$) does not have a causal interpretation, but it's still meaningfully interpretable as an associational quantity. If I were to estimate $\beta_1$ with maximum likelihood, and call the estimate $\hat \beta_1^{MLE}$, it has the same interpretation as $\hat \beta_1^{guess}$, which is that it is an estimate of $\beta_1$, which, again, has a clear interpretation. $\hat \beta_1^{MLE}$ is a consistent estimate of $\beta_1$, so if I were to want to know what $\beta_1$ was I would be inclined to say it's closer to $\hat \beta_1^{MLE}$ than it is to $\hat \beta_1^{guess}$. $\hat \beta_1^{MLE}$ could result from a terribly fitting model, and that would say nothing of its interpretation. A terribly fitting model might result because we failed to include $X_2$ in it. That doesn't change how $\beta_1$, and thus how $\hat \beta_1^{MLE}$ and $\hat \beta_1^{guess}$, are interpreted. If you wanted to interpret a regression coefficient as causal, then you want to estimate $\gamma_1$, not $\beta_1$. The interpretation of $\gamma_1$ is the change in the log odds of success caused by intervening on $X_1$ by one unit while holding $X_2$ constant. Any estimate of $\gamma_1$, regardless of how it came to be, could be interpreted as an estimate of the change in the log odds of success caused by intervening on $X_1$ by one unit while holding $X_2$ constant. You could even use $\hat \beta_1^{guess}$ as an estimate of $\gamma_1$ and it would still have this interpretation. It would likely be a bad estimate that you shouldn't trust, but that doesn't change its interpretation. 
Even if you estimated $\gamma_1$ using maximum likelihood estimation of a model that included both $X_1$ and $X_2$, its interpretation would be the same; it would likely just be a better estimate (but it doesn't mean it's a good estimate!). All this is to say that the interpretation of coefficients comes from the model as it is written, not the way they are estimated or how well the estimated model fits. These may serve as indicators as to whether the estimated coefficients might be close to the population versions they are trying to approximate, but not how they should be interpreted. For example, a poorly fitting model resulting from regressing $Y$ on $X_1$ may indicate that $\hat \beta_1$ is a poor estimate of $\gamma_1$, but it may be a good estimate of $\beta_1$. The interpretations of $\beta_1$ and $\gamma_1$ are unrelated to how the estimates were generated, and the interpretation of the estimates is simply as estimates of those quantities.
Interpretation of coefficients in a poorly performing GLM The statistical interpretation of the coefficients doesn't depend on how the model was fit. I could make completely random guesses of the coefficients and they would have the same interpretation as th
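A hypothetical simulation of the $\beta_1$ versus $\gamma_1$ distinction above, using statsmodels on made-up data: generate outcomes from a logistic model in X1 and X2, then compare the X1 coefficient from the full model (an estimate of the causal $\gamma_1$) with the X1 coefficient from the model that omits X2 (an estimate of $\beta_1$, a different but still well-defined associational quantity).

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 50_000
x1 = rng.normal(size=n)
x2 = 0.6 * x1 + rng.normal(size=n)        # X2 correlated with X1
eta = -0.5 + 1.0 * x1 + 1.5 * x2          # true gamma coefficients
p = 1 / (1 + np.exp(-eta))
y = rng.binomial(1, p)

full = sm.Logit(y, sm.add_constant(np.column_stack([x1, x2]))).fit(disp=0)
reduced = sm.Logit(y, sm.add_constant(x1)).fit(disp=0)

print("gamma_1 estimate (X2 included):", round(full.params[1], 3))     # close to 1.0
print("beta_1 estimate  (X2 omitted): ", round(reduced.params[1], 3))  # estimates a different quantity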
51,742
Interpretation of coefficients in a poorly performing GLM
We do something like this all the time when we do t-testing of means. Remember that a t-test of means is equivalent to a one-way ANOVA with two groups, meaning that we fit a regression like: $$\hat{y}_i = \hat{\beta}_0 + \hat{\beta}_1x_i$$ where $x_i$ is a $0/1$ indicator variable for group membership. When you do a t-test, you often leave lots of variance unexplained.
set.seed(2020)
N <- 250
x <- c(rep(0, N), rep(1, N))                 # group indicator
y <- c(rnorm(N, 0, 1), rnorm(N, 0.5, 1))     # group means differ by 0.5
tt <- t.test(y[x == 0], y[x == 1], var.equal = TRUE)$p.value
L <- lm(y ~ x)
summary(L)
tt
The p-value is tiny, $8.48\times 10^{-5}$, and the correct value of $\beta_1=0.5$ is within the $95\%$ confidence interval, yet the $R^2 = 0.03057$. So yes, it can be acceptable to do the same when you do a logistic regression instead of a linear regression. It might be a terrible idea, but poor fit alone is not a reason to keep from interpreting the coefficients. Consider the situation where the true conditional probabilities are all around $0.5$. You shouldn't be able to do much better than guessing. Finally, be leery of using improper scoring rules like AUCROC. There are many posts on here about this topic, some of which are mine. This linked post has an excellent answer with some links. The "Frank Harrell" I mention says that ROCAUC can be used for diagnostics of a model on its own---does it perform well at all---but is not for model comparisons.
Interpretation of coefficients in a poorly performing GLM
We do something like this all the time when we do t-testing of means. Remember that a t-test of means is a two-sample ANOVA, meaning that we do a regression like: $$\hat{y}_i = \hat{\beta}_0 + \hat{\b
Interpretation of coefficients in a poorly performing GLM We do something like this all the time when we do t-testing of means. Remember that a t-test of means is a two-sample ANOVA, meaning that we do a regression like: $$\hat{y}_i = \hat{\beta}_0 + \hat{\beta}_1x_i$$ where $x_i$ is a $0/1$ indicator variable for group membership. When you do a t-test, you often leave lots of variance unexplained. set.seed(2020) N <- 250 x <- c(rep(0, N), rep(1, N)) y <- c(rnorm(N, 0, 1), rnorm(N, 0.5, 1)) tt <- t.test(y[x==0], y[x==1], var.equal=T)$p.value L <- lm(y~x) summary(L) tt The p-value is tiny, $8.48\times 10^{-5}$, and the correct value of $\beta_1=0.5$ is within the $95\%$ confidence interval, yet the $R^2 = 0.03057$. So yes, it can be acceptable to do the same when you do a logistic regression instead of a linear regression. It might be a terrible idea, but poor fit alone is not a reason to keep from interpreting the coefficients. Consider the situation where the true conditional probabilities are all around $0.5$. You shouldn't be able to do much better than guessing. Finally, be leery of using improper scoring rules like AUCROC. There are many posts on here about this topic, some of which are mine. This linked post has an excellent answer with some links. The "Frank Harrell" I mention says that ROCAUC can be used for diagnostics of a model on its own---does it perform well at all---but is not for model comparisons.
Interpretation of coefficients in a poorly performing GLM We do something like this all the time when we do t-testing of means. Remember that a t-test of means is a two-sample ANOVA, meaning that we do a regression like: $$\hat{y}_i = \hat{\beta}_0 + \hat{\b
51,743
Interpretation of coefficients in a poorly performing GLM
My advice on how to gain some guidance in the particular context of a poor regression model is to construct a model where, if the correct specification is provided along with its random error structure, it actually performs well. The latter is determined by running the usual parameter estimation routines over repeated simulation runs. This exercise also assists in interpreting the coefficients of a particular model when the model's underlying assumptions are theoretically accurate. The next step requires specific knowledge of the context so as to introduce a plausible model misspecification error (say, by omitting a significant contributing variable, or by having to employ an imperfectly correlated proxy variable). Re-estimate, and now compare the observed coefficients over repeated trials to the known values from the correct theoretical model. If the particular analysis you are employing is highly sensitive to such misspecifications, you will be quantifiably educated and may wish to investigate more robust alternatives. You may also find modeling approaches that are surprisingly robust. It may also be the case that the estimation routine itself, rather than the model per se, is not particularly robust for the particular parameter values.
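Here is one way such an exercise might be sketched in R (every number below is invented purely for illustration): the "true" model uses x1 and x2, while the misspecified fit omits x2.
set.seed(42)
sim_once <- function(n = 200, b1 = 0.5, b2 = 0.8, rho = 0.6) {
  x1 <- rnorm(n)
  x2 <- rho * x1 + sqrt(1 - rho^2) * rnorm(n)
  y  <- 1 + b1 * x1 + b2 * x2 + rnorm(n)
  c(full = coef(lm(y ~ x1 + x2))["x1"],      # correctly specified
    omit = coef(lm(y ~ x1))["x1"])           # misspecified: x2 unavailable
}
est <- replicate(2000, sim_once())
rowMeans(est)          # compare both estimators to the known b1 = 0.5
apply(est, 1, sd)      # and see how much their sampling variability differs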
Interpretation of coefficients in a poorly performing GLM
My advice on how to gain some guidance in a particular context of a poor regression model, is to proceed to construct a model where, if the correct model specification is provided, along with its ran
Interpretation of coefficients in a poorly performing GLM My advice on how to gain some guidance in a particular context of a poor regression model, is to proceed to construct a model where, if the correct model specification is provided, along with its random error structure, it actually performs well. The latter is determined based on parameter estimation routines as commonly employed over repeated simulation runs. This exercise also assists in interpreting the coefficients of a particular model when the model's underlying assumptions are theoretically accurate. The next step requires specific knowledge of the context so as to introduce a reasonable occurring model misspecification error (by say lacking availability to a significant contributing variable, or having to employ a less than perfect correlated variable). Re-estimate and now compare observed coefficients over repeated trials to the actual known values for the correct theoretical model. If the particular analysis you are employing is, say, highly sensitive to such misspecifications, you will be quantifiably educated and may wish to investigate other robust alternatives. You may also find modeling approaches that a surprisingly robust. Also, it may be the case, that the estimation routine itself is not particularly robust based on the particular parameter values, and not, per se, the model itself.
Interpretation of coefficients in a poorly performing GLM My advice on how to gain some guidance in a particular context of a poor regression model, is to proceed to construct a model where, if the correct model specification is provided, along with its ran
51,744
Logarithm of dependent variable is uniformly distributed. How to calculate a confidence interval for the mean?
In regression problems the marginal distribution of Y does not matter. The conditional distribution of Y | X is what matters. For some problems this translates to examining the distribution of the model residuals. But your sample size is too small to check assumptions. It would be far better to use a robust approach that has many fewer assumptions, e.g.
If you have one X and want to quantify the strength of relationship between X and Y, use a rank correlation coefficient (a short R sketch follows this list).
Use a semiparametric regression model such as the proportional odds model that makes no assumption about the distribution of Y | X but only makes assumptions about the relative shapes of conditional distributions across different values of X. This generalizes rank correlation and Wilcoxon-type methods.
Use a Bayesian model that generalizes the usual models, e.g., one that has a prior distribution for the degree of non-normality or non-constant variance of conditional distributions.
Use the bootstrap as advised above, but be cautious because the bootstrap is only approximate.
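As a small illustration of the first suggestion (with a made-up predictor x standing in for whatever X you have, and y standing in for the 24 observations), a rank correlation requires no distributional assumption on Y:
set.seed(7)
x <- runif(24)
y <- exp(runif(24, 0, 3)) + 0.5 * x          # log of y roughly uniform, weak dependence on x
cor.test(x, y, method = "spearman")          # Spearman's rho and its test
For the semiparametric route, a function along the lines of rms::orm(y ~ x) fits a proportional odds model without assuming a distribution for Y.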
Logarithm of dependent variable is uniformly distributed. How to calculate a confidence interval for
In regression problems the marginal distribution of Y does not matter. The conditional distribution of Y | X is what matters. For some problems this translates to examining the distribution of the m
Logarithm of dependent variable is uniformly distributed. How to calculate a confidence interval for the mean? In regression problems the marginal distribution of Y does not matter. The conditional distribution of Y | X is what matters. For some problems this translates to examining the distribution of the model residuals. But your sample size is too small to check assumptions. It would be far better to use a robust approach that has many fewer assumptions, e.g. If you have one X and want to quantify the strength of relationship between X and Y, use a rank correlation coefficient Use a semiparametric regression model such as the proportional odds model that makes no assumption about the distribution of Y | X but only makes assumptions about the relative shapes of conditional distributions across different value of X. This generalizes rank correlation and Wilcoxon-type methods. Use a Bayesian model that generalizes the usual models, e.g., that has a prior distribution for the degree of non-normality or non-constant variance of conditional distributions. Use the bootstrap as advised above, but be cautious because the bootstrap is only approximate.
Logarithm of dependent variable is uniformly distributed. How to calculate a confidence interval for In regression problems the marginal distribution of Y does not matter. The conditional distribution of Y | X is what matters. For some problems this translates to examining the distribution of the m
51,745
Logarithm of dependent variable is uniformly distributed. How to calculate a confidence interval for the mean?
Can I treat this as log-normal data even though its logarithm is clearly not following a bell curve? No, in this case you are dealing with a variable that is log-uniform distributed. One approach is to use the bootstrap, where you take samples repeatedly with replacement and compute the means of the samples. See the answers to this question, which deals with log-normal data, but the principles are the same: How do I calculate a confidence interval for the mean of a log-normal data set?
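A bare-bones version of that bootstrap in R (x below is simulated to stand in for the 24 observations whose log is roughly uniform):
set.seed(1)
x <- exp(runif(24, 0, 3))                               # log(x) ~ Uniform(0, 3)
boot_means <- replicate(10000, mean(sample(x, replace = TRUE)))
quantile(boot_means, c(0.025, 0.975))                   # percentile 95% CI for the mean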
Logarithm of dependent variable is uniformly distributed. How to calculate a confidence interval for
Can I treat this as log-normal data even though its logarithm is clearly not following a bell curve? No, in this case you are dealing with a variable that is log-uniform distributed. One approach is
Logarithm of dependent variable is uniformly distributed. How to calculate a confidence interval for the mean? Can I treat this as log-normal data even though its logarithm is clearly not following a bell curve? No, in this case you are dealing with a variable that is log-uniform distributed. One approach is to use the bootstrap, where you takes samples repeatedly with replacement and compute the means of the samples. See the answers to this question, which deals with log-normal data, but the principles are the same: How do I calculate a confidence interval for the mean of a log-normal data set?
Logarithm of dependent variable is uniformly distributed. How to calculate a confidence interval for Can I treat this as log-normal data even though its logarithm is clearly not following a bell curve? No, in this case you are dealing with a variable that is log-uniform distributed. One approach is
51,746
Logarithm of dependent variable is uniformly distributed. How to calculate a confidence interval for the mean?
Something whose log would be close to uniform is somewhat skew, but not particularly difficult to deal with; sample means of size 24 will be very close to normally distributed. If it were actually log-uniform we could work out a suitable interval fairly readily but I wouldn't actually use the fact that the sample looks like its log is uniform; with only 24 observations that judgement may be rather suspect and you certainly don't want to be doing such model selection/identification on the very sample you're using for inference, since you don't have a good way of accounting for the effect of that (e.g. it will tend to make intervals narrower than they should be but quantifying how much is rather tricky). If they were actually log-uniform those narrower intervals would be "honestly" narrow, but there's no good basis to say so. Simulation at n=24 suggests that a 95% two-tailed t-interval on the untransformed data should perform reasonably well (i.e. give coverage very close to 95%) for data something similar to this, even though this is not actually normal and the distribution of sample means is slightly skew. If you want to go far into the tails it may be more of an issue, but a 95% two-sided interval should be fine.
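Roughly the kind of simulation referred to, sketched in R (the log-Uniform(0, 3) population is my assumption, not the poster's data):
set.seed(123)
true_mean <- (exp(3) - 1) / 3          # E[X] when log(X) ~ Uniform(0, 3)
covered <- replicate(10000, {
  x  <- exp(runif(24, 0, 3))
  ci <- t.test(x)$conf.int
  ci[1] <= true_mean && true_mean <= ci[2]
})
mean(covered)                          # typically close to the nominal 0.95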
Logarithm of dependent variable is uniformly distributed. How to calculate a confidence interval for
Something whose log would be close to uniform is somewhat skew, but not particularly difficult to deal with; sample means of size 24 will be very close to normally distributed. If it were actually log
Logarithm of dependent variable is uniformly distributed. How to calculate a confidence interval for the mean? Something whose log would be close to uniform is somewhat skew, but not particularly difficult to deal with; sample means of size 24 will be very close to normally distributed. If it were actually log-uniform we could work out a suitable interval fairly readily but I wouldn't actually use the fact that the sample looks like its log is uniform; with only 24 observations that judgement may be rather suspect and you certainly don't want to be doing such model selection/identification on the very sample you're using for inference, since you don't have a good way of accounting for the effect of that (e.g. it will tend to make intervals narrower than they should be but quantifying how much is rather tricky). If they were actually log-uniform those narrower intervals would be "honestly" narrow, but there's no good basis to say so. Simulation at n=24 suggests that a 95% two-tailed t-interval on the untransformed data should perform reasonably well (i.e. give coverage very close to 95%) for data something similar to this, even though this is not actually normal and the distribution of sample means is slightly skew. If you want to go far into the tails it may be more of an issue, but a 95% two-sided interval should be fine.
Logarithm of dependent variable is uniformly distributed. How to calculate a confidence interval for Something whose log would be close to uniform is somewhat skew, but not particularly difficult to deal with; sample means of size 24 will be very close to normally distributed. If it were actually log
51,747
Inverse of the covariance matrix of a multivariate normal distribution
If the variables are perfectly correlated, i.e. $\rho=1$, then the covariance matrix becomes: $$\Sigma=\begin{bmatrix}\sigma_1^2 & \sigma_1\sigma_2 \\ \sigma_1\sigma_2 & \sigma_2^2 \end{bmatrix}$$ and its determinant is $\Delta=\sigma_1^2\sigma_2^2-\sigma_1\sigma_2\sigma_1\sigma_2=0$, which means the matrix is not invertible. A possible case where this occurs is $X_1=\alpha X_2$, as in @Xian's comment. Here $\alpha>0$; for $\alpha<0$ we get $\rho=-1$, which still doesn't save $\Sigma$. It is only invertible when $|\rho|<1$, since the covariance matrix is actually $$\Sigma=\begin{bmatrix}\sigma_1^2 & \rho\sigma_1\sigma_2 \\ \rho\sigma_1\sigma_2 & \sigma_2^2 \end{bmatrix}$$ and the determinant is $\Delta=\sigma_1^2\sigma_2^2(1-\rho^2)$, which is $>0$ when $|\rho|<1$.
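A quick numerical check in R (the sigma values are arbitrary):
Sigma <- function(s1, s2, rho) matrix(c(s1^2, rho*s1*s2, rho*s1*s2, s2^2), 2, 2)
det(Sigma(1, 2, 1))       # 0: perfectly correlated, so not invertible
det(Sigma(1, 2, 0.9))     # positive when |rho| < 1
solve(Sigma(1, 2, 0.9))   # the inverse (precision matrix) exists here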
Inverse of the covariance matrix of a multivariate normal distribution
If the variables are perfectly correlated, i.e. $\rho=1$, then covariance matrix becomes: $$\Sigma=\begin{bmatrix}\sigma_1^2 & \sigma_1\sigma_2 \\ \sigma_1\sigma_2 & \sigma_2^2 \end{bmatrix}$$ and its
Inverse of the covariance matrix of a multivariate normal distribution If the variables are perfectly correlated, i.e. $\rho=1$, then covariance matrix becomes: $$\Sigma=\begin{bmatrix}\sigma_1^2 & \sigma_1\sigma_2 \\ \sigma_1\sigma_2 & \sigma_2^2 \end{bmatrix}$$ and its determinant is $\Delta=\sigma_1^2\sigma_2^2-\sigma_1\sigma_2\sigma_1\sigma_2=0$, which means the matrix is not invertible. A possible case this occurs is $X_1=\alpha X_2$ as in @Xian's comment. Here $\alpha>0$, but for $\alpha<0$ $\rho=-1$ which still doesn't save the $\Sigma$. It is only invertible when $|\rho|<1$ since the covariance matrix is actually $$\Sigma=\begin{bmatrix}\sigma_1^2 & \rho\sigma_1\sigma_2 \\ \rho\sigma_1\sigma_2 & \sigma_2^2 \end{bmatrix}$$ And, the determinant is $\Delta=\sigma_1^2\sigma_2^2(1-\rho^2)$, which is $>0$ when $|\rho|<1$.
Inverse of the covariance matrix of a multivariate normal distribution If the variables are perfectly correlated, i.e. $\rho=1$, then covariance matrix becomes: $$\Sigma=\begin{bmatrix}\sigma_1^2 & \sigma_1\sigma_2 \\ \sigma_1\sigma_2 & \sigma_2^2 \end{bmatrix}$$ and its
51,748
Inverse of the covariance matrix of a multivariate normal distribution
No. The covariance matrix of two perfectly correlated standard normal random variables is given by $\Sigma = \pmatrix{1 & 1 \\1 & 1}$, which is not invertible.
Inverse of the covariance matrix of a multivariate normal distribution
No. The covariance matrix of two perfectly correlated standard normal random variables is given by $\Sigma = \pmatrix{1 & 1 \\1 & 1}$, which is not invertible.
Inverse of the covariance matrix of a multivariate normal distribution No. The covariance matrix of two perfectly correlated standard normal random variables is given by $\Sigma = \pmatrix{1 & 1 \\1 & 1}$, which is not invertible.
Inverse of the covariance matrix of a multivariate normal distribution No. The covariance matrix of two perfectly correlated standard normal random variables is given by $\Sigma = \pmatrix{1 & 1 \\1 & 1}$, which is not invertible.
51,749
How can we calculate the probability that the randomly chosen function will be strictly increasing?
Let us pick $m$ elements from $\{1,\dotsc,n\}$, let us call these $a_1 < a_2 < \dotsc < a_m$. Clearly these define a strictly increasing function $f$ from $\{1,\dotsc,m\} \to \{1,\dotsc,n\}$ via the rule $f(i) = a_i$. Furthermore, any strictly increasing function defined on the above sets is of this form. Hence there are exactly ${n \choose m}$ strictly increasing functions. On the other hand, in total there are $n^m$ functions mapping between these two sets. Assuming that by "random" the OP means the uniform measure on the $n^m$ functions above, the probability of picking a strictly increasing function is: $$ \frac{{n \choose m}}{n^m} $$ For example, for $n \gg m$, an application of Stirling's approximation shows that the RHS is $\approx \frac{1}{m!}$.
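A quick sanity check of the formula in R (n and m picked arbitrarily):
n <- 10; m <- 3
choose(n, m) / n^m                                      # exact probability
set.seed(1)
f <- replicate(1e5, sample.int(n, m, replace = TRUE))   # random functions, one per column
mean(apply(f, 2, function(v) all(diff(v) > 0)))         # simulated fraction strictly increasing
1 / factorial(m)                                        # the crude large-n approximation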
How can we calculate the probability that the randomly chosen function will be strictly increasing?
Let us pick $m$ elements from $\{1,\dotsc,n\}$, let us call these $a_1 < a_2 < \dotsc , a_m$. Clearly these define a strictly increasing function $f$ from $\{1,\dotsc,m\} \to \{1,\dotsc,n\}$ via the r
How can we calculate the probability that the randomly chosen function will be strictly increasing? Let us pick $m$ elements from $\{1,\dotsc,n\}$, let us call these $a_1 < a_2 < \dotsc , a_m$. Clearly these define a strictly increasing function $f$ from $\{1,\dotsc,m\} \to \{1,\dotsc,n\}$ via the rule $f(i) = a_i$. Furthermore, any strictly increasing function defined on the above sets is of this form. Hence there are exactly ${n \choose m}$ strictly increasing functions. On the other hand, in total there are $n^m$ functions mapping between these two sets. Assuming that by "random" the OP means the uniform measure on the $n^m$ functions above, then the probability of picking a strictly increasing function is: $$ \frac{{n \choose m}}{n^m} $$ For example, for $n >> m$, an application of Stirling's approximation, shows that the RHS is $ \approx \frac{1}{m!}$.
How can we calculate the probability that the randomly chosen function will be strictly increasing? Let us pick $m$ elements from $\{1,\dotsc,n\}$, let us call these $a_1 < a_2 < \dotsc , a_m$. Clearly these define a strictly increasing function $f$ from $\{1,\dotsc,m\} \to \{1,\dotsc,n\}$ via the r
51,750
How can we calculate the probability that the randomly chosen function will be strictly increasing?
Let $S(n,m)$ be the number of sub-arrays $1 \leqslant k_1 < k_2 < \cdots < k_m \leqslant n$ containing $m$ integer values that are increasing and are bounded by the values one and $n$. This function of two arguments is well-defined for all integers $1 \leqslant m \leqslant n$, giving a triangular array of values. With a simple combinatorial argument$^\dagger$ we can establish the following recursive equations that define this function: $$S(n+1,m) = S(n,m) + S(n,m-1) \quad \quad \quad \quad S(n,1) = n.$$ Solving this recursive equation gives us the explicit formula: $$S(n,m) = {n \choose m} = \frac{n!}{m!(n-m)!}.$$ (There are other combinatorial arguments that also lead you to this result. For example, choosing an increasing function is equivalent to choosing $m$ values in the co-domain, which are then placed in increasing order.) Now, to get the result we need to be clear on exactly how a "random function" on this domain and co-domain is chosen. The simplest specification is to say that each possible mapping is chosen with equal probability, which means that there are $n^m$ equiprobable functions. Hence, the probability of interest is: $$\mathbb{P}(\text{Increasing Function}) = \frac{n!}{m!(n-m)! \cdot n^m}.$$ Taking a first-order Stirling approximation for large $n$ gives $\mathbb{P}(\text{Increasing Function}) \approx 1/m!$, which is a very crude estimate that is suitable when $n$ is substantially larger than $m$. So basically, we see that once the co-domain in this problem is large, the probability of getting an increasing sequence at random is small; this accords with intuition. $^\dagger$ If $m=1$ then we have only a single value in the mapping and every mapping to any of the $n$ places gives an increasing map. We therefore have $S(n,1)=n$ for all $n \in \mathbb{N}$. Moreover, the number of sub-arrays $S(n+1,m)$ includes all sub-arrays where the values occur in the first $n$ places (there are $S(n,m)$ of these) and all the sub-arrays where the last value occurs in the last place and the remaining values occur before this (there are $S(n,m-1)$ of these).
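The recursion can be checked directly against ${n \choose m}$ with a short, unoptimized R translation of the two defining equations (the $S(n,m)=0$ case for $m>n$ is added so the recursion terminates):
S <- function(n, m) {
  if (m == 1) return(n)        # base case S(n, 1) = n
  if (m > n) return(0)         # no increasing sub-array can be longer than n
  S(n - 1, m) + S(n - 1, m - 1)
}
c(S(10, 4), choose(10, 4))     # both give 210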
How can we calculate the probability that the randomly chosen function will be strictly increasing?
Let $S(n,m)$ be the number of sub-arrays $1 \leqslant k_1 < k_2 < \cdots < k_m \leqslant n$ containing $m$ integer values that are increasing and are bounded by the values one and $n$. This binary fu
How can we calculate the probability that the randomly chosen function will be strictly increasing? Let $S(n,m)$ be the number of sub-arrays $1 \leqslant k_1 < k_2 < \cdots < k_m \leqslant n$ containing $m$ integer values that are increasing and are bounded by the values one and $n$. This binary function is well-defined for all integers $1 \leqslant m \leqslant n$, giving a triangular array of values. With a simple combinatorial argument$^\dagger$ we can establish the following recursive equations that define this binary function: $$S(n+1,m) = S(n,m) + S(n,m-1) \quad \quad \quad \quad S(n,1) = n.$$ Solving this recursive equation gives us the explicit formula: $$S(n,m) = {n \choose m} = \frac{n!}{m!(n-m)!}.$$ (There are other combinatorial arguments that also lead you to this result. For example, choosing an increasing function is equivalent to choosing $m$ values in the co-domain, which are then placed in increasing order.) Now, to get the result we need to be clear on exactly how a "random function" on this domain and co-domain is chosen. The simplest specification is to say that each possible mapping is chosen with equal probability, which means that there are $n^m$ equiprobable functions. Hence, the probability of interest is: $$\mathbb{P}(\text{Increasing Function}) = \frac{n!}{m!(n-m)! \cdot n^m}.$$ Taking a first-order Stirling approximation for large $n$ gives $\mathbb{P}(\text{Increasing Function}) \approx 1/m!$, which is a very crude estimate that is suitable when $n$ is substantially larger than $m$. So basically, we see that once the co-domain in this problem is large, the probability of getting an increasing sequence at random is small; this accords with intuition. $^\dagger$ If $m=1$ then we have only a single value in the mapping and every mapping to any of the $n$ places gives an increasing map. We therefore have $S(n,1)=n$ for all $n \in \mathbb{N}$. Moreover, the number of sub-arrays $S(n+1,m)$ includes all sub-arrays where the values occurs in the first $n$ places (there are $S(n,m)$ of these) and all the sub-arrays where the last value occurs in the last place and the remaining values occur before this (there are $S(n,m-1)$ of these).
How can we calculate the probability that the randomly chosen function will be strictly increasing? Let $S(n,m)$ be the number of sub-arrays $1 \leqslant k_1 < k_2 < \cdots < k_m \leqslant n$ containing $m$ integer values that are increasing and are bounded by the values one and $n$. This binary fu
51,751
Why i.i.d. is the most conservative distribution assumption
I think the use of the word conservative here is interesting, to say the least. I'm used to saying it's the strongest assumption, the one that's hardest to prove that it holds and frankly the one that's probably violated most easily. It's the assumption that's easiest to build upon when teaching regression theory. You don't need to worry about correlations and all the problems that they bring. You can easily apply the CLT to get the asymptotic variances of parameters etc. You'll notice how easy it is to work with i.i.d. errors the moment you start talking about time series. All of a sudden you realize that the assumptions that are somewhat reasonable in cross-sectional analysis usually do not hold in time series. Even in cross-sectional analysis you don't really need independence and can get away with a weakened assumption, e.g. see the Gauss-Markov theorem. To me, semantically it's better to use the word conservative when referencing the weakest assumption, i.e. the one that should hold true in most situations, not the strongest one, which holds rarely if ever. I would call the i.i.d. assumption the most liberal, because it also liberates you from the necessity to deal with all the correlation and dependence issues; it lets you build this wonderful ideal world of independent errors. I could also call the IID assumption outlandish.
Why i.i.d. is the most conservative distribution assumption
I think the use of a word conservative here is interesting to say the least. I'm used to saying it's the strongest assumption, the one that's hardest to prove that it holds and frankly the one that's
Why i.i.d. is the most conservative distribution assumption I think the use of a word conservative here is interesting to say the least. I'm used to saying it's the strongest assumption, the one that's hardest to prove that it holds and frankly the one that's probably violated most easily. It's the assumption that's easiest to build upon when teaching the regression theory. You don't need to worry about correlations and all the problems that they bring. You can easily apply CLT to get the asymptotic variances of parameters etc. You'll notice how easy it is to work with i.i.d. errors the moment you start talking about time series. All of a sudden you realize that the assumptions that are somewhat reasonable in cross-sectional analysis, do not hold in time series usually. Even in the cross-sectional analysis you don't really need independence and get get away with weakened assumption, e.g. see Gauss-Markov theorem. To me semantically it's better to use a word conservative when referencing the weakest assumption, i.e. the one that should hold true in most situations, not the strongest one, that holds rarely if ever. I would call i.i.d. assumption the most liberal, because it also liberates you from the necessity to deal with all the correlation and dependence issues, it lets you build this wonderful ideal world of independent errors. I could also call IID assumption outlandish
Why i.i.d. is the most conservative distribution assumption I think the use of a word conservative here is interesting to say the least. I'm used to saying it's the strongest assumption, the one that's hardest to prove that it holds and frankly the one that's
51,752
Why i.i.d. is the most conservative distribution assumption
(I must note that I have not read the book, and thus may be misinterpreting this passage, or criticizing it inappropriately. That said...) I don't think this is correct. The standard regression assumption of i.i.d. errors does not pertain to the population from which the data were drawn. It is about the data that you are using to fit the model. That is, no one should ever believe that all human adult female heights that have ever existed or will exist, are independent of each other. They cannot be, due to shared genes, among other reasons. However, it is certainly possible, and often quite reasonable (IMHO) to imagine that the data in your sample are independent, e.g., when you have a set of young women all of whom are unrelated to each other. In that case, fitting a model that assumes the data are independent can be just fine. The import of the assumption of independence is not for the shape of the population distribution. While it can depend on the nature of the nonindependence and the estimation of the model, it is often the case that the mean estimates are unbiased, even when the data are not independent. Instead, the concern is typically about the appropriate width of a confidence interval around that estimated mean (or in a different framing, about the correctness of the p-value from a test of that parameter). As excerpted, the comment seems to be off-base to me. I am not primarily a Bayesian, and am considerably less sophisticated with Bayesian statistics, so it is possible there is some alternative Bayesian framing or interpretation of this such that the iid assumption is only about the whole possible (infinite) population, and specifically about its shape. But I am not aware of this.
Why i.i.d. is the most conservative distribution assumption
(I must note that I have not read the book, and thus may be misinterpreting this passage, or criticizing it inappropriately. That said...) I don't think this is correct. The standard regression a
Why i.i.d. is the most conservative distribution assumption (I must note that I have not read the book, and thus may be misinterpreting this passage, or criticizing it inappropriately. That said...) I don't think this is correct. The standard regression assumption of i.i.d. errors does not pertain to the population from which the data were drawn. It is about the data that you are using to fit the model. That is, no one should ever believe that all human adult female heights that have ever existed or will exist, are independent of each other. They cannot be, due to shared genes, among other reasons. However, it is certainly possible, and often quite reasonable (IMHO) to imagine that the data in your sample are independent, e.g., when you have a set of young women all of whom are unrelated to each other. In that case, fitting a model that assumes the data are independent can be just fine. The import of the assumption of independence is not for the shape of the population distribution. While it can depend on the nature of the nonindependence and the estimation of the model, it is often the case that the mean estimates are unbiased, even when the data are not independent. Instead, the concern is typically about the appropriate width of a confidence interval around that estimated mean (or in a different framing, about the correctness of the p-value from a test of that parameter). As excerpted, the comment seems to be off-base to me. I am not primarily a Bayesian, and am considerably less sophisticated with Bayesian statistics, so it is possible there is some alternative Bayesian framing or interpretation of this such that the iid assumption is only about the whole possible (infinite) population, and specifically about its shape. But I am not aware of this.
Why i.i.d. is the most conservative distribution assumption (I must note that I have not read the book, and thus may be misinterpreting this passage, or criticizing it inappropriately. That said...) I don't think this is correct. The standard regression a
51,753
Why i.i.d. is the most conservative distribution assumption
One possible interpretation of "conservative" is in the context of statistical testing. Conservative tests reject the null hypothesis less often than they should. An example of a conservative test is Fisher's Exact Test: the actual false positive error rate is less than the nominal size of the test due to the discrete distribution of the odds ratio under permutations of the table values. In linear regression, we often test the hypothesis that one or more of the regression parameters is 0. If the errors are not in fact IID, the optimal solution due to the Gauss-Markov Theorem, as @Aksakal mentioned, is inverse-variance-weighted least squares. Naively using unweighted least squares does not bias estimates when the mean model is true. The lack of weighting does, however, affect the level of the test of the regression parameters. The test with unweighted least squares may be conservative or anticonservative. If there are unmeasured sources of dependence or heteroscedasticity in the observations, the robust sandwich variance estimator from generalized estimating equations produces standard errors that are consistent and yields tests of the correct level. I would argue that if we are discussing violations of model assumptions, the GEE should be mentioned. The GEE has nothing to do with being conservative, but rather with producing correct inference.
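A small base-R illustration of the level problem (the setup is invented): heteroscedastic errors, true slope 0, and the naive unweighted least squares test rejects far too often, i.e. it is anticonservative in this particular configuration.
set.seed(1)
pvals <- replicate(5000, {
  x <- rnorm(100)
  y <- rnorm(100, mean = 0, sd = exp(x))     # variance grows with x; true slope is 0
  summary(lm(y ~ x))$coefficients["x", "Pr(>|t|)"]
})
mean(pvals < 0.05)     # well above the nominal 0.05 here
A heteroscedasticity-consistent (sandwich) variance, e.g. something like sandwich::vcovHC, or a GEE fit would bring the rejection rate back near 0.05.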
Why i.i.d. is the most conservative distribution assumption
One possible interpretation of "conservative" is in the context of statistical testing. Conservative tests reject the null hypothesis less often than they should. An example of a conservative test is
Why i.i.d. is the most conservative distribution assumption One possible interpretation of "conservative" is in the context of statistical testing. Conservative tests reject the null hypothesis less often than they should. An example of a conservative test is the Fisher's Exact Test: the actual false positive error rate is less than the nominal size of the test due to the discrete distribution of the odds ratio under permutations of the table values. In linear regression, we often test the hypothesis that one or more of the regression parameters is 0. If the errors are not in fact IID, the optimal solution due to the Gauss Markov Theorem, as @Aksakal mentioned, is the inverse variance weighted least squares. Naively using unweighted least squares does not bias estimates when the mean model is true. The lack of weighting does affect the level of the test of the regression parameters. The test with unweighted least squares may be conservative or anticonservative. If there are unmeasured sources of dependence or heteroscedasticity in observations, the robust sandwich variance estimator from generalized estimating equations produces standard errors that are consistent and produce tests of the correct level. I would argue that if we are discussing violations of model assumptions, the GEE should be mentioned. The GEE has nothing to do with being conservative, but producing correct inference.
Why i.i.d. is the most conservative distribution assumption One possible interpretation of "conservative" is in the context of statistical testing. Conservative tests reject the null hypothesis less often than they should. An example of a conservative test is
51,754
Is there any limitation for the number of categories in Multinomial logistic regression?
There are a number of ways to think about this question. Probably the first consideration is resource dependent, boiling down to where you are doing your analysis: laptop or massively parallel platform? You should ask how much RAM or memory is accessible. RAM impacts the ability of your software to, e.g., invert a cross-product matrix or converge to a solution with a closed form algorithm. Quite obviously, the bigger the platform and the more RAM available, the bigger the matrix that can be handled. Next, there are software considerations, for instance, R is notoriously unable to deal with too much categorical information, whether in the target or feature variables. Other packages such as SAS have much greater inherent capacity. Next, there is the issue of the approach or theory underpinning the analysis -- e.g., frequentist or Bayesian? Inference or prediction and classification? Statistics or machine learning and computer science? Precise or approximate? Historically, frequentists have thrown up their hands in defeat when, e.g., a cross-product matrix became too big to invert. A good example of this is probit models with more than 3 levels to the target. Using classic, closed form statistical models, there isn't enough CPU in 10,000 years to solve this. Bayesians, on the other hand, were the first to identify workarounds to this problem. Let me illustrate this with a couple of examples. Fifteen years ago Steenburgh and Ainslie wrote a paper Massively Categorical Variables: Revealing the Information in Zip Codes offering a hierarchical Bayesian solution to this problem. In your case, you have a multinomial target -- their approach is readily generalizable from features to targets. That the Ainslie method (and many Bayesian models) generates a boatload of parameters is not insuperable. It just may not be the most efficient solution. Next, in Gelman and Hill's book Data Analysis Using Regression and Multilevel/Hierarchical Models, they propose the possibility of Bayesian analysis with a multilevel categorical variable some of whose levels contain only a single observation, i.e., very sparse information. The key to this counter-intuitive notion is that the information for that single observation across multiple draws will be summarized by the posterior. Note that these are Bayesian approximating heuristic workarounds. Today, even frequentists have access to such heuristic, approximating workarounds, e.g., bootstrapping, jackknifing, Breiman's random forests, and computer science driven algorithms such as "divide and conquer" (D&C) or "bags of little jackknifes" (BLJ) for massive data mining; see, e.g., Wang et al.'s paper, A Survey of Statistical Methods and Computing for Big Data. These approaches don't render Bayesian solutions obsolete (previously the only game in town for, e.g., inverting huge cross-product matrices), they just make Bayesian approaches unnecessary. Software considerations arise again with these resampling methods insofar as I've heard that R doesn't easily permit the large, even massive, amount of iterative looping required but, then, I'm not an R guy so I could easily be wrong. Questions concerning the accuracy of these approximating workarounds have been addressed by Minge and Chen in a paper titled A Split-and-Conquer Approach for Analysis of Extraordinarily Large Data. They concluded that there was no significant loss of precision with these approaches relative to analyses based on "full information," fixed data.
Finally, the consideration of inference in the face of massive information has had many implications for statistical analysis in the 21st c. To mention only one, classic 20th c statistical analyses and approaches have to be adapted and updated to reflect today's realities. Hastie and Efron's new book Computer Age Statistical Inference contains a multitude of suggestions wrt deriving inferences from large amounts of information. In particular, I like their discussion in chapter 10 of bootstrapping and jackknifing versus, e.g., classic approaches to Taylor expansion.
Is there any limitation for the number of categories in Multinomial logistic regression?
There are a number of ways to think about this question. Probably the first consideration is resource dependent, boiling down to where you are doing your analysis: laptop or massively parallel platfor
Is there any limitation for the number of categories in Multinomial logistic regression? There are a number of ways to think about this question. Probably the first consideration is resource dependent, boiling down to where you are doing your analysis: laptop or massively parallel platform? You should ask how much RAM or memory is accessible. RAM impacts the ability of your software to, e.g., invert a cross-product matrix or converge to a solution with a closed form algorithm. Quite obviously, the bigger the platform and the more RAM available, the bigger the matrix that can be handled. Next, there are software considerations, for instance, R is notoriously unable to deal with too much categorical information, whether in the target or feature variables. Other packages such as SAS have much greater inherent capacity. Next, there is the issue of the approach or theory underpinning the analysis -- e.g., frequentist or Bayesian? Inference or prediction and classification? Statistics or machine learning and computer science? Precise or approximate? Historically, frequentists have thrown up their hands in defeat when, e.g., a cross-product matrix became too big to invert. A good example of this are probit models with more than 3 levels to the target. Using classic, closed form statistical models, there isn't enough CPU in 10,000 years to solve this. Bayesians, on the other hand, were the first to identify workarounds to this problem. Let me illustrate this with a couple of examples. Fifteen years ago Steenburgh and Ainslie wrote a paper Massively Categorical Variables: Revealing the Information in Zip Codes offering a hierarchical bayesian solution to this problem. In your case, you have a multinomial target -- their approach is readily generalizable from features to targets. That the Ainslie method (and many Bayesian models) generates a boatload of parameters is not insuperable. It just may not be the most efficient solution. Next, in Gelman and Hill's book Data Analysis Using Regression and Multilevel/Hierarchical Models, they propose the possibility of Bayesian analysis with a multilevel categorical variable some of which contain only a single observation, i.e., very sparse information. The key to this counter-intuitive notion is that the information for that single observation across multiple draws will be summarized by the posterior. Note that these are Bayesian approximating heuristic workarounds. Today, even frequentists have access to such heuristic, approximating workarounds, e.g., bootstrapping, jacknifing, Breiman's random forests, computer science driven algorithms such as "divide and conquer" (D&C) or "bags of little jacknifes" (BLJ) for massive data mining, see, e.g., Wang, et al's paper, A Survey of Statistical Methods and Computing for Big Data Survey of Statistical Methods and Computing for Big Data*. These approaches don't render Bayesian solutions obsolete (previously the only game in town for, e.g., inverting huge cross-products matrices), they just make Bayesian approaches unnecessary. Software considerations arise again with these resampling methods insofar as I've heard that R doesn't easily permit the large, even massive, numbers of iterative looping required but, then, I'm not an R guy so I could easily be wrong. Questions concerning the accuracy of these approximating workarounds have been addressed by Minge and Chen in a paper titled A Split-and-Conquer Approach for Analysis of Extraordinarily Large Data. 
They concluded that there was no significant loss of precision with these approaches relative to analyses based on "full information," fixed data. Finally, the consideration of inference in the face of massive information has had many implications for statistical analysis in the 21st c. To mention only one, classic 20th c statistical analyses and approaches have to be adapted and updated to reflect today's realities. Hastie and Efron's new book Computer Age Statistical Inference contains a multitude of suggestions wrt deriving inferences from large amounts of information. In particular, I like their discussion in chapter 10 of bootstrapping and jacknifing versus, e.g., classic approaches to Taylor expansion.
Is there any limitation for the number of categories in Multinomial logistic regression? There are a number of ways to think about this question. Probably the first consideration is resource dependent, boiling down to where you are doing your analysis: laptop or massively parallel platfor
51,755
Is there any limitation for the number of categories in Multinomial logistic regression?
There is no hard limit for the number of categories in multinomial logistic regression, but the number of parameters will grow very fast, so you will need a lot of data with many categories. Also, the interpretation of results will be difficult with many categories. This question is very broad; you are probably better off asking a more focused question about your real problem!
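For a sense of how fast the parameters grow: a standard multinomial logit with K outcome categories and p predictors has (K - 1) * (p + 1) coefficients.
n_params <- function(K, p) (K - 1) * (p + 1)
n_params(K = 3, p = 10)     #  22 coefficients
n_params(K = 20, p = 10)    # 209 coefficients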
Is there any limitation for the number of categories in Multinomial logistic regression?
There is no hard limit for the number of categories in multinomial logistic regression, but the number of parameters will grow very fast, so you will need a lot of data with many categories. Also, th
Is there any limitation for the number of categories in Multinomial logistic regression? There is no hard limit for the number of categories in multinomial logistic regression, but the number of parameters will grow very fast, so you will need a lot of data with many categories. Also, the interpretation of results will be difficult with many categories. This question is very broad, you are probably better off asking a more focused question about your real problem!
Is there any limitation for the number of categories in Multinomial logistic regression? There is no hard limit for the number of categories in multinomial logistic regression, but the number of parameters will grow very fast, so you will need a lot of data with many categories. Also, th
51,756
Is there any limitation for the number of categories in Multinomial logistic regression?
As estimating a single probability requires 96 observations to achieve a margin of error of +/- 0.10, one could say that if you have at least 96 observations in the smallest cell formed by cross-classifying $Y$ with any of the categorical $X$s the number of categories for $Y$ is not statistically problematic. See Section 10.2.3 of my Regression Modeling Strategies course notes available here
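Where the 96 comes from (the usual worst-case, p = 0.5, normal-approximation calculation):
qnorm(0.975)^2 * 0.25 / 0.10^2     # about 96
1.96 * sqrt(0.25 / 96)             # margin of error with n = 96, about 0.10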
Is there any limitation for the number of categories in Multinomial logistic regression?
As estimating a single probability requires 96 observations to achieve a margin of error of +/- 0.10, one could say that if you have at least 96 observations in the smallest cell formed by cross-class
Is there any limitation for the number of categories in Multinomial logistic regression? As estimating a single probability requires 96 observations to achieve a margin of error of +/- 0.10, one could say that if you have at least 96 observations in the smallest cell formed by cross-classifying $Y$ with any of the categorical $X$s the number of categories for $Y$ is not statistically problematic. See Section 10.2.3 of my Regression Modeling Strategies course notes available here
Is there any limitation for the number of categories in Multinomial logistic regression? As estimating a single probability requires 96 observations to achieve a margin of error of +/- 0.10, one could say that if you have at least 96 observations in the smallest cell formed by cross-class
51,757
Skew and Standard Deviation
No. Simplest example (easily set up in Excel): take a sample and a rescaled copy of it. Skewness is equal in both cases, while the standard deviations are highly different. Therefore it is not appropriate to say in general that high skew means large std. Skewness is a direction in which a sample "leans" and does not depend on scaling, whereas standard deviation highly depends on scaling. Further examples could be found where the standard deviation is the same, yet the skewness is different.
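The same point can be checked numerically; here is a small R version (the sample values are arbitrary, and the skewness measure is a simple moment-based one):
skew <- function(v) mean((v - mean(v))^3) / sd(v)^3
x <- c(1, 2, 2, 3, 10)       # a right-skewed sample
y <- 100 * x                 # same shape, much larger spread
c(skew(x), skew(y))          # skewness the same (up to floating point)
c(sd(x), sd(y))              # standard deviations differ by a factor of 100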
Skew and Standard Deviation
No. Most simple example in excel: Skewness is equal in both cases, standard deviation highly different. Therefore it is not appropriate to say in general that high skew means large std. Skewness is
Skew and Standard Deviation No. Most simple example in excel: Skewness is equal in both cases, standard deviation highly different. Therefore it is not appropriate to say in general that high skew means large std. Skewness is a direction in which a sample "leans" and does not depend on scaling, whereas standard deviation highly depends on scaling Further examples could be found where standard deviation is same, yet skewness is different.
Skew and Standard Deviation No. Most simple example in excel: Skewness is equal in both cases, standard deviation highly different. Therefore it is not appropriate to say in general that high skew means large std. Skewness is
51,758
Skew and Standard Deviation
Skewness (by any reasonable measure - and there are a number that are used) is a consequence of the shape of the distribution, not of its location or scale - you could add(/subtract) a constant to the random variable or multiply(/divide) by a constant, and it would not change how skewed the distribution was. For example, the most common measure of skewness, the one based on the third moment, can be written as the third moment of a standardized variable: $E[(\frac{X-\mu_X}{\sigma_X})^3]$. If you rescale the variable ($Y = kX$) -- which changes the standard deviation by a factor of $k$ - you scale both the numerator and denominator in that fraction by the same scale, which then cancels: $\frac{Y-\mu_Y}{\sigma_Y} = \frac{kX-k\mu_X}{k\sigma_X} = \frac{X-\mu_X}{\sigma_X}$, and so the expectation of its cube is unchanged. Similar cancellation occurs with other measures of skewness. So "high skew" can go with small standard deviation just as easily as with large standard deviation, since standard deviation is a measure of scale but skewness isn't impacted by scale at all. $X$ and $Y$ above have identical skewness, but the standard deviation of the distribution of $Y$ might be vastly larger or smaller than $X$, depending on the choice of $k$.
Skew and Standard Deviation
Skewness (by any reasonable measure - and there are a number that are used) is a consequence of the shape of the distribution, not of its location or scale - you could add(/subtract) a constant to the
Skew and Standard Deviation Skewness (by any reasonable measure - and there are a number that are used) is a consequence of the shape of the distribution, not of its location or scale - you could add(/subtract) a constant to the random variable or multiply(/divide) by a constant, and it would not change how skewed the distribution was. For example, the most common measure of skewness, the one based on the third moment, can be written as the third moment of a standardized variable: $E[(\frac{X-\mu_X}{\sigma_X})^3]$. If you rescale the variable ($Y = kX$) -- which changes the standard deviation by a factor of $k$ - you scale both the numerator and denominator in that fraction by the same scale, which then cancels: $\frac{Y-\mu_Y}{\sigma_Y} = \frac{kX-k\mu_X}{k\sigma_X} = \frac{X-\mu_X}{\sigma_X}$, and so the expectation of its cube is unchanged. Similar cancellation occurs with other measures of skewness. So "high skew" can go with small standard deviation just as easily as with large standard deviation, since standard deviation is a measure of scale but skewness isn't impacted by scale at all. $X$ and $Y$ above have identical skewness, but the standard deviation of the distribution of $Y$ might be vastly larger or smaller than $X$, depending on the choice of $k$.
Skew and Standard Deviation Skewness (by any reasonable measure - and there are a number that are used) is a consequence of the shape of the distribution, not of its location or scale - you could add(/subtract) a constant to the
51,759
Are these approaches Bayesian, Frequentist or both?
You're making a familiar category error here. The methods you are talking about all correspond to some logical or algebraic structure or other, which is 'just math'. Similarly, they each have a particular implementation, which we might describe as 'just programming'. Neither math nor programming is well described as Frequentist, Bayesian, or anything like that. Put another way, the same mathematical structure can be Frequentist, Bayesian, some third thing, or a mix depending on how its elements are used for inference (which is something you do, not something the model has or does). Two examples: OLS is an algorithm (minimize the sum of squared errors) and can be motivated as a Frequentist method when it is chosen for the behavior of its output in repeated samples, or as a Bayesian tool for getting a key parameter of the posterior distribution under certain assumptions about the prior distribution of parameters, or as neither when it is motivated as an interesting application of singular value decomposition, or some other linear algebraic tool. Brown, Cai and Dasgupta, 2001 show that a Jeffreys prior on a binomial proportion - something at least notionally Bayesian - behaves very well in repeated samples, and can be justified in a Frequentist way (Section 4.3), that is, in a way that makes no mention of beliefs about the value of the parameter. An analogy: think about dessert wines. A dessert wine is a wine that is typically drunk with dessert, not a wine made in a special way or from a special kind of grape. It's true that dessert wines tend to have some characteristic properties, e.g. they're sweeter, but those features are not what makes them dessert wines; that's the dessert.
Are these approaches Bayesian, Frequentist or both?
You're making a familiar category error here. The methods you are talking about all correspond to some logical or algebraic structure or other, which is 'just math'. Similarly, they each have a part
Are these approaches Bayesian, Frequentist or both? You're making a familiar category error here. The methods you are talking about all correspond to some logical or algebraic structure or other, which is 'just math'. Similarly, they each have a particular implementation, which we might describe as 'just programming'. Neither math nor programming are well described as Frequentist, Bayesian, or anything like that. Put another way, the same mathematical structure can be Frequentist, Bayesian, some third thing, or a mix depending on how its elements are used for inference (which is something you do, not something the model has or does). Two examples OLS is an algorithm (minimize the sum of squared errors) and can be motivated as a Frequentist method when it is chosen for the behavior of its output in repeated samples, or a Bayesian tool for getting a key parameter of the posterior distribution under certain assumptions about the prior distribution of parameters, or as neither when it is motivated as an interesting application of singular value decomposition, or some other linear algebraic tool. Brown, Cai and Dasgupta, 2001 show that a Jeffrey's prior on a binomial proportion - something at least notionally Bayesian - behaves very well in repeated samples, and can be justified in a Frequentist way (Section 4.3), that is, in a way that makes no mention of beliefs about the value of the parameter. An analogy Think about dessert wines. A dessert wine is a wine that typically drunk with dessert, not a wine made in a special way or from a special kind of grape. It's true that dessert wines tend to have some characteristic properties, e.g. they're sweeter, but those features are not what makes them dessert wines; that's the dessert.
Are these approaches Bayesian, Frequentist or both? You're making a familiar category error here. The methods you are talking about all correspond to some logical or algebraic structure or other, which is 'just math'. Similarly, they each have a part
51,760
How can I prove that the median is a nonlinear function?
Median is homogeneous of degree 1 Let $a$ be a real scalar and $\mathbf{x}$ be a vector in $\mathcal{R}^n$. Let us number the elements of $\mathbf{x}$ in order so that $x_1 \leq x_2 \leq \ldots \leq x_n $. Let $x_m = f(\mathbf{x})$ be the median of $\mathbf{x}$. Observe that for $a \geq 0$, the elements of the vector $a\mathbf{x}$ have the same order: $$ a x_1 \leq \ldots \leq a x_m \leq \ldots \leq a x_n $$ And for $a < 0$ the order is reversed: $$ a x_n \leq \ldots \leq a x_m \leq \ldots \leq a x_1 $$ In either case $ax_m$ is in the middle, so it is the median. Median violates additivity Counterexample to additivity: $$\mathbf{x} = \left[\begin{array}{c}2 \\ 4 \\ 6 \end{array} \right] \quad \quad \mathbf{y} = \left[\begin{array}{c} 0 \\ -4 \\ 4 \end{array}\right] \quad \quad \mathbf{x} + \mathbf{y} = \left[\begin{array}{c} 2 \\ 0 \\ 10 \end{array}\right] $$ $$ f(\mathbf{x}) = 4\quad \quad f(\mathbf{y}) = 0 \quad \quad f(\mathbf{x} + \mathbf{y}) = 2$$ Pedantic subpoint: if all the elements of both $\mathbf{x}$ and $\mathbf{y}$ are in ascending (or descending) order, then the median does satisfy additivity.
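Both properties are easy to check in R on the vectors from the counterexample:
x <- c(2, 4, 6); y <- c(0, -4, 4)
median(-3 * x) == -3 * median(x)              # homogeneity holds (TRUE)
c(median(x + y), median(x) + median(y))       # 2 versus 4: additivity fails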
How can I prove that the median is a nonlinear function?
Median is homogenous of degree 1 Let $a$ be a real scalar and $\mathbf{x}$ be a vector in $\mathcal{R}^n$. Let us number the elements of $\mathbf{x}$ in order so that $x_1 \leq x_2 \leq \ldots \leq x_
How can I prove that the median is a nonlinear function? Median is homogenous of degree 1 Let $a$ be a real scalar and $\mathbf{x}$ be a vector in $\mathcal{R}^n$. Let us number the elements of $\mathbf{x}$ in order so that $x_1 \leq x_2 \leq \ldots \leq x_n $. Let $x_m = f(\mathbf{x})$ be the median of $\mathbf{x}$. Observe that for $a \geq 0$, the elements of the vector $a\mathbf{x}$ have the same order: $$ a x_1 \leq \ldots \leq a x_m \leq \ldots \leq a x_n $$ And for $a < 0$ the order is reversed: $$ a x_n \leq \ldots \leq a x_m \leq \ldots \leq a x_1 $$ In either case $ax_m$ is in the middle, it's the median. Median violates additivity Counterexample to additivity: $$\mathbf{x} = \left[\begin{array}{c}2 \\ 4 \\ 6 \end{array} \right] \quad \quad \mathbf{y} = \left[\begin{array}{c} 0 \\ -4 \\ 4 \end{array}\right] \quad \quad \mathbf{x} + \mathbf{y} = \left[\begin{array}{c} 2 \\ 0 \\ 10 \end{array}\right] $$ $$ f(\mathbf{x}) = 4\quad \quad f(\mathbf{y}) = 0 \quad \quad f(\mathbf{x} + \mathbf{y}) = 2$$ Pedantic subpoint: if all the elements of both $\mathbf{x}$ and $\mathbf{y}$ are in ascending (or descending) order, then the median does satisfy additivity.
How can I prove that the median is a nonlinear function? Median is homogenous of degree 1 Let $a$ be a real scalar and $\mathbf{x}$ be a vector in $\mathcal{R}^n$. Let us number the elements of $\mathbf{x}$ in order so that $x_1 \leq x_2 \leq \ldots \leq x_
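A quick R check of both claims in the answer above (illustrative, not part of the original answer; the numbers reuse its counterexample):
x <- c(2, 4, 6); y <- c(0, -4, 4)
median(3 * x) == 3 * median(x)          # homogeneity holds: TRUE
median(-2 * x) == -2 * median(x)        # also TRUE for a negative scalar
c(median(x + y), median(x) + median(y)) # additivity fails: 2 versus 4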
51,761
How can I prove that the median is a nonlinear function?
The OP is correct -- median is not linear since additivity does not hold, but homogeneity of degree $1$ holds. Additivity does not hold We show by counterexample that the additivity does not hold: let $x=(0,1,2)$ and $y=(2,0,0)$ and let $f$ be the mapping from a vector to the median of its elements. Now, $f(x)=1,~f(y)=0$, so, \begin{equation} f(x+y) = f((2,1,2)) = 2 \neq 1 = f(x) + f(y). \end{equation} Homogeneity of degree 1 holds The homogeneity of degree $1$ indeed holds as postulated in the answer: multiplying by a scalar does not change the order (except by reversing it if the scalar is negative, but this does not change which element is in the middle), so if the median of $x$ is $x_i$, then also the median of $a\,x$ is $a\,x_i$. For an even number of elements, the reasoning works if the median is defined to be the average of the two middle elements.
How can I prove that the median is a nonlinear function?
The OP is correct -- median is not linear since additivity does not hold, but homogeneity of degree $1$ holds. Additivity does not hold We show by counterexample that the additivity does not hold: let
How can I prove that the median is a nonlinear function? The OP is correct -- median is not linear since additivity does not hold, but homogeneity of degree $1$ holds. Additivity does not hold We show by counterexample that the additivity does not hold: let $x=(0,1,2)$ and $y=(2,0,0)$ and let $f$ be the mapping from a vector to the median of its elements. Now, $f(x)=1,~f(y)=0$, so, \begin{equation} f(x+y) = f((2,1,2)) = 2 \neq 1 = f(x) + f(y). \end{equation} Homogeneity of degree 1 holds The homogeneity of degree $1$ indeed holds as postulated in the answer:multiplying by a scalar does not change the order (except by reversing it if the scalar is negative, but this does not change who is in the middle), so if the median of $x$ is $x_i$, then also the median of $a\,x$ is $a\,x_i$. For even number of elements, the reasoning works if the median is defined to be the average of the two middle elements.
How can I prove that the median is a nonlinear function? The OP is correct -- median is not linear since additivity does not hold, but homogeneity of degree $1$ holds. Additivity does not hold We show by counterexample that the additivity does not hold: let
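The counterexample in the answer above can be verified in a couple of lines of R (illustrative, not from the original answer):
x <- c(0, 1, 2); y <- c(2, 0, 0)
c(median(x + y), median(x) + median(y))   # 2 versus 1, so additivity fails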
51,762
How can I prove that the median is a nonlinear function?
First, median minimizes the absolute error (Hurley, 2009) and $\mathrm{abs}$ is not a linear function. As for $\alpha f(x) = f(\alpha x)$, there can be two interpretations depending on whether you ask about the case where $\alpha$ is a scalar, or a vector. Let's consider both cases, but first recall that we calculate the median by sorting the values and taking the middle one. If $\alpha$ is a scalar (as implied by the definition), then $\alpha f(x) = f(\alpha x)$ holds since multiplying $x$ by a scalar does not change the ordering. If $\alpha$ is a vector, then taken at face value it doesn't make much sense since the median is a function that maps $\mathbb{R}^n \to \mathbb{R}$, as noticed by @JuhoKokkala. In that case we can only re-phrase your question as comparing multiplying vectors and then sorting, versus sorting and then multiplying. In that case, the ordering of elements in $x\alpha$ may be different from the ordering of $x$. So in the two cases you may take a different $x_i$ multiplied by $\alpha_i$ as your median. You can easily produce numerical examples to convince yourself about that:
set.seed(123)
N <- 51
x <- rnorm(N)
a <- runif(N)
(x*a)[order(x*a)][(N+1)/2]           # multiply and then sort
(x[order(x)]*a[order(x)])[(N+1)/2]   # sort and then multiply
It is similar for $f(x+y) = f(x) + f(y)$, where $x$ and $y$ are vectors, since the sum of the elements may change the ordering (again, this is very simple to check numerically). Hurley, W. J. (2009) An Inductive Approach to Calculate the MLE for the Double Exponential Distribution. Journal of Modern Applied Statistical Methods: 8(2), Article 25.
How can I prove that the median is a nonlinear function?
First, median minimizes the absolute error (Hurley, 2009) and $\mathrm{abs}$ is not a linear function. As about $\alpha f(x) = f(\alpha x)$, there can be two interpretations depending on if you ask ab
How can I prove that the median is a nonlinear function? First, median minimizes the absolute error (Hurley, 2009) and $\mathrm{abs}$ is not a linear function. As about $\alpha f(x) = f(\alpha x)$, there can be two interpretations depending on if you ask about case where $\alpha$ is a scalar, or a vector. Let's consider both cases, but first recall that we calculate median by sorting the values and taking the middle one. If $\alpha$ is a scalar (as implied by the definition), then $\alpha f(x) = f(\alpha x)$ holds since multiplying $x$ by a scalar does not change the ordering. If $\alpha$ is a vector, then taken at face value it doesn't have much sense since median is a function that maps $\mathbb{R}^n \to \mathbb{R}$, as noticed by @JuhoKokkala. In such case we only can re-phrase your question to comparing multiplying vectors and then sorting, versus sorting and then multiplying. In such case, the ordering of elements in $x\alpha$ may be different then ordering of $x$. So in both cases you may take different $x_i$ multiplied by $\alpha_i$ as your median. You can easily produce numerical examples to convince yourself about that: set.seed(123) N <- 51 x <- rnorm(N) a <- runif(N) (x*a)[order(x*a)][(N+1)/2] # multiply and then sort (x[order(x)]*a[order(x)])[(N+1)/2] # sort and then multiply It is similar for $f(xy) = f(x) + f(y)$, where $x$ and $y$ are vectors, since sum of the elements may change the ordering (again, this is very simple to check numerically). Hurley, W. J. (2009) An Inductive Approach to Calculate the MLE for the Double Exponential Distribution. Journal of Modern Applied Statistical Methods: 8(2), Article 25.
How can I prove that the median is a nonlinear function? First, median minimizes the absolute error (Hurley, 2009) and $\mathrm{abs}$ is not a linear function. As about $\alpha f(x) = f(\alpha x)$, there can be two interpretations depending on if you ask ab
51,763
What's wrong to fit periodic data with polynomials?
In just the dataset you've provided, the only real downside to using polynomials over the Fourier basis is the issue of discontinuity at $T = 0$ and $T = 24$. As you stated, you can add constraints to fix this up if you really wished to. But more typically for this type of data, we observe several cycles. In this case, it would be the number of days of data. The whole point is to take advantage of the fact that 3pm on Monday has very similar features to 3pm on Tuesday. This relation would not show up at all in the "vanilla" polynomial expansion, and so you would not be borrowing at all from different cycles for estimation. For similar reasons, you would have almost no hope of getting a good extrapolation, even just 1 day out, whereas even from a very basic Fourier expansion, you could say "I think at 3pm tomorrow, it will probably be about as warm as it usually is at 3pm".
What's wrong to fit periodic data with polynomials?
In just the dataset you've provided, the only real downside to using polynomials over the Fourier basis is the issue of discontinuity at $T = 0$ and $T = 24$. As you stated, you can add constraints to
What's wrong to fit periodic data with polynomials? In just the dataset you've provided, the only real downside to using polynomials over the Fourier basis is the issue of discontinuity at $T = 0$ and $T = 24$. As you stated, you can add constraints to fix this up if you really wished to. But more typically for this type of data, we observe several cycles. In this case, it would be the number of days of data. The whole point is to take advantage of the fact that 3pm on Monday has very similar features to 3pm on Tuesday. This relation would not show up at all in the "vanilla" polynomial expansion, and so you would not be borrowing at all from different cycles for estimation. For similar reasons, you would have almost no hope of getting a good extrapolation, even just 1 day out, where as even from a very basic Fourier expansion, you could say "I think at 3pm tomorrow, it will probably be the same heat as it usually is at 3pm".
What's wrong to fit periodic data with polynomials? In just the dataset you've provided, the only real downside to using polynomials over the Fourier basis is the issue of discontinuity at $T = 0$ and $T = 24$. As you stated, you can add constraints to
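A minimal R sketch of the point about borrowing across cycles and extrapolating one day out (not from the original answer; the hourly "temperature" series is simulated):
set.seed(42)
hours <- 0:71                                        # three days of hourly data
temp  <- 10 - 5 * cos(2 * pi * hours / 24) + rnorm(72, sd = 0.5)
d     <- data.frame(hours, temp)
fit_fourier <- lm(temp ~ sin(2 * pi * hours / 24) + cos(2 * pi * hours / 24), data = d)
fit_poly    <- lm(temp ~ poly(hours, 3), data = d)
new_d <- data.frame(hours = 72:95)                   # day four, unseen
predict(fit_fourier, new_d)                          # repeats the daily pattern
predict(fit_poly, new_d)                             # drifts off, no periodicity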
51,764
What's wrong to fit periodic data with polynomials?
What's wrong is that to exactly capture even the simplest periodic process, such as a pure (monochromatic) sine wave, you need an infinite number of polynomial terms. Look at the Taylor expansion formula. Intuitively you want to fit a function that (in some sense) looks like your underlying process. This way you'll have the fewest parameters to estimate. Say you have a round hole, and need to fit a cork into it. If your cork is square it's harder to fit it well than if the cork were round.
What's wrong to fit periodic data with polynomials?
The wrong is that to exactly capture the simplest periodic process such as a monochrome sine wave you need infinite number of polynomial terms. Look at Taylor expansion formula. Intuitively you want t
What's wrong to fit periodic data with polynomials? The wrong is that to exactly capture the simplest periodic process such as a monochrome sine wave you need infinite number of polynomial terms. Look at Taylor expansion formula. Intuitively you want to fit function that (in some sense) looks like your underlying process. This way you'll have the fewest number of parameters to estimate. Say you have a round hole, and need to fit a cork into it. If your cork is square it's harder to fit it well than if the cork were round.
What's wrong to fit periodic data with polynomials? The wrong is that to exactly capture the simplest periodic process such as a monochrome sine wave you need infinite number of polynomial terms. Look at Taylor expansion formula. Intuitively you want t
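To see the Taylor-expansion point numerically (illustrative, not from the original answer): even a 7th-order polynomial approximation of sin(x) is only good near 0 and blows up a couple of periods out.
x <- seq(0, 4 * pi, length.out = 200)
taylor7 <- x - x^3 / 6 + x^5 / 120 - x^7 / 5040   # first four terms of the sin(x) series
max(abs(sin(x) - taylor7)[x <= pi])               # modest error on the first half-period
max(abs(sin(x) - taylor7)[x >  3 * pi])           # enormous error two periods out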
51,765
What's wrong to fit periodic data with polynomials?
The discontinuity at $T=0$ and $T=24$ is a problem. In fact, the plot is misleading because it only plots $T$ up to $21$. If we change the plot code to plot(d$t,d$temp,type='b',xlim=c(0,24),ylim=c(-7.5,1.5)) we can see the 3rd-order polynomial is not a good fit: at time $0$ the temperature is $-1.7$, but the next day at time $0$ the fitted temperature is $-7.04$! In addition, it is very natural to allow the function input $T$ to be any real number, instead of being limited to 0 to 23. For example, $T=25$ just means 1:00 on the next day and $T=-1$ means 23:00 on the previous day. Using a polynomial basis we need to map $T$ back into 0 to 23 to generate the output. But with a Fourier basis expansion, the periodicity is built in.
What's wrong to fit periodic data with polynomials?
Discontinuity at $T=0$ and $T=24$ is problem. In fact, the plot is misleading because it only plots $T$ up to $21$. If we change the plot code as: plot(d$t,d$temp,type='b',xlim=c(0,24),ylim=c(-7.5,1.5
What's wrong to fit periodic data with polynomials? Discontinuity at $T=0$ and $T=24$ is problem. In fact, the plot is misleading because it only plots $T$ up to $21$. If we change the plot code as: plot(d$t,d$temp,type='b',xlim=c(0,24),ylim=c(-7.5,1.5)) We can see 3rd order polynomial is not a good fit: At time $0$, the temperature is $-1.7$, but next day at time $0$ the temperature at $-7.04$ !: In addition, it is very nature to have function input $T$ as any real number, instead of limited to 0 to 23. For example, when $T=25$ it just means 1:00 in next day and $T=-1$ means 23:00 in previous day. Using polynomial basis we need to make to inside 0 to 23 to generate output. But with Fourier basis expansion, everything is build in.
What's wrong to fit periodic data with polynomials? Discontinuity at $T=0$ and $T=24$ is problem. In fact, the plot is misleading because it only plots $T$ up to $21$. If we change the plot code as: plot(d$t,d$temp,type='b',xlim=c(0,24),ylim=c(-7.5,1.5
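An illustrative R sketch of the endpoint mismatch described above (not the original poster's data, which isn't shown; a simulated daily pattern stands in for d$temp):
set.seed(1)
t    <- 0:23
temp <- -4 + 3 * cos(2 * pi * (t - 14) / 24) + rnorm(24, sd = 0.3)
fit_poly <- lm(temp ~ poly(t, 3))
predict(fit_poly, data.frame(t = c(0, 24)))     # the two ends disagree, yet both are midnight
fit_fourier <- lm(temp ~ sin(2 * pi * t / 24) + cos(2 * pi * t / 24))
predict(fit_fourier, data.frame(t = c(0, 24)))  # identical by construction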
51,766
What's wrong to fit periodic data with polynomials?
If you fit data from a limited time interval, say one day, using splines, this does not take into account the values of the preceding and following intervals. You find this effect even when fitting non-periodic data with a polynomial: the fitted values "over-react" in the first and last spans of the interval. One way to smooth this is to repeat the data of the interval to be fitted three times, make the polynomial fit over the long interval, and use as a "better" fit only the fitted data of the middle interval. But certainly, the use of a periodic "basis function" is the best approach if you know that the effect considered is periodic. Polynomials with a limited number of coefficients cannot fit a periodic signal, as Aksakal already said.
What's wrong to fit periodic data with polynomials?
If You fit data from a limited timeinterval, say one day, using splines, this does not take into account, the values of the preceding and following intervals. You find this effect even with fitting no
What's wrong to fit periodic data with polynomials? If You fit data from a limited timeinterval, say one day, using splines, this does not take into account, the values of the preceding and following intervals. You find this effect even with fitting non periodic data with a polynom: the fitted data are "over reacting " to the last and first span interval. One way to smoothen this, is to repeat the data of the interval to be fitted three times, make the polynomial fit over the long interval and use as a "better" fit only the fitted data of the middle interval. But certainly, the use of a periodic as a "basis function" is the best approach if You know, that the effect considered is periodic. Polynoms with a limited number of coefficients can not fit a periodic signal.As already said by Aksakal
What's wrong to fit periodic data with polynomials? If You fit data from a limited timeinterval, say one day, using splines, this does not take into account, the values of the preceding and following intervals. You find this effect even with fitting no
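A rough R sketch of the "repeat the interval three times" trick described above (not from the original answer; y is a simulated daily series and the polynomial degree is an arbitrary choice):
set.seed(2)
t <- 0:23
y <- sin(2 * pi * t / 24) + rnorm(24, sd = 0.2)
t3 <- c(t - 24, t, t + 24)        # three copies of the day, shifted
y3 <- rep(y, 3)
fit3 <- lm(y3 ~ poly(t3, 9))
keep <- t3 >= 0 & t3 <= 23        # keep only the middle copy
smoothed <- fitted(fit3)[keep]    # less end-of-interval distortion than fitting one copy alone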
51,767
Does Bayesian Statistics have no concept of statistical hypothesis testing?
I am surprised at the textbook statement as testing hypotheses and comparing models are a most fundamental feature of Bayesian analysis, with a wide variety of possible resolutions that exposes the multiple and sometimes incompatible facets of the problem. (excerpt from our book, Bayesian essentials with R, Chapter 2, p.29:) For the null and alternative hypotheses $$ H_0:\ \theta \in \Theta_0\text{ and }H_a:\ \theta \in \Theta_1 $$ and under the loss function $$ L_{a_0,a_1} (\theta ,d) = \begin{cases} a_0 & \hbox{if}\quad \theta \in \Theta_0\quad\hbox{and}\quad d=0\,, \cr a_1 & \hbox{if}\quad \theta \in \Theta_1\quad\hbox{and}\quad d=1\,, \cr 0 & \hbox{otherwise.} \cr \end{cases} $$ where $d=0$ denotes the rejection of $H_0$, the Bayes optimal decision associated with a prior $\pi$ is given by $$ \delta^\pi(x) = \begin{cases} 1 & \hbox{if}\quad \mathbb{P}^\pi(\theta \in \Theta_0|x)>\dfrac{a_1}{a_0+a_1}, \cr 0 & \hbox{otherwise.}\cr \end{cases} $$ For this class of losses, the null hypothesis $H_0$ is rejected when the posterior probability of $H_0$ is too small, the acceptance level $a_1/(a_0+a_1)$ being determined by the choice of $(a_0,a_1)$. The Bayesian paradigm allows for testing and model comparison, to a larger extent than other statistical paradigms, I would say. What may sound at first like a drawback is that all aspects of this decision have to be spelled out, from the specification of the sampling models under the null and under the alternative hypotheses (which explains why I cannot spell out a strict distinction between hypothesis testing and model choice), to the construction of prior distributions on the parameters of both sampling models, to prior weights on the prior likelihood of both hypotheses, to the impact of selecting the "wrong" model. Outside this Neyman-Pearson decision framework, there are further Bayesian resolutions of the testing issue, like the substitute Bayes factor$$\dfrac{\mathbb{P}^\pi(\theta \in \Theta_0|x)}{\mathbb{P}^\pi(\theta \in \Theta_1|x)}\Big/\dfrac{\mathbb{P}^\pi(\theta \in \Theta_0)}{\mathbb{P}^\pi(\theta \in \Theta_1)}$$that avoids selecting the prior weights but which is not free from foundational drawbacks; information criteria like BIC, DIC, WAIC and Aitkin's integrated likelihood; score functions and related information approaches; posterior predictive assessments like the posterior $p$-value of Gelman et al. and others; Evans' relative belief; divergence criteria like ABC$\mu$; model averaging; embedding models like our mixture representation.
Does Bayesian Statistics have no concept of statistical hypothesis testing?
I am surprised at the textbook statement as testing hypotheses and comparing models are a most fundamental feature of Bayesian analysis, with a wide variety of possible resolutions that exposes the mu
Does Bayesian Statistics have no concept of statistical hypothesis testing? I am surprised at the textbook statement as testing hypotheses and comparing models are a most fundamental feature of Bayesian analysis, with a wide variety of possible resolutions that exposes the multiple and sometimes incompatible facets of the problem. (excerpt from our book, Bayesian essentials with R, Chapter 2, p.29:) For the null and alternative hypotheses $$ H_0:\ \theta \in > \Theta_0\text{ and }H_a:\ \theta \in \Theta_1 $$ and under the loss function $$ L_{a_0,a_1} (\theta ,d) = \begin{cases} a_0 & \hbox{if}\quad \theta \in \Theta_0\quad\hbox{and}\quad d=0\,, \cr a_1 & \hbox{if}\quad \theta \in \Theta_1\quad\hbox{and}\quad d=1\,, \cr 0 & \hbox{otherwise.} \cr \end{cases} $$ where $d=0$ denotes the rejection of $H_0$, the Bayes optimal decision associated with a prior $\pi$ is given by $$ \delta^\pi(x) = \begin{cases} 1 & \hbox{if}\quad \mathbb{P}^\pi(\theta \in \Theta_0|x)>a_1\big/{a_0+a_1}, \cr 0 & \hbox{otherwise.}\cr \end{cases} $$ For this class of losses, the null hypothesis $H_0$ is rejected when the posterior probability of $H_0$ is too small, the acceptance level $a_1/(a_0+a_1)$ being determined by the choice of $(a_0,a_1)$. The Bayesian paradigm allows for testing and model comparison, to a larger extent than other statistical paradigms, I would say. What may sound at first like a drawback is that all aspects of this decision have to be spelled out, from the specification of the sampling models under the null and under the alternative hypotheses (which explains why I cannot spell out a strict distinction between hypothesis testing and model choice), to the construction of prior distributions on the parameters of both sampling models, to prior weights on the prior likelihood of both hypotheses, to the impact of selecting the "wrong "model". Outside this Neyman-Pearson decision framework, there are further Bayesian resolutions of the testing issue, like the substitute Bayes factor$$\dfrac{\mathbb{P}^\pi(\theta \in \Theta_0|x)}{\mathbb{P}^\pi(\theta \in \Theta_1|x)}\Big/\dfrac{\mathbb{P}^\pi(\theta \in \Theta_0)}{\mathbb{P}^\pi(\theta \in \Theta_1)}$$that avoids selecting the prior weights but which are not free from foundational drawbacks; information criteria like BIC, DIC, WAIC and Aitkin's integrated likelihood; score functions and related information approaches; posterior predictive assessments like the posterior $p$-value of Gelman et al. and others; Evans' relative belief; divergence criteria like ABC$\mu$; model averaging; embedding models like our mixture representation.
Does Bayesian Statistics have no concept of statistical hypothesis testing? I am surprised at the textbook statement as testing hypotheses and comparing models are a most fundamental feature of Bayesian analysis, with a wide variety of possible resolutions that exposes the mu
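The decision rule quoted above is straightforward to compute once a posterior probability is in hand; a small illustrative R helper (not part of the book excerpt, and the numeric inputs are made up):
bayes_test <- function(post_H0, a0, a1) {
  # accept H0 (return 1) when P(theta in Theta_0 | x) exceeds a1 / (a0 + a1)
  as.integer(post_H0 > a1 / (a0 + a1))
}
bayes_test(post_H0 = 0.35, a0 = 1, a1 = 1)   # symmetric losses, threshold 0.5: returns 0, reject H0
bayes_test(post_H0 = 0.35, a0 = 9, a1 = 1)   # wrongly rejecting H0 is costly, threshold 0.1: returns 1, accept H0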
51,768
Does Bayesian Statistics have no concept of statistical hypothesis testing?
No. Bayesian statistics has a concept of hypothesis testing. From Wagenmakers and Grünwald: A Bayesian hypothesis test (Jeffreys, 1961) proceeds by contrasting two quantities: the probability of the observed data $D$ given $H_{0}$ (i.e., $\theta = \frac{1}{2}$) and the probability of the observed data $D$ given $H_{1}$ (i.e., $\theta \ne \frac{1}{2}$). The ratio $B_{01} = p(D|H_{0})/p(D|H_{1})$ is the Bayes factor, and it quantifies the evidence that the data provide for $H_{0}$ vis-à-vis $H_{1}$. Eric-Jan Wagenmakers and Peter Grünwald. 2006. A Bayesian Perspective on Hypothesis Testing: A Comment on Killeen (2005). Psychological Science. 17(7):641–642.
Does Bayesian Statistics have no concept of statistical hypothesis testing?
No. Bayesian statistics has a concept of hypothesis testing. From Wagenmakers and Grünweld: A Bayesian hypothesis test (Jeffreys, 1961) proceeds by contrasting two quantities: the probability of the
Does Bayesian Statistics have no concept of statistical hypothesis testing? No. Bayesian statistics has a concept of hypothesis testing. From Wagenmakers and Grünweld: A Bayesian hypothesis test (Jeffreys, 1961) proceeds by contrasting two quantities: the probability of the observed data $D$ given $H_{0}$ (i.e., $\theta = \frac{1}{2}$) and the probability of the observed data $D$ given $H_{1}$ (i.e., $\theta \ne \frac{1}{2}$). The ratio $B_{01} = p(D|H_{0})/p(D|H_{1})$ is the Bayes factor, and it quantifies the evidence that the data provide for $H_{0}$ vis-à-vis $H_{1}$. Eric-Jan Wagenmaker and Peter Grünweld. 2006. A Bayesian Perspective on Hypothesis Testing A Comment on Killeen (2005). Psychological Science. 17(7):641–642.
Does Bayesian Statistics have no concept of statistical hypothesis testing? No. Bayesian statistics has a concept of hypothesis testing. From Wagenmakers and Grünweld: A Bayesian hypothesis test (Jeffreys, 1961) proceeds by contrasting two quantities: the probability of the
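As a concrete illustration of the quoted Bayes factor in R (my own sketch, not from the cited paper): for binomial data with k successes in n trials, and taking a uniform prior on theta under H1 (an assumption of this sketch), B01 is just a ratio of two marginal likelihoods.
k <- 70; n <- 100
m0 <- dbinom(k, n, 0.5)                                      # P(D | H0: theta = 1/2)
m1 <- integrate(function(th) dbinom(k, n, th), 0, 1)$value   # P(D | H1), uniform prior on theta
B01 <- m0 / m1
B01   # values below 1 favour H1, above 1 favour H0; here roughly 0.002, so the data favour H1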
51,769
Regress IV on DV, or DV on IV?
Traditionally speaking, one regresses the dependent variable (the Y, the outcome) on the independent variable (the X, the input). However, this is such an egregious abuse of statistical language that many disciplines have abandoned such verbiage altogether. The mistake is that "dependence" (in the proper statistical sense) is symmetric. If A depends on B, then B depends on A. We only call the "X" (input variable) "independent" because it is considered fixed or given as part of an experimental design, or is representative of a population of interest. Regression models estimate the conditional mean of the outcome as a function of one or more predictors. To this end, the mean of one variable conditional on another may be a flat response although the variables are indeed dependent (suppose, for instance, that the variance of Y changes with X). To belabor this point, one nice way of writing what a regression model estimates is the following: $$E[Y|X] = \beta_0 + \beta_1 X$$ Better options would be calling the "dependent" variable (Y): an outcome, a response, an output, and calling the "independent" variable (X): an input, a predictor, a regressor, a covariate, or an exposure.
Regress IV on DV, or DV on IV?
Traditionally speaking, one regresses the dependent variable (the Y, the outcome) on the independent variable (the X, the input). However, this is such an egregious abuse of statistical language, many
Regress IV on DV, or DV on IV? Traditionally speaking, one regresses the dependent variable (the Y, the outcome) on the independent variable (the X, the input). However, this is such an egregious abuse of statistical language, many disciplines have abandoned such verbiage altogether. The mistake is that "dependence" (in the proper statistical sense) is commutative. If A depends on B, then B depends on A. We only call the "X" (input variable) "independent" because it is considered fixed or given as part of an experimental design, or is representative of a population of interest. Regression models estimate the conditional mean of the outcome as a function of one or more predictors. To this end, the mean of one variable conditional on another may be a flat response although the variables are indeed dependent (suppose the error of the Y varies according to X). To belabor this point, one nice way of writing what a regression model estimates is the following: $$E[Y|X] = \beta_0 + \beta_1 X$$ Better options would be calling the "dependent" variable (Y): an outcome, a response, an output, and calling the "independent" variable (X): an input, a predictor, a regressor, a covariate, or an exposure.
Regress IV on DV, or DV on IV? Traditionally speaking, one regresses the dependent variable (the Y, the outcome) on the independent variable (the X, the input). However, this is such an egregious abuse of statistical language, many
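A small R illustration (not from the original answer; simulated data) of why the direction matters in practice: regressing y on x and x on y estimate two different conditional means and give two different lines.
set.seed(3)
x <- rnorm(200)
y <- 2 + 0.5 * x + rnorm(200)
coef(lm(y ~ x))            # slope near 0.5: estimates E[Y | X]
1 / coef(lm(x ~ y))["y"]   # about 2.5 here, not 0.5: the two regressions are not simple inverses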
51,770
Regress IV on DV, or DV on IV?
The question prejudges another question, good terminology for the variables concerned. Let's take that first. DV is common, but not universal, shorthand for dependent variable. It's probably old-fashioned to remind readers that DV has often been used to mean Deo volente, God willing, but those who know that and also some statistics seem unlikely to confuse or conflate those two meanings. IV is common, but not universal, shorthand for independent variable. It's not at all old-fashioned to point out that among many economists, and some other social scientists, IV is now more likely to mean instrumental variable. This isn't much of a problem either: by the time people have learned about instrumental variables, they should be able to distinguish the two usages, at least in context. Let's take it for the moment that at least in many situations, a dependent variable can be identified on substantive grounds as whatever is the outcome, response or effect which we are in practice interested in explaining or predicting in some way. The independent variable is then the cause or factor used to predict the response. Most introductory courses and texts seem to use the notation $y$ for the dependent variable and $x$ for the independent variable; whenever there are many such independent variables, they can be distinguished by subscripts and/or denoted collectively as a matrix. That said, there are many examples in which predictive interest runs either way: if rainfall is a predictor of the response corn yield, so also we might use abundance of some taxon in reverse to predict temperature, rainfall or salinity of past environments. Yet again, and very much to the point, there are many problems in regression in which variables are on the same footing: properties of partners or siblings, two methods of measurement of ostensibly the same property, rainfall or temperature over time at two gauges or stations, and so on and so forth. The problem here has more symmetry, and the distinction between the two kinds of variables is likely to be arbitrary if not meaningless. As far as terminology is concerned, we note that many would prefer some other term rather than DV or dependent variable. This preference goes back at least some decades: John Wilder Tukey often used the term response in writings in the 1960s and 1970s, but teachers, writers and researchers often seem reluctant to abandon the terminology of dependent and independent. Grounds for objection include (a) many students and even researchers confuse the two words, which apparently seem so similar; (b) the words have other meanings, even in probability and statistics; (c) why use dull words when evocative alternatives are available? Similarly, many find terms such as predictor, covariate, explanatory variable more congenial for independent variable. There are many such terms, and some, particularly the first two, have other meanings in statistical science. (For example, covariate had a very specific meaning in analysis of covariance for some decades, but somehow has morphed into also acquiring a more general meaning as any kind of predictor. I conjecture that writings of John Nelder had some influence there.) Yet again, some have favoured terms invented for the purpose, such as regressand and regressor: to me, these are so unattractive that it is slightly distressing even to think about them. All this is lengthy preamble to the question given here (which to me is less interesting).
In short, the usual or standard regression is that of $y$ on $x$ (or $X$), but at least in the case of single predictors it can also make sense to talk about the regression of $x$ on $y$, with different assumptions on error structure. There can be equal interest in both regressions when variables are on the same footing (and also in other models for the joint relationship, which are left to another story).
Regress IV on DV, or DV on IV?
The question prejudges another question, good terminology for the variables concerned. Let's take that first. DV is common, but not universal, shorthand for dependent variable. It's probably old-fash
Regress IV on DV, or DV on IV? The question prejudges another question, good terminology for the variables concerned. Let's take that first. DV is common, but not universal, shorthand for dependent variable. It's probably old-fashioned to remind that DV has often been used to mean Deo volente, God willing, but those who know that and also some statistics seem unlikely to confuse or conflate those two meanings. IV is common, but not universal, shorthand for independent variable. It's not at all old-fashioned to point out that among many economists, and some other social scientists, IV is now more likely to mean instrumental variable. This isn't much of a problem either: by the time people have learned about instrumental variables, they should be able to distinguish the two usages, at least in context. Let's take it for the moment that at least in many situations, a dependent variable can be identified on substantive grounds as whatever is the outcome, response or effect which we are in practice interested in explaining or predicting in some way. The independent variable is then the cause or factor used to predict the response. Most introductory courses and texts seem to use notation $y$ for dependent variable and $x$ for independent variable; whenever there are many such independent variables, they can be distinguished by subscripts and/or denoted collectively as a matrix. That said, there are many examples in which predictive interest runs either way: if rainfall is a predictor of the response corn yield, so also we might use abundance of some taxon in reverse to predict temperature, rainfall or salinity of past environments. Yet again, and very much to the point, there are many problems in regression in which variables are on the same footing: properties of partners or siblings, two methods of measurement of ostensibly the same property, rainfall or temperature over time at two gauges or stations, and so on and so forth. The problem here has more symmetry and distinction between two kinds of variables is likely to be arbitrary if not meaningless. As far as terminology is concerned, we note that many would prefer some other term rather than DV or dependent variable. This preference goes back at least some decades: John Wilder Tukey often used the term response in writings in the 1960s and 1970s. but teachers, writers and researchers often seem reluctant to abandon the terminology of dependent and independent. Grounds for objection include (a) many students and even researchers confuse the two words, which apparently seem so similar; (b) the words have other meanings, even in probability and statistics; (c) why use dull words when evocative alternatives are available? Similarly, many find terms such as predictor, covariate, explanatory variable more congenial for independent variable. There are many such terms, and some, particularly the first two, have other meanings in statistical science. (For example, covariate had a very specific meaning in analysis of covariance for some decades, but somehow has morphed into also acquiring a more general meaning as any kind of predictor. I conjecture that writings of John Nelder had some influence there.) Yet again, some have favoured terms invented for the purpose, such as regressand and regressor: to me, these are so unattractive that it is slightly distressing even to think about them. All this is lengthy preamble to the question given here (which to me is less interesting). 
In short, the usual or standard regression is that of $y$ on $x$ (or $X$), but at least in the case of single predictors it can also make sense to talk about the regression of $x$ on $y$, with different assumptions on error structure. There can be equal interest in both regressions when variables are on the same footing (and also in other models for the joint relationship, which are left to another story).
Regress IV on DV, or DV on IV? The question prejudges another question, good terminology for the variables concerned. Let's take that first. DV is common, but not universal, shorthand for dependent variable. It's probably old-fash
51,771
How to interpret a VIF of 4?
When you estimate a regression equation $y=\beta_0 + \beta_1 x_1 + \beta_2 x_2 + \epsilon$, where in your case $y$ is the election result, $x_1$ is personal income and $x_2$ is presidential popularity, then, when the 'usual' assumptions are fulfilled, the estimated coefficients $\hat{\beta}_i$ are random variables (i.e. with another sample you will get other estimates) that have a normal distribution with mean the 'true' but unknown $\beta_i$ and a standard deviation that can be computed from the sample. i.e. $\hat{\beta}_i \sim N(\beta_i;\sigma_{\hat{\beta}_i})$. (I am assuming here that the standard deviation of the error term $\epsilon$ is known; the reasoning does not change when it is unknown, but then the normal distribution is no longer applicable and one should use the t-distribution). If one wants to test whether a coefficient $\beta_i$ is significant, then one performs the statistical hypothesis test $H_0: \beta_i=0$ versus $H_1: \beta_i \ne 0$. If $H_0$ is true, then the estimator $\hat{\beta}_i$ follows (see supra) a normal distribution with mean 0 and the standard deviation as supra, i.e. if $H_0$ is true then $\hat{\beta}_i \sim N(0;\sigma_{\hat{\beta}_i})$. The value for $\hat{\beta}_i$ that we compute from our sample comes from this distribution, therefore $\frac{|\hat{\beta}_i - 0|}{\sigma_{\hat{\beta}_i}}$ is an outcome of a standard normal random variable. So for a significance level $\alpha$ we will reject the $H_0$ whenever $\frac{|\hat{\beta}_i | }{\sigma_{\hat{\beta}_i}} \ge z_{\frac{\alpha}{2}}$ If there is correlation between your independent variables $x_1$ and $x_2$ then it can be shown that $\sigma_{\hat{\beta}_i}$ will be larger than when $x_1$ and $x_2$ are uncorrelated. Therefore, if $x_1$ and $x_2$ are correlated the null hypothesis will be 'more difficult to reject' because of the higher denominator. The Variance Inflating Factor (VIF) tells you how much higher the variance $\sigma^2_{\hat{\beta}_i}$ is when $x_1$ and $x_2$ are correlated compared to when they are uncorrelated. In your case, the variance is higher by a factor of four. High VIFs are a sign of multicollinearity. EDIT: added because of the question in your comment: If you want it in simple words, but less precise, then I think that you have some correlation between the two independent variables personal income ($x_1$) and the president's popularity ($x_2$) (but you also have as you say a limited sample). Can you compute their correlation? If $x_1$ and $x_2$ are strongly correlated then that means that they 'move together'. What linear regression tries to do is to 'assign' a change in the dependent variable $y$ to either $x_1$ or $x_2$. Obviously, if both 'move together' (because of high correlation) then it will be difficult to 'decide' which of the $x$'s is 'responsible' for the change in $y$ (because they both change). Therefore the estimates of the $\beta_i$ coefficients will be less precise. A VIF of four means that the variance (a measure of imprecision) of the estimated coefficients is four times higher because of correlation between the two independent variables. If your goal is to predict the election results, then multicollinearity is not necessarily a problem; if you want to analyse the impact of e.g. the personal income on the results, then there may be a problem because the estimates of the coefficients are imprecise (i.e. if you would estimate them with another sample then they may change a lot).
How to interpret a VIF of 4?
When you estimate a regression equation $y=\beta_0 + \beta_1 x_1 + \beta_2 x_2 + \epsilon$, where in your case $y$ is the election result, $x_1$ is personal income and $x_2$ is presidential popularity
How to interpret a VIF of 4? When you estimate a regression equation $y=\beta_0 + \beta_1 x_1 + \beta_2 x_2 + \epsilon$, where in your case $y$ is the election result, $x_1$ is personal income and $x_2$ is presidential popularity, then, when the 'usual' assumptions are fullfilled, the estimated coefficients $\hat{\beta}_i$ are random variables (i.e. with another sample you will get other estimates) that have a normal distribution with mean the 'true' but unknown $\beta_i$ and a standard deviation that can be computed from the sample. i.e. $\hat{\beta}_i \sim N(\beta_i;\sigma_{\hat{\beta}_i})$. (I am assuming here that the standard deviation of the error term $\epsilon$ is known, the reasoning does not change when it is unknown but the normal distribution is no longer applicable then, and one should use the t-distribution). If one wants to test whether a coefficient $\beta_i$ is significant, then one performs the statistical hypothesis test $H_0: \beta_i=0$ versus $H_1: \beta_i \ne 0$. If $H_0$ is true, then the estimator $\hat{\beta}_i$ follows (see supra) a normal distribution with mean 0 and the standard deviation as supra, i.e. if $H_0$ is true then $\hat{\beta}_i \sim N(0;\sigma_{\hat{\beta}_i})$. The value for $\bar{\beta}_i$ that we compute from our sample comes from this distribution, therefore $\frac{|\bar{\beta}_i - 0|}{\sigma_{\hat{\beta}_i}}$ is an outcome of a standard normal random variable. So for a significance level $\alpha$ we will reject the $H_0$ whenever $\frac{|\bar{\beta}_i | }{\sigma_{\hat{\beta}_i}} \ge z_{\frac{\alpha}{2}}$ If there is correlation between your independent variables $x_1$ and $x_2$ then it can be shown that $\sigma_{\hat{\beta}_i}$ will be larger than when $x_1$ and $x_2$ are uncorrelated. Therefore, if $x_1$ and $x_2$ are correlated the null hypothesis will be 'more difficult to reject' because of the higher denominator. The Variance Inflating Factor (VIF) tells you how much higher the variance $\sigma_{\hat{\beta}_i}$ are when $x_1$ and $x_2$ are correlated compared to when they are uncorrelated. In your case, the variance is higher by a factor four. High VIFs are a sign of multicollinearity. EDIT: added because of the question in your comment: If you want it in simple words, but less precise, then I think that you have some correlation between the two independent variables personal income ($x_1$) an president's popularity ($x_2$) (but you also have as you say a limited sample). Can you compute their correlation ? If $x_1$ and $x_2$ are strongly correlated then that means that they 'move together'. What linear regression tries to do is to ''assign'' a change in the dependent variable $y$ to either $x_1$ or $x_2$. Obviously, if both 'move together' (because of high correlation) then it will be difficult to 'decide' which of the $x$'s is 'responsible' for the change in $y$ (because they both change). Therefore the estimates of the $\beta_i$ coefficients will be less precise. A VIF of four means that the variance (a measure of imprecision) of the estimated coefficients is four times higher because of correlation between the two independent variables. If your goal is to predict the election results, then multicollinearity is not necessarily a problem, if you want to analyse the impact of e.g. the personal income on the results, then there may be a problem because the estimates of the coefficients are imprecise (i.e. if you would estimate them with another sample then they may change a lot).
How to interpret a VIF of 4? When you estimate a regression equation $y=\beta_0 + \beta_1 x_1 + \beta_2 x_2 + \epsilon$, where in your case $y$ is the election result, $x_1$ is personal income and $x_2$ is presidential popularity
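A small R sketch (not based on the original poster's data, which isn't shown) of how a VIF near 4 connects to the correlation between two predictors via VIF = 1/(1 - R^2):
set.seed(4)
n  <- 50
x1 <- rnorm(n)
x2 <- 0.87 * x1 + sqrt(1 - 0.87^2) * rnorm(n)   # built to be strongly correlated with x1
y  <- 1 + 0.5 * x1 + 0.5 * x2 + rnorm(n)
r2  <- summary(lm(x1 ~ x2))$r.squared           # R^2 from regressing one predictor on the other
vif <- 1 / (1 - r2)                             # VIF for x1 (same as for x2 with two predictors)
vif                                             # around 4 when cor(x1, x2) is about 0.87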
51,772
How to interpret a VIF of 4?
Variance inflation factors (VIF) measure how much the variance of the estimated regression coefficients is inflated as compared to when the predictor variables are not linearly related. They are used to quantify how much multicollinearity (correlation between predictors) exists in a regression analysis. Multicollinearity is dangerous because it can increase the variance of the regression coefficients. Below are guidelines for interpreting the VIF easily: VIF = 1 (not correlated); 1 < VIF < 5 (moderately correlated); VIF >= 5 (highly correlated). In your case, VIF = 4, so the variables used in the regression analysis are "moderately correlated". Hope this helps!
How to interpret a VIF of 4?
Variance inflation factors (VIF) measure how much the variance of the estimated regression coefficients are inflated as compared to when the predictor variables are not linearly related. It is used to
How to interpret a VIF of 4? Variance inflation factors (VIF) measure how much the variance of the estimated regression coefficients are inflated as compared to when the predictor variables are not linearly related. It is used to explain how much amount multicollinearity (correlation between predictors) exists in a regression analysis. Multicollinearity is dangerous because it can increase the variance of the regression coefficients. Below are the guidelines to interpret the VIF easily: VIF = 1 (Not correlated) 1 < VIF < 5 (Moderately correlated) VIF >=5 (Highly correlated) In your case, VIF 4 so the variables used in the regression analysis are "moderately correlated". Hope this helps!
How to interpret a VIF of 4? Variance inflation factors (VIF) measure how much the variance of the estimated regression coefficients are inflated as compared to when the predictor variables are not linearly related. It is used to
51,773
Returning unlikely results from a probability distribution
You can use the geometric distribution to reason about such events over repeated trials. Let $p$ be the probability of obtaining 18 or more ($p = P(X \geq 18)$). Then the number of trials $k$ until the first success (that is, until 18+ is generated) follows the geometric distribution: $$ P(k) = (1-p)^{k-1} p $$ Then you can use quantiles of that distribution to estimate how many trials you need to get this event in, say, 95% of runs. Or, just use the mean of that distribution ($\frac{1}{p}$) as the average number of trials until the rare event.
Returning unlikely results from a probability distribution
You can use geometric distribution to reason about such events after several trials. Let $p$ be the probability of obtaining 18 or more ($p = P(X > 18)$). Then the amount of trials $k$ until success (
Returning unlikely results from a probability distribution You can use geometric distribution to reason about such events after several trials. Let $p$ be the probability of obtaining 18 or more ($p = P(X > 18)$). Then the amount of trials $k$ until success (that is, until 18+ is generated) is distributed according to the geometric distribution: $$ k \sim (1-p)^{k-1} p $$ Then you can use quantiles of that distribution to estimate how many trials you need to get this event in, say, 95% of runs. Or, just use the mean of that distribution ($\frac{1}{p}$) as the average number of trials until a rare event.
Returning unlikely results from a probability distribution You can use geometric distribution to reason about such events after several trials. Let $p$ be the probability of obtaining 18 or more ($p = P(X > 18)$). Then the amount of trials $k$ until success (
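In R the mean and the 95% quantile mentioned above are one-liners (illustrative; p is whatever tail probability your distribution gives, and 3.167e-5 is the value quoted elsewhere in this thread):
p <- 3.167e-5          # probability of drawing 18 or more on a single trial
1 / p                  # mean number of trials until the first such draw
qgeom(0.95, p) + 1     # number of trials needed to see at least one such draw in 95% of runs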
51,774
Returning unlikely results from a probability distribution
If you want to learn about some general results (and general phenomena) for probabilities of "extreme" values, or values very far away from the mean, you should look into extreme value theory or large deviations theory. To get you started, some links here on CV: Extreme value theory for count data Extreme Value Theory and heavy (long) tailed distributions Large deviation theory exercise
Returning unlikely results from a probability distribution
If you want to learn about some general results (and general phenomena) for probabilities of "extreme" values, or values very far away from the mean, you should look into extreme value theory or large
Returning unlikely results from a probability distribution If you want to learn about some general results (and general phenomena) for probabilities of "extreme" values, or values very far away from the mean, you should look into extreme value theory or large deviations theory. To get you started, some links here on CV: Extreme value theory for count data Extreme Value Theory and heavy (long) tailed distributions Large deviation theory exercise
Returning unlikely results from a probability distribution If you want to learn about some general results (and general phenomena) for probabilities of "extreme" values, or values very far away from the mean, you should look into extreme value theory or large
51,775
Returning unlikely results from a probability distribution
You have a sequence of trials, with probability $p$ per trial of a "success" (that the simulation runs). a) If you want P(simulation is triggered at least once in $n$ trials) that's a calculation from a binomial distribution, but you can work the probability out from first principles by working out the probability of the complementary event (no successes) and subtracting from 1. In your case, $p = 3.167\times10^{-5}$, P(0 successes) = $(1-p)^n$, so P(at least 1 success) = $1-(1-p)^n$. b) If you want the distribution of the number of trials to the first success that's a geometric($p$); it has mean $1/p$. One useful rule of thumb: The probability that you observe success at least once when $n=1/p$ is $1-(1-p)^{1/p} =1-(1-1/n)^{n} \approx 1-1/e \approx 63.2\%$. So the expected number of trials to the first success is $1/p \approx 31574$ and the probability of at least one success in that many trials is about 63.2%. If $n$ is some multiple of $\frac{1}{p}$, $n = k\cdot\frac{1}{p}$, say, then it has an approximate probability of $1-\exp(-k)$ of seeing at least one success. So in 10000 trials you have about $1-\exp(-10000/31574)\approx 27\%$ chance of the simulation starting at least once. This approximation can also be seen directly by applying the Poisson approximation to the binomial. With $n$ trials with probability of success per trial $p$, P(0 successes) = $\binom{n}{0}p^0(1-p)^n = (1-p)^n$, and the Poisson approximation (with $\lambda=np$) is $\exp(-\lambda)\lambda^0/0!=\exp(-np)$. [This approximation is also related to the one mentioned at the end of the section on related distributions in the Wikipedia page on the geometric distribution (just above "See also"), which deals with the probability that it will take more than $a$ trials to start] -- On the terminology part of the question -- that the probability of at least one success increases as you add more trials might be called a number of things, but I don't know that it has any particularly widespread names.
Returning unlikely results from a probability distribution
You have a sequence of trials, with probability $p$ per trial of a "success" (that the simulation runs). a) If you want P(simulation is triggered at least once in $n$ trials) that's a calculation from
Returning unlikely results from a probability distribution You have a sequence of trials, with probability $p$ per trial of a "success" (that the simulation runs). a) If you want P(simulation is triggered at least once in $n$ trials) that's a calculation from a binomial distribution, but you can work the probability out from first principles by working out the probability of the complementary event (no successes) and subtracting from 1. In your case, $p = 3.167\times10^{-5}$, P(0 successes) = $(1-p)^n$, so P(at least 1 success) = $1-(1-p)^n$. b) If you want the distribution of the number of trials to the first success that's a geometric($p$); it has mean $1/p$. One useful rule of thumb: The probability that you observe success at least once when $n=1/p$ is $1-(1-p)^{1/p} =1-(1-1/n)^{n} \approx 1-1/e \approx 63.2\%$. So the expected number of trials to the first success is $1/p \approx 31574$ and the probability of at least one success in that many trials is about 63.2% If $n$ is some multiple of $\frac{1}{p}$, $n = k\cdot\frac{1}{p}$, say, then it has an approximate probability of $1-\exp(-k)$ of seeing at least one success. So in 10000 trials you have about $1-\exp(-10000/31574)\approx 27\%$ chance of the simulation starting at least once. This approximation can also be seen directly by applying the Poisson approximation to the binomial. With $n$ trials with probability of success per trial $p$, P(0 successes) = ${n}\choose{0}$$p^0(1-p)^n = (1-p)^n$, and the Poisson approximation (with $\lambda=np$) is $\exp(-\lambda)\lambda^0/0!=\exp(-np)$. [This approximation is also related to the one mentioned at the end of the section on related distributions in the Wikipedia page on the geometric distribution (just above "See also"), which deals with the probability that it will take more than $a$ trials to start] -- On the terminology part of the question -- that the probability of at least one success increases as you add more trials might be called a number of things, but I don't know that it has any particularly widespread names.
Returning unlikely results from a probability distribution You have a sequence of trials, with probability $p$ per trial of a "success" (that the simulation runs). a) If you want P(simulation is triggered at least once in $n$ trials) that's a calculation from
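The calculations in the answer above are one-liners in R (illustrative; it reuses the p quoted there):
p <- 3.167e-5
n <- 10000
1 - (1 - p)^n         # exact P(at least one success in n trials), about 0.27
1 - exp(-n * p)       # Poisson approximation, essentially the same number
1 - pbinom(0, n, p)   # the same quantity via the binomial CDF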
51,776
Gaussian is conjugate of Gaussian?
If we take your question to mean whether the product of the densities is Gaussian, then the answer is "yes" (P.A. Bromiley. Tina Memo No. 2003-003. "Products and Convolutions of Gaussian Probability Density Functions."). Take $f(x)$ and $g(x)$ to be two normal densities with means $\mu_f$ and $\mu_g$ and variances $\sigma_f^2$ and $\sigma_g^2$. The product is $$f(x)g(x)=\frac{1}{2\pi\sigma_f\sigma_g}\exp\left(-\frac{(x-\mu_f)^2}{2\sigma_f^2}-\frac{(x-\mu_g)^2}{2\sigma_g^2}\right).$$ Denote $\beta=\frac{(x-\mu_f)^2}{2\sigma_f^2}+\frac{(x-\mu_g)^2}{2\sigma_g^2}.$ Expand: $$\beta=\frac{(\sigma^2_f+\sigma^2_g)x^2-2(\mu_f\sigma^2_g+\mu_g\sigma^2_f)x+ \mu^2_f\sigma^2_g+\mu^2_g\sigma^2_f} {2\sigma^2_f\sigma^2_g}$$ Divide through by the coefficient of the leading power, $x^2:$ $$\beta=\frac{x^2-2\frac{\mu_f\sigma^2_g+\mu_g\sigma^2_f}{\sigma^2_f+\sigma^2_g}x+\frac{\mu_f^2\sigma^2_g+\mu_g^2\sigma^2_f}{\sigma^2_f+\sigma^2_g}}{2\frac{\sigma^2_f\sigma^2_g}{\sigma^2_f+\sigma^2_g}}$$ This is quadratic in $x$, so it's Gaussian. But if we continue with the algebra, we can make this even more explicit. Completing the square is a procedure that expresses a quadratic in $x$ in terms of the form $(x-b)^2$. We can apply this here. If $\epsilon$ is the term required to complete the square in $\beta$, $$\epsilon=\frac{\left(\frac{\mu_f\sigma^2_g+\mu_g\sigma^2_f}{\sigma_f^2+\sigma_g^2}\right)^2- \left(\frac{\mu_f\sigma_g^2+\mu_g\sigma_f^2}{\sigma_f^2+\sigma_g^2}\right)^2}{2\frac{\sigma^2_f\sigma^2_g}{\sigma^2_f+\sigma^2_g}}=0.$$ We add this to $\beta$. Its value is zero, so it does not change the value of $\beta$ for the same reason that $5+0=5$. However, it does allow us to re-express $\beta:$ $$\begin{align} \beta&=\frac{x^2- 2\frac{\mu_f\sigma^2_g+\mu_g\sigma^2_f}{\sigma^2_f+\sigma^2_g}x+ \left(\frac{\mu_f\sigma^2_g+\mu_g\sigma^2_f} {\sigma^2_f+\sigma^2_g}\right)^2} {2\frac{\sigma^2_f\sigma^2_g}{\sigma^2_f+\sigma^2_g}}+ \frac{\frac{\mu_f^2\sigma^2_g+\mu_g^2\sigma^2_f}{\sigma_f^2+\sigma_g^2}- \left(\frac{\mu_f\sigma_g^2+\mu_g\sigma_f^2}{\sigma_f^2+\sigma_g^2}\right)^2}{2\frac{\sigma^2_f\sigma^2_g}{\sigma^2_f+\sigma^2_g}}\\ &=\frac{\left(x- \frac{\mu_f\sigma_g^2+\mu_g\sigma_f^2} {\sigma_f^2+\sigma_g^2}\right)^2} {2\frac{\sigma^2_f\sigma_g^2} {\sigma_f^2+\sigma_g^2}}+ \frac{(\mu_f-\mu_g)^2}{2(\sigma_f^2+\sigma_g^2)}\\ &=\frac{(x-\mu_{fg})^2}{2\sigma^2_{fg}}+\frac{(\mu_f-\mu_g)^2}{2(\sigma_f^2+\sigma_g^2)} \end{align}$$ Where $$\mu_{fg}=\frac{\mu_f\sigma^2_g+\mu_g\sigma_f^2}{\sigma_f^2+\sigma_g^2}$$ and $$\sigma_{fg}^2=\frac{\sigma_f^2\sigma_g^2}{\sigma_f^2+\sigma_g^2}.$$ So $$f(x)g(x)=\frac{1}{2\pi\sigma_f\sigma_g}\exp\left(-\frac{(x-\mu_{fg})^2}{2\sigma^2_{fg}}\right)\exp\left(-\frac{(\mu_f-\mu_g)^2}{2(\sigma_f^2+\sigma_g^2)}\right)$$ This can be written as a scaled Gaussian PDF: $$f(x)g(x)=\frac{S_{fg}}{\sigma_{fg}\sqrt{2\pi}}\exp\left(-\frac{(x-\mu_{fg})^2}{2\sigma_{fg}^2}\right)$$ where $$ S_{fg}=\frac{1}{\sqrt{2\pi(\sigma_f^2+\sigma_g^2)}}\exp\left(-\frac{(\mu_f-\mu_g)^2}{2(\sigma_f^2+\sigma_g^2)}\right) $$ Note that the scaling constant is also a Gaussian function of the two means and two variances. The product of two Gaussian densities is thus a (scaled) Gaussian, and the Gaussian is a member of the exponential family. Therefore, the Gaussian is conjugate prior to itself (for the mean, with known variance) by the definition of conjugacy.
Gaussian is conjugate of Gaussian?
If we take your question to mean whether the product of the densities are Gaussian, then the answer is "yes" (P.A. Bromiley. Tina Memo No. 2003-003. "Products and Convolutions of Gaussian Probability
Gaussian is conjugate of Gaussian? If we take your question to mean whether the product of the densities are Gaussian, then the answer is "yes" (P.A. Bromiley. Tina Memo No. 2003-003. "Products and Convolutions of Gaussian Probability Density Functions."). Take $f(x)$ and $g(x)$ to be two normal densities with means $\mu_f$ and $\mu_g$ and variances $\sigma_f^2$ and $\sigma_g^2$. The product is $$f(x)g(x)=\frac{1}{2\pi\sigma_f\sigma_g}\exp\left(-\frac{(x-\mu_f)^2}{2\sigma_f^2}-\frac{(x-\mu_g)^2}{2\sigma_g^2}\right).$$ Denote $\beta=\frac{(x-\mu_f)^2}{2\sigma_f^2}+\frac{(x-\mu_g)^2}{2\sigma_g^2}.$ Expand: $$\beta=\frac{(\sigma^2_f+\sigma^2_g)x^2-2(\mu_f\sigma^2_g+\mu_g\sigma^2_f)x+ \mu^2_f\sigma^2_g+\mu^2_g\sigma^2_f} {2\sigma^2_f\sigma^2_g}$$ Divide through by the coefficient of the leading power, $x^2:$ $$\beta=\frac{x^2-2\frac{\mu_f\sigma^2_g+\mu_g\sigma^2_f}{\sigma^2_f+\sigma^2_g}x+\frac{\mu_f^2\sigma^2_g+\mu_g\sigma^2_f}{\sigma^2_f+\sigma^2_g}}{2\frac{\sigma^2_f\sigma^2_g}{\sigma^2_f+\sigma^2_g}}$$ This is quadratic in $x$, so it's Gaussian. But if we continue with the algebra, we can make this even more explicit. Completing the square is a procedure that expresses a quadratic in $x$ with the form $(x+b)^2$. We can apply this here. If $\epsilon$ is the term required to complete the square in $\beta$, $$\epsilon=\frac{\left(\frac{\mu_f\sigma^2+\mu_g\sigma^2_f}{\sigma_f^2+\sigma_g^2}\right)- \left(\frac{\mu_f\sigma_g^2+\mu_g\sigma_f^2}{\sigma_f^2+\sigma_g^2}\right)}{2\frac{\sigma^2_f\sigma^2_g}{\sigma^2_f+\sigma^2_g}}=0.$$ We add this to $\beta$. Its value is zero, so it does not change the value of $\beta$ for the same reason that $5+0=5$. However, it does allow us to re-express $\beta:$ $$\begin{align} \beta&=\frac{x^2- 2\frac{\mu_f\sigma^2_g+\mu_g\sigma^2_f}{\sigma^2_f+\sigma^2_g}x+ \left(\frac{\mu_f^2\sigma^2_g+\mu_g\sigma^2_f} {\sigma^2_f+\sigma^2_g}\right)^2} {2\frac{\sigma^2_f\sigma^2_g}{\sigma^2_f+\sigma^2_g}}+ \frac{\left(\frac{\mu_f\sigma^2+\mu_g\sigma^2_f}{\sigma_f^2+\sigma_g^2}\right)- \left(\frac{\mu_f\sigma_g^2+\mu_g\sigma_f^2}{\sigma_f^2+\sigma_g^2}\right)^2}{2\frac{\sigma^2_f\sigma^2_g}{\sigma^2_f+\sigma^2_g}}\\ &=\frac{\left(x- \frac{\mu_f\sigma_g^2+\mu_g\sigma_f^2} {\sigma_f^2+\sigma_g^2}\right)^2} {2\frac{\sigma^2_f\sigma_g^2} {\sigma_f^2+\sigma_g^2}}+ \frac{(\mu_f-\mu_g)^2}{2(\sigma_f^2+\sigma_g^2)}\\ &=\frac{(x-\mu_{fg})^2}{2\sigma^2_{fg}}+\frac{(\mu_f-\mu_g)^2}{2(\sigma_f^2+\sigma_g^2)} \end{align}$$ Where $$\mu_{fg}=\frac{\mu_f\sigma^2_g+\mu_g\sigma_f^2}{\sigma_f^2+\sigma_g^2}$$ and $$\sigma_{fg}^2=\frac{\sigma_f^2\sigma_g^2}{\sigma_f^2+\sigma_g^2}.$$ So $$f(x)g(s)=\frac{1}{2\pi\sigma_f\sigma_g}\exp\left(-\frac{(x-\mu_{fg})^2}{2\sigma^2_{fg}}\right)\exp\left(\frac{(\mu_f-\mu_g)^2}{2(\sigma_f^2+\sigma_g^2)}\right)$$ This can be written as a scaled Gaussian PDF: $$f(x)g(x)=\frac{S_{fg}}{\sigma_{fg}\sqrt{2\pi}}\exp\left(-\frac{(x-\mu_{fg})^2}{2\sigma_{fg}^2}\right)$$ where $$ S_{fg}=\frac{1}{\sqrt{2\pi(\sigma_f^2+\sigma_g^2)}}\exp\left(-\frac{(\mu_f-\mu_g)^2}{2(\sigma_f^2+\sigma_g^2)}\right) $$ Note that the scaling constant is also a Gaussian function of the two means and two variances. The product of two Gaussian densities is Gaussian, and the Gaussian is a member of the exponential family. Therefore, the Gaussian is conjugate prior to itself by the definition of conjugacy.
51,777
Gaussian is conjugate of Gaussian?
Because a comment of mine about obtaining a simple answer seems to have generated interest, here are the details.

Restatement of the question

The question asks whether the product of two Normal distribution functions determines a Normally distributed variable. In the notation of the question, these functions have the form $$f(x; \mu, \sigma) = C(\mu,\sigma)\exp\left(-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2\right)= C(\mu,\sigma)\exp\left(-\tau(\sigma)^2\left(x-\mu\right)^2\right)$$ where $C(\mu,\sigma)$ is the normalizing constant (a number determined by the need to make $f(x;\mu,\sigma)\,\mathrm{d}x$ integrate to unity) and $$\tau(\sigma) = \frac{1}{\sigma\sqrt{2}},$$ so that $\tau(\sigma)^2 = 1/(2\sigma^2)$. $2\tau(\sigma)^2$ (the reciprocal of the variance) is known as the precision.

Use of the logarithm to simplify the analysis

Because $f$ is always positive, we may work with its logarithm, which is a quadratic function of $x:$ $$\log f(x;\mu,\sigma) = A(\mu,\sigma) - \tau(\sigma)^2(x-\mu)^2\tag{*}$$ (where, evidently, $A(\mu,\sigma) = \log(C(\mu,\sigma))$). Notice that this expression describes all nondegenerate quadratic functions of $x$ with negative leading coefficient. That is, given any quadratic $Q(x) = -ax^2 + 2bx + c,$ we may find $\mu,$ $\sigma,$ and a constant (to play the role of $A(\mu,\sigma)$) in which $Q$ is expressed in the form $(*).$ Finding $\mu$ and $\sigma$ given $a,b,c$ is called completing the square. However, the details will not matter here, so I leave it to the interested reader to work out the formulas (which is a straightforward exercise in elementary algebra). Conversely (by definition of Normal distributions), any distribution with a log density function that can be written in this form (and is defined for all real numbers) is a Normal distribution. Let's memorialize this characterization by highlighting it:

Any density function $f$ that is (a) defined for all real numbers and (b) whose logarithm is a quadratic function of its argument describes a Normal distribution.

Solution

Recall that the logarithm of a product is the sum of the logarithms. Thus, the question comes down to this: Is the sum of two quadratic functions quadratic? Trivially, yes, because by the rules of polynomial addition, $$(-a_1 x^2 + 2b_1 x + c_1) + (-a_2 x^2 + 2b_2 x + c_2) = -(a_1+a_2)x^2 + 2(b_1+b_2)x + (c_1+c_2),$$ QED. We can go further, though: it is of interest to identify which Normal distribution occurs. For this, the notation of the question will be convenient. The preceding calculation is now written $$\begin{aligned} \left(A(\mu_x,\sigma_x)-\tau(\sigma_x)^2 (x-\mu_x)^2\right) + \left(A(\mu_y,\sigma_y)-\tau(\sigma_y)^2 (x-\mu_y)^2\right) \\ = A(\mu,\sigma)-\tau(\sigma)^2 (x-\mu)^2 \end{aligned}$$ where $\sigma^2$ is the variance of the result, $\mu$ is its mean, and $A(\mu,\sigma)$ is the logarithm of its normalizing constant. My point is that we can solve this problem by inspection. This is a math-speak term for saying you don't have to write anything down because you can pick out appropriate polynomial coefficients just by looking. To wit,

The coefficient of $x^2$ must be the sum of its coefficients on the left hand side, giving $$\tau(\sigma)^2 = \tau(\sigma_x)^2 + \tau(\sigma_y)^2.\tag{1}$$

The coefficient of $x$ must be the sum of its coefficients on the left hand side. 
This requires slightly greater perception: namely, recognizing that the coefficient of $x$ in the square $(x-\mu)^2$ is $-2\mu.$ Thus, $$2\tau(\sigma)^2 \mu = 2\tau(\sigma_x)^2\mu_x +2\tau(\sigma_y)^2\mu_y.$$ Here, then, is the second place where we actually have to do some algebra: solve this equation for $\mu.$ Again, the solution is by inspection (because the equation is so simple), and we can simplify it using $(1)$ above: $$\mu = \frac{2\tau(\sigma_x)^2\mu_x + 2\tau(\sigma_y)^2\mu_y}{2\tau(\sigma)^2} = \frac{2\tau(\sigma_x)^2\mu_x + 2\tau(\sigma_y)^2\mu_y}{2\tau(\sigma_x)^2 + 2\tau(\sigma_y)^2}.\tag{2}$$ The factors of $2\tau(\ )^2$ are the precisions of the distributions (q.v.), enabling us to characterize the results $(1)$ and $(2)$ in a simple, memorable fashion: When multiplying two Normal densities, precisions add (just double both sides of equation $(1)$) and the mean is the precision-weighted average of the means (equation $(2)$). The two highlighted equations--the first simplifying the sum of quadratics and the second solving a simple linear equation in one unknown--constitute the "two lines of algebra" I mentioned in my comment.
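These two rules are easy to check numerically. Below is a minimal Python sketch (not part of the original answer; it assumes only numpy and scipy, and the particular means and standard deviations are arbitrary): it renormalizes the pointwise product of two Normal densities — the product is only proportional to a density — and compares it with the Normal density predicted by equations $(1)$ and $(2)$.

import numpy as np
from scipy.stats import norm

mu_x, s_x = 1.0, 2.0    # first density: N(1, 2^2)
mu_y, s_y = 4.0, 0.5    # second density: N(4, 0.5^2)

# Precisions add; the mean is the precision-weighted average of the means.
p_x, p_y = 1 / s_x**2, 1 / s_y**2
p = p_x + p_y
mu = (p_x * mu_x + p_y * mu_y) / p

x = np.linspace(-10, 10, 200001)
prod = norm.pdf(x, mu_x, s_x) * norm.pdf(x, mu_y, s_y)
prod /= np.trapz(prod, x)           # renormalize the product to integrate to 1

print(np.allclose(prod, norm.pdf(x, mu, 1 / np.sqrt(p))))   # True

Within quadrature error, the renormalized product coincides with the predicted $\mathcal N\left(\mu, 1/(p_x+p_y)\right)$ density.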
Gaussian is conjugate of Gaussian?
Because a comment of mine about obtaining a simple answer seems to have generated interest, here are the details. Restatement of the question The question asks whether the product of two Normal distri
Gaussian is conjugate of Gaussian? Because a comment of mine about obtaining a simple answer seems to have generated interest, here are the details. Restatement of the question The question asks whether the product of two Normal distribution functions determines a Normally distributed variable. In the notation of the question, these functions have the form $$f(x; \mu, \sigma) = C(\mu,\sigma)\exp\left(-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2\right)= C(\mu,\sigma)\exp\left(-\tau(\sigma)^2\left(x-\mu\right)^2\right)$$ where $C(\mu,\sigma)$ is the normalizing constant (a number determined by the need to make $f(x;\mu,\sigma)\,\mathrm{d}x$ integrate to unity) and $$\tau(\sigma) = \frac{1}{2\sigma^2}.$$ $2\tau(\sigma)^2$ (the reciprocal of the variance) is known as the precision. Use of the logarithm to simplify the analysis Because $f$ is always positive, we may work with its logarithm, which is a quadratic function of $x:$ $$\log f(x;\mu,\sigma) = A(\mu,\sigma) - \tau(\sigma)^2(x-\mu)^2\tag{*}$$ (where, evidently, $A(\mu,\sigma) = \log(C(\mu,\sigma))$). Notice that this expression describes all nondegenerate quadratic functions of $x$ with negative leading coefficient. That is, given any quadratic $Q(x) = -ax^2 + 2bx + c,$ we may find $\mu,$ $\sigma,$ and a constant (to play the role of $A(\mu,\sigma)$) in which $Q$ is expressed in the form $(*).$ Finding $\mu$ and $\sigma$ given $a,b,c$ is called completing the square. However, the details will not matter here, so I leave it to the interested reader to work out the formulas (which is a straightforward exercise in elementary algebra). Conversely (by definition of Normal distributions), any distribution with a log density function that can be written in this form (and is defined for all real numbers) is a Normal distribution. Let's memorialize this characterization by highlighting it: Any density function $f$ that is (a) defined for all real numbers and (b) whose logarithm is a quadratic function of its argument describes a Normal distribution. Solution Recall that the logarithm of a product is the sum of the logarithms. Thus, the question comes down to this: Is the sum of two quadratic functions quadratic? Trivially, yes, because by the rules of polynomial addition, $$(-a_1 x^2 + 2b_1 x + c_1) + (-a_2 x^2 + 2b_2 x + c_2) = -(a_1+a_2)x^2 + 2(b_1+b_2)x + (c_1+c_2),$$ QED. We can go further, though: it is of interest to identify which Normal distribution occurs. For this, the notation of the question will be convenient. The preceding calculation is now written $$\begin{aligned} \left(A(\mu_x,\sigma_x)-\tau(\sigma_x)^2 (x-\mu_x)^2\right) + \left(A(\mu_y,\sigma_y)-\tau(\sigma_y)^2 (x-\mu_y)^2\right) \\ = A(\mu,\sigma)-\tau(\sigma)^2 (x-\mu)^2 \end{aligned}$$ where $\sigma^2$ is the variance of the result, $\mu$ is its mean, and $A(\mu,\sigma)$ is the logarithm of its normalizing constant. My point is that we can solve this problem by inspection. This is a math-speak term for saying you don't have to write anything down because you can pick out appropriate polynomial coefficients just by looking. To wit, The coefficient of $x^2$ must be the sum of its coefficients on the left hand side, giving $$\tau(\sigma)^2 = \tau(\sigma_x)^2 + \tau(\sigma_y)^2.\tag{1}$$ The coefficient of $x$ must be the sum of its coefficients on the left hand side. 
This requires slightly greater perception: namely, recognizing that the coefficient of $x$ in the square $(x-\mu)^2$ is $-2\mu.$ Thus, $$2\tau(\sigma)^2 \mu = 2\tau(\sigma_x)^2\mu_x +2\tau(\sigma_y)^2\mu_y.$$ Here, then, is the second place where we actually have to do some algebra: solve this equation for $\mu.$ Again, the solution is by inspection (because the equation is so simple), and we can simplify it using $(1)$ above: $$\mu = \frac{2\tau(\sigma_x)^2\mu_x + 2\tau(\sigma_y)^2\mu_y}{2\tau(\sigma)^2} = \frac{2\tau(\sigma_x)^2\mu_x + 2\tau(\sigma_y)^2\mu_y}{2\tau(\sigma_x)^2 + 2\tau(\sigma_y)^2}.\tag{2}$$ The factors of $2\tau(\ )^2$ are the precisions of the distributions (q.v.), enabling us to characterize the results $(1)$ and $(2)$ in a simple, memorable fashion: When multiplying two Normal densities, precisions add (just double both sides of equation $(1)$) and the mean is the precision-weighted average of the means (equation $(2)$). The two highlighted equations--the first simplifying the sum of quadratics and the second solving a simple linear equation in one unknown--constitute the "two lines of algebra" I mentioned in my comment.
Gaussian is conjugate of Gaussian? Because a comment of mine about obtaining a simple answer seems to have generated interest, here are the details. Restatement of the question The question asks whether the product of two Normal distri
51,778
What is the Fourier Transform of a Brownian motion?
As mentioned above, the first equation about which you were confused is a property of the Fourier transform. Here is a very explicit derivation. First define the Fourier transform over a finite interval $(a,b)$ as $$ \mathcal{F}\left\{f(t)\right\} = \int_{(a,b)} f(t) e^{-i \omega t}\ dt. $$ With suitable technical considerations (if you care: that $f(t)$ is in the Sobolev space $W^{1,1}(a,b)$, which means that both $f$ and its derivative $f'$ are absolutely integrable over $(a,b)$) we can use our usual integration by parts formula: $\int u\ dv = uv|_{a}^b - \int v\ du$, where we will set $u = e^{-i \omega t}$ and $dv = f'(t) dt$. Then we have $$ \begin{aligned} \int_a^b e^{-i \omega t}\frac{d}{dt} f(t)\ dt &= e^{-i\omega t}f(t)\Big|_a^b - \int_a^b f(t)\,(-i\omega) e^{-i \omega t}\ dt \\ &= \left(e^{-i \omega b}f(b) - e^{-i\omega a} f(a) \right) + i\omega \int_a^b f(t) e^{-i\omega t}\ dt\\ &= \left(e^{-i \omega b}f(b) - e^{-i\omega a} f(a) \right) + i \omega \mathcal{F}\left\{ f(t) \right\}. \end{aligned} $$ If your function $f$ is well-behaved enough (if you somehow define a sequence of functions $f_n$ that converge to a limiting function and agree with $f$ on $(a,b)$, and if you can find a function $g$ so that, for any sequence of intervals $I_n$ converging to $\mathbb{R}$ you have $|f_n| \leq g$ for all $n$) then the boundary term above vanishes and you have the desired result $\mathcal{F}\left\{f'\right\} = i\omega\,\mathcal{F}\left\{f\right\}$. All the formality seems a little contrived, and indeed from the point of view of a physicist it is a little bit---we just do this and don't worry about the formality. But if you want to get into the details, they are important. Suppose you were trying to do this with a random function: the Wiener process $\mathcal{W}(t)$, for example. All right, since $\mathcal{W}(t)$ is a.s. continuous everywhere then we can a.s. Riemann-Stieltjes integrate it as above. But its "derivative" is not well-defined in the traditional sense since $\mathcal{W}(t)$ is a.s. differentiable nowhere! Oops. Everything does end up working out, and so even though the derivation you gave above is, technically speaking, wrong, since $\frac{d}{dt}\mathcal{W}(t)$ is not defined, it is morally correct and, in physics, we use it all the time.
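As a sanity check of the boundary-term identity (for a smooth deterministic $f$, nothing stochastic), here is a small numerical sketch; it assumes only numpy, and the test function, interval and frequency are arbitrary choices:

import numpy as np

a, b, w = 0.0, 2.0, 5.0                  # interval (a, b) and frequency omega
t = np.linspace(a, b, 400001)
f  = np.exp(-t**2) * np.sin(3 * t)       # a smooth test function
df = np.exp(-t**2) * (3 * np.cos(3 * t) - 2 * t * np.sin(3 * t))   # its derivative

lhs = np.trapz(df * np.exp(-1j * w * t), t)
boundary = np.exp(-1j * w * b) * f[-1] - np.exp(-1j * w * a) * f[0]
rhs = boundary + 1j * w * np.trapz(f * np.exp(-1j * w * t), t)

print(abs(lhs - rhs))    # tiny (quadrature error only): the two sides agree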
51,779
What is the Fourier Transform of a Brownian motion?
Sorry, I know this thread is old, but I feel like some statements are not very clear and/or misleading, and also I would like to add a more mathematically sound perspective on the matter. As was already pointed out, a Brownian path is with probability 1 not differentiable anywhere, at least not in a usual sense. It is correct that $dW/dt$ is defined in a distributional sense, and as such its Fourier transform can be taken. But the result is a priori a distribution. The actual problem is the premise that "the spectral density of white noise is a constant". At that point, you're miles away from any rigorous definition. Moreover, although you can find this claim a lot, it's not correct, or at least it's misleading.

First of all, we should restrict ourselves to a Fourier decomposition on a finite interval, i.e. with discrete frequencies (because this is the only setting in which we can make meaningful statements). Let's say we're on $[0, 1]$. Then it's true that white noise has a flat spectral density in the sense that $$ E \left[ \left\vert \int_0^1 e^{-i \omega t} dW_t \right\vert ^2\right] = 1 , $$ which is independent of $\omega$. It is not true, however, that $\left\vert \int_0^1 e^{-i \omega t} dW_t \right\vert ^2$ is a constant. Rather, it follows a $\chi^2$ distribution. However, $\int \phi(t) dW_t$ is a stochastic integral; it cannot be understood as $\int \phi(t) \frac{dW_t}{dt}dt$. Now, stochastic calculus tells us that, for any deterministic function $\phi(t)$, $\int_0^1 \phi(t)\,dW_t$ is a random variable with normal distribution with mean zero and variance $\sigma^2 = \int_0^1 \vert \phi(t) \vert^2\, dt$. How does this help? Well, we can write the Fourier transform as $$ \int_0^1 e^{-i \omega t} W_t dt = \int_0^1 e^{-i \omega t} \left( \int_0^t dW_s \right) dt = \int_0^1 \left( \int_s^1 e^{- i \omega t} d t \right) dW_s = \int_0^1 \phi(s)\, dW_s ,$$ with $$\phi(s) = \frac{i}{ \omega} (1 - e^{- i \omega s}).$$ And this function has $$\int_0^1 \vert \phi (s) \vert^2\, ds = 2 / \omega^2 .$$ This means that the $\omega$-th Fourier coefficient, $\omega \in 2 \pi \mathbb Z$, has distribution $\mathcal N (0, 2 / \omega^2 )$. In fact, you can simulate a Brownian path by sampling independent standard normal variables $Z_n \sim \mathcal N(0, 1)$, for $n = 0, 1, 2, \dots$, and putting $$ W(t) := Z_0 t + \sum_{n = 1}^\infty \frac{\sqrt 2 Z_n}{\pi n} \sin (\pi n t). $$ But if you fix the absolute value of $Z_n$ and just choose the sign randomly, you will end up with something else.

If you really want to know what white noise is: It can be rigorously defined as a random tempered distribution (i.e. an extremely singular object!) following a certain law. Like a finite-dimensional random variable, this law can be defined via a characteristic function (existence is guaranteed by the Bochner-Minlos-Sazonov theorem). In infinite dimensions, this looks like this: The white noise distribution $T$ is distributed such that for every Schwartz function (every smooth function that decays faster than any inverse polynomial) $\phi (t)$, $$ E [ e^{i (T, \phi )} ] = e^{- \frac 1 2 \int_{-\infty}^{\infty} \vert \phi(t) \vert^2 d t }$$ The field of mathematics that deals with this is called White Noise Analysis.
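The sine-series representation at the end can be checked directly by simulation. A minimal numpy sketch (the truncation at 500 terms, the time point and the number of paths are arbitrary choices of mine): the sample variance of the truncated series at time $t$ should be close to $t$, as Brownian motion requires.

import numpy as np

rng = np.random.default_rng(0)
n_terms, n_paths, t = 500, 10_000, 0.7

# W(t) = Z_0 t + sum_{n>=1} sqrt(2) Z_n sin(pi n t) / (pi n), with Z_n iid N(0, 1)
Z = rng.standard_normal((n_paths, n_terms + 1))
n = np.arange(1, n_terms + 1)
W_t = Z[:, 0] * t + (Z[:, 1:] * np.sqrt(2) * np.sin(np.pi * n * t) / (np.pi * n)).sum(axis=1)

print(W_t.var(), t)    # both approximately 0.7, since Var W(t) = t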
51,780
Library routine for rolling window lag 1 autocorrelation?
In python, the pandas library has a function called rolling_apply (a top-level function in older pandas releases) that, in conjunction with the Series object method .autocorr(), should work. Here's an example for $N = 10$.

import numpy as np
import pandas as pd

y = pd.Series(np.random.normal(size = 100))
pd.rolling_apply(y, 10, lambda x: pd.Series(x).autocorr())

Another option is pandas.rolling_corr, so long as you shift the index of the series and account for that shift in the size of the window:

df = np.array([y[0:-1].values, y[1:].values])
df = df.transpose()
df = pd.DataFrame(df)
pd.rolling_corr(df[1], df[0], 9)

If you'd like to examine autocorrelation for lags other than 1, the latter approach is more flexible. (In that case I'd advise special care to make sure your indexing matches your intended window; it tripped me up at first.)
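Note that the top-level pd.rolling_* functions used above belong to older pandas releases and have since been removed in favour of the .rolling() accessor. A rough modern equivalent of the first example (the window length of 10 and lag 1 are just illustrative):

import numpy as np
import pandas as pd

y = pd.Series(np.random.default_rng(0).normal(size=100))

# Rolling lag-1 autocorrelation over windows of 10 observations.
rolling_acf1 = y.rolling(10).apply(lambda x: pd.Series(x).autocorr(lag=1), raw=False)
print(rolling_acf1.tail())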
51,781
Library routine for rolling window lag 1 autocorrelation?
The formula for the ACF can be expressed as a rational function of sums. By far the fastest way to compute rolling sums is with the Fast Fourier Transform. Use this to accomplish the task thousands of times faster than a brute-force iterated calculation (such as offered by windowed "roll apply" functions in R, Python, Mathematica, etc.) The same idea applies to any rolling statistics that are functions of rolling (possibly weighted) sums. This would include autocorrelations at greater lags, for instance, or rolling variances, skewnesses, and even rolling regressions (for sets of time series). The principal limitation is that the implementation can be a little tricky: the FFTs have to be done just right and more coding and testing are needed than for the brute force method. (Compare the lengths of the acf.window and acf.reference functions below.) For a one-off calculation in which the time series involved is not enormous, I would probably elect the brute-force solution; but for anything else--such as simulations where this operation had to be done many times, or with huge time series, or large windows, the extra time spent coding the FFT solution would pay off.

Details

For a time series $(x_t, t=1\ldots n)$ and lag of $k$, the ACF is $$\text{acf}(x)_k = \frac{\sum_{t=1}^{n-k}x_tx_{t+k} - \bar{x} \sum_{t=1}^{n-k}(x_t + x_{t+k}) + \left(\bar{x}\right)^2(n-k)}{\sum_{t=1}^n x_t^2 - n \bar{x}^2}$$ where $\bar x = (1/n)\sum_{t=1}^n x_t.$ Five sums appear here: of $x_tx_{t+k}$, $x_t$, and $x_{t+k}$ (for $t=1$ to $t=n-k$) and of $x_t$ and $x_t^2$ (for $t=1$ to $t=n$). A sum is a single term in a convolution with a kernel whose coefficients are ones and zeros. Convolutions can be computed with three Fast Fourier Transforms (FFTs): transform the kernel, transform the series, multiply the two transforms, and transform again. Since the computational time needed to do this for a series of total length $N$ is $O(N\log(N))$, the time needed to obtain the entire rolling ACF also is the same order (albeit multiplied approximately by five because it has to be done for each sum). The brute-force solution takes $O(n)$ time for each term of the output, of which there are $N-n$, for a total of $O(n(N-n))$. For any appreciable value of $n$ this can be enormously greater than the FFT method. In the R implementation below, the time improvement for the FFT method is a factor of a hundred to many thousands: minutes or even hours of calculation are done in seconds.

Working example

This code implements a rolling ACF and compares its output with that of a brute-force implementation using zoo::rollapply to compute a rolling value of the lag-one autocorrelation produced by the acf function. Its output value of $1.232757\times 10^{-28}$ is the sum of squares of differences between the two implementations: it is effectively zero, showing that only inconsequential differences in the least significant bits have appeared (due to different ways in which floating point errors accumulate). Here are the timing results:

   user.self sys.self elapsed user.child sys.child
t1      0.01        0    0.01         NA        NA
t2      3.16        0    3.17         NA        NA

The total elapsed time for a window of length $n=99$ over a series of length $N=10000$ is almost not measurable for the FFT method but takes over three seconds for the brute force method. During those three seconds, the FFT method can process a time series of over a million points.

#
# Rolling autocorrelation, lag 1.
#
acf.window <- function(y, n) {
  N <- length(y)
  if (n > N) stop("Window too wide.")
  zero <- 0
  #
  # Compute a rolling sum given the fft of its kernel.
  #
  sum.window <- function(x, k) Re(fft(fft(x) * k, inverse=TRUE)) / length(x)
  #
  # Precompute kernels for summing over windows of length `n` and n-1.
  #
  m <- floor((n+1)/2)
  kernel <- fft(c(rep(1, m), rep(0, N-n+1), rep(1, n-m-1)))
  kernel.full <- fft(c(rep(1, m), rep(0, N-n), rep(1, n-m)))
  #
  # Lag the original data.
  #
  y.lag <- c(y[-1], zero)    # Lagged values
  y.trunc <- c(y[-N], zero)  # Truncated at the end
  #
  # Compute the needed rolling sums.
  #
  y.sum <- sum.window(y, kernel)
  y.lag.sum <- c(y.sum[-1], zero)
  y.trunc.sum <- c(y.sum[-N], zero)
  y.prod <- sum.window(y.lag * y.trunc, kernel)
  y.mean <- sum.window(y, kernel.full) / n
  y.2 <- sum.window(y^2, kernel.full)
  a <- y.prod - y.mean*(y.lag.sum+y.trunc.sum) + y.mean^2*(n-1)
  a <- a / (y.2 - n * y.mean^2)
  return(a[m:(N-n+m)])
}
#
# Brute-force implementation.
#
acf.reference <- function(y, n, lag=1) {
  require(zoo)
  rollapply(y, width = n, FUN=function(x) acf(x, plot = FALSE, lag.max = lag)$acf[1+lag])
}
#
# Compare the results and times of two functions `f1` and `f2`.
#
test <- function(f1, f2, ...) {
  t1 <- system.time(y1 <- f1(...))
  t2 <- system.time(y2 <- f2(...))
  return(list(value=sum((y1-y2)^2), times=rbind(t1, t2), results=rbind(y1, y2)))
}
#
# Run a reproducible calculation with random data.
#
set.seed(17)
y <- rnorm(10^4)
n <- 99
result <- test(acf.window, acf.reference, y=y, n=n)
result$value  # Should be zero up to floating point rounding error
result$times  # Gives the timing values
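The same trick carries over to Python. Here is a compact numpy sketch of my own (not from the answer above): np.convolve stands in for the FFT-based convolution — scipy.signal.fftconvolve could be dropped in for very long series — and the result is cross-checked against a brute-force loop.

import numpy as np

def rolling_acf1(y, n):
    # Lag-1 ACF of each length-n window, assembled from five rolling sums,
    # exactly as in the formula above.
    y = np.asarray(y, dtype=float)
    L = len(y) - n + 1
    s  = np.convolve(y,              np.ones(n),     mode="valid")   # sum x_t
    s2 = np.convolve(y**2,           np.ones(n),     mode="valid")   # sum x_t^2
    p  = np.convolve(y[:-1] * y[1:], np.ones(n - 1), mode="valid")   # sum x_t x_{t+1}
    m = s / n
    num = p - m * (2 * s - y[:L] - y[n - 1:]) + (n - 1) * m**2
    return num / (s2 - n * m**2)

def acf1(x):   # brute force for a single window
    x = np.asarray(x, dtype=float)
    m = x.mean()
    return np.sum((x[:-1] - m) * (x[1:] - m)) / np.sum((x - m)**2)

y = np.random.default_rng(17).normal(size=10_000)
fast = rolling_acf1(y, 99)
slow = np.array([acf1(y[i:i + 99]) for i in range(len(y) - 98)])
print(np.allclose(fast, slow))    # True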
51,782
Library routine for rolling window lag 1 autocorrelation?
Regarding R, if you have an existing function to calculate the lag 1 autocorrelation, I believe you can pass it as the FUN to apply.rolling in the PerformanceAnalytics package, which itself is described as a convenience wrapper for rollapply in package zoo. Example:

sample <- rnorm(100)
result <- rollapply(sample, width = 5, FUN = acf, lag.max = 1, type = "correlation", plot = FALSE)
result[, 1]
51,783
How is the Poisson distribution a distribution? It seems more like a formula
The formula $f$ is the probability mass function for the Poisson distribution. That formula, as explained in the video, can be used to calculate the probability of a given value under the assumed distribution. The related cumulative distribution function $F$ can be used to generate random numbers following the distribution: Use the CDF to partition the interval $(0,1)$ into subintervals: $(0, F(x_1))$, $(F(x_1), F(x_2))$, $etc...$ Generate random numbers on the interval $(0,1)$ and see which bin they fall into. More in this tutorial, which goes through a Poisson example using R. The Poisson PMF and CDF are available in scipy.
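For instance, a minimal scipy sketch of both uses — evaluating the PMF/CDF for a "customers per hour" style question and sampling by partitioning $(0,1)$ with the CDF; the rate $\lambda = 3$ is only an illustration, not a value from the question:

import numpy as np
from scipy.stats import poisson

lam = 3.0                                    # assumed rate, e.g. customers per hour
print(poisson.pmf(5, lam))                   # P(X = 5)
print(poisson.cdf(5, lam))                   # P(X <= 5)

# Inverse-CDF sampling: a uniform draw u falls in the bin (F(k-1), F(k)],
# which is exactly the k returned by the quantile function poisson.ppf.
u = np.random.default_rng(1).uniform(size=100_000)
samples = poisson.ppf(u, lam).astype(int)
print(samples.mean())                        # close to lam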
51,784
How is the Poisson distribution a distribution? It seems more like a formula
The function you link to is a random number generator. It does not return the Poisson distribution, but returns random numbers from a Poisson distribution. That is, it does exactly what its name suggests - gives you random Poisson variates, not the distribution. The Poisson probability function is of the form $P(X=x) = \frac{e^{-\lambda} \lambda^x}{x!}\,,\quad x = 0,1,2,\ldots$, while the distribution function is $P(X\leq x) =\sum_{i=0}^x \frac{e^{-\lambda} \lambda^i}{i!}\,,\quad x = 0,1,2,\ldots$. There are a variety of methods for generating random numbers from this distribution, which will (almost always) begin with a source of uniformly distributed random numbers on $[0,1)$ (notionally continuous, but in practice limited to at best the accuracy with which numbers are represented by the particular implementation on computers). The scipy function will use one of those methods; which one will be discernible by examining the code (which you'd be better placed to locate than me). However, if I am looking at the right underlying C code that numpy uses (source here), then it uses two different algorithms, depending on the Poisson parameter:

long rk_poisson(rk_state *state, double lam)
{
    if (lam >= 10)
    {
        return rk_poisson_ptrs(state, lam);
    }
    else if (lam == 0)
    {
        return 0;
    }
    else
    {
        return rk_poisson_mult(state, lam);
    }
}

The code for those two functions (rk_poisson_ptrs and rk_poisson_mult) is in the same file, immediately above the quoted code.
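To make the generator-versus-distribution distinction concrete, here is a short numpy sketch (the rate and sample size are arbitrary) comparing the empirical frequencies of numpy's Poisson variates with the probability function quoted above:

import numpy as np
from math import exp, factorial

lam, n = 4.0, 200_000
draws = np.random.default_rng(0).poisson(lam, size=n)

for x in range(8):
    empirical = np.mean(draws == x)
    theoretical = exp(-lam) * lam**x / factorial(x)
    print(x, round(empirical, 4), round(theoretical, 4))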
51,785
How is the Poisson distribution a distribution? It seems more like a formula
I generally use R, so my answer here is based on a quick web search. It looks like numpy supports generating random samples from a Poisson distribution but doesn't have functions for computing the probability mass function (PMF) described by the Poisson formula to which you refer. Generating random samples from a distribution can be very useful but, as you point out, is not the same as computing the PMF, which is what you'd need to do to solve the "customer" problem. It seems like you should be looking at scipy, which seems to support the evaluation of PMFs for a large variety of distributions, including the Poisson.
51,786
Why is a mixed model a non-linear statistical model?
Hopefully the amount of notation suppressed and corners cut in what follows still leaves something intelligible:

On what 'mixed' means. Imagine somewhere at the heart of the model we have a line looking something like this. $$\eta = \beta_0 + \beta_1 x_1 + ... + \beta_p x_p$$ Where the $x_k$ are our covariates. Focus on the coefficients, the $\beta_k$. We could either think of these as fixed numbers (fixed effects) or as random variables (random effects). Why might we want to think of a $\beta_k$ as random? For example, imagine a situation where each person gets their own intercept term $\beta_0$. This might represent them being naturally healthier, faster, smarter or whatever. We could then model what effect a drug / treatment had on top of their individual natural level. We might, though, want to think of the effect of the drug or treatment as being fixed and bumping $\eta$ up or down by the same amount for every individual. Hence the term mixed effects, where some are random (e.g., person's natural level) and some are fixed (e.g., effect of treatment).

On linear and non-linear. The 'linear' in generalised linear models refers to the equation for $\eta$ above. If $\mu$ is the mean of the variable we are modelling then we can do non-linear things with $\eta$ like: $$\mu = \frac{1}{1+e^{-\eta}}$$ Which is involved in logistic regression. Or $$\mu = e^{\eta}$$ As in models for count data. This is where having random effects in $\eta$ can cause some real bumps. Suppose we have a prior distribution for $\beta_0\sim\mathcal N(0,g)$. Then for example with logistic data: $$E[Y]=E[E[Y|\beta_0]]=\int \frac{1}{1+e^{-\eta}} (2\pi g)^{-1/2}e^{-\frac{\beta_0^2}{2g}} d \beta_0$$ Which is pretty hairy, despite the fact that $\beta_0$ was added as a linear term to $\eta$. Think about the distribution of $\beta_0$, which is symmetric, but taking a non-linear transformation of it is not. However if we model $\mu=\eta$ things work out nicely and the random terms disappear from the mean upon marginalisation. They don't, however, disappear from the variance. Suppose for some observation $i$, with a random intercept (which is independent of the error term $\varepsilon_i \sim\mathcal N(0,\sigma^2)$): $$Y_i|\beta_0 = \beta_0 + \beta_1 x_1 + \varepsilon_i$$ Then $$E[Y_i]=E[E[Y_i|\beta_0]]=E[\beta_0 + \beta_1 x_1]=\beta_1 x_1$$ But $${\rm Var}(Y_i)={\rm Var}(\beta_0 + \beta_1 x_1 + \varepsilon_i) = {\rm Var}(\beta_0) + {\rm Var}( \varepsilon_i) = g + \sigma^2$$ So the 'variance parameters' still show up.
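A small simulation makes the contrast concrete. This is a hedged numpy sketch of my own (the values $g = 1$, $\sigma = 0.5$ and a fixed part of $1$ are arbitrary): with the logistic link the random intercept changes the marginal mean, whereas with the identity link it drops out of the mean but adds $g$ to the variance.

import numpy as np

rng = np.random.default_rng(0)
g, sigma, n = 1.0, 0.5, 1_000_000
b0 = rng.normal(0.0, np.sqrt(g), size=n)     # random intercepts, N(0, g)
eta = b0 + 1.0                               # linear predictor: random intercept + fixed part

# Logistic link: the marginal mean is not the inverse link at the mean linear predictor.
p = 1 / (1 + np.exp(-eta))
print(p.mean(), 1 / (1 + np.exp(-1.0)))      # roughly 0.70 versus 0.73

# Identity link: the random intercept vanishes from the mean but shows up in the variance.
y = eta + rng.normal(0.0, sigma, size=n)
print(y.mean(), y.var())                     # roughly 1.0 and g + sigma^2 = 1.25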
51,787
Why is a mixed model a non-linear statistical model?
And, here's a "street" version of the above: a) what is a linear model? It's one which can be expressed in the form of sums and scalar products of the inputs (y = ax + b at the simplest). If "a" is a function of some other factor ("z" not "x") perhaps a random function, A(z), then we have y = A(z)x + b which is no longer a linear model. b) what is a "mixed-model?" One which has linear parts and non-linear parts: y = ax + A(Z)x + b
51,788
Should the Shapiro-Wilk test and QQ-Plot always be combined?
At least two reasons:

1) A Shapiro-Wilk test, at least if you base a decision on a p-value, is sample size dependent. With a small sample, you'll almost always conclude "normal", and with a large enough sample, even a tiny deviation from normal will be significant.

2) A QQ plot tells you a lot about how the distribution is non-normal and may point to solutions.
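Point 1) is easy to see in a quick simulation — a scipy sketch (the $t_{10}$ distribution is a mild, fixed departure from normality; only the sample size changes, and typically only the large sample is flagged):

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
for n in (25, 250, 5000):
    x = stats.t.rvs(df=10, size=n, random_state=rng)   # mildly heavy-tailed, never normal
    print(n, stats.shapiro(x).pvalue)   # the p-value tends to shrink as n grows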
51,789
Should the Shapiro-Wilk test and QQ-Plot always be combined?
Citations would be helpful, but at face value, the claim is false. One of our favorite questions here (one of mine, anyway) is, "Is normality testing 'essentially useless'?" Answers to this question generally argue that Q–Q plots are more valuable than the Shapiro–Wilk test. I.e., if one of these is to be excluded, let it be the Shapiro–Wilk test, not the Q–Q plot. Many analyses involve normality assumptions regarding distributions of interest, but these analyses vary in their sensitivity to violations of this assumption. As a significance test, the Shapiro–Wilk test does not indicate the degree of deviation from normality directly; it produces a significance estimate, which involves more than this effect size component. Another component involved somewhat infamously is sample size, which as @PeterFlom points out in his answer here, is potentially misleading. As a somewhat comical adaptation, r throws an error when a user attempts to perform a shapiro.test on a sample larger than 5000 observations. Furthermore, the Shapiro–Wilk test does not disambiguate skewness and kurtosis as different forms of deviation from the normal distribution. Some analyses may be more sensitive to skew than to kurtosis, or vice versa. Hence a given Shapiro–Wilk test statistic may not even reflect equivalently useful information about the invalidity of a normality assumption for two different analyses of the same sample. Conversely, as a data visualization technique (rather than a hypothesis test), a Q–Q plot may reveal much more to a trained eye about the specific nature of problems with a normality assumption, be it skew, kurtosis, a few particularly nasty outliers, etc.
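To illustrate the visualization side, here is a short matplotlib/scipy sketch of my own drawing normal Q–Q plots for a skewed and a heavy-tailed sample; the two kinds of departure produce visibly different patterns (one-sided curvature versus flaring in both tails):

import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)
samples = {
    "skewed (lognormal)": rng.lognormal(0.0, 0.6, size=300),
    "heavy-tailed (t, df = 3)": rng.standard_t(3, size=300),
}

fig, axes = plt.subplots(1, 2, figsize=(9, 4))
for ax, (label, x) in zip(axes, samples.items()):
    stats.probplot(x, dist="norm", plot=ax)   # normal Q-Q plot against normal quantiles
    ax.set_title(label)
plt.tight_layout()
plt.show()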
51,790
Implausibly small standard error
You have way overcorrected the individual doctor effects twice using methods that simply do not work together. If your model is regress outcome i.doctor, vce(cluster doctor), then Stata should have complained that you've exhausted your degrees of freedom. xtreg may not be as smart, and may miss a perfect determination of the fixed effects. These 1e-14 standard errors should have been identically zero, and they are non-zero in practice due to rounding somewhere in the guts of fixed effect estimation. What happens here is this: cluster variance estimation works by summing up the cluster contributions, over clusters. However, by specifying doctors as fixed effects, you force the residuals for a given doctor to sum up to 0. regress knows how to determine this at the level of algebra. xtreg may not know enough of computational linear algebra to do this, though, and simply sums up the (numerical) zero contributions to produce the implausibly small standard errors that you see here.
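The mechanism is easy to reproduce outside Stata as well. Below is a hedged Python/statsmodels sketch with simulated data (the doctor IDs and outcomes are made up, not from the question): with a full set of doctor dummies the residuals sum to numerical zero within every doctor, so clustering on doctor makes each cluster's score contribution vanish and the clustered covariance collapses to (numerical) zero.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
df = pd.DataFrame({"doctor": np.repeat(np.arange(30), 20)})     # 30 doctors, 20 patients each
df["y"] = rng.normal(size=len(df)) + 0.5 * (df["doctor"] % 7)   # arbitrary simulated outcome

fe = smf.ols("y ~ C(doctor)", data=df).fit()

# Residuals sum to ~0 within each doctor, so every cluster contribution vanishes.
print(fe.resid.groupby(df["doctor"]).sum().abs().max())         # on the order of 1e-13

# Clustering on the variable that is already absorbed as fixed effects:
fe_cl = smf.ols("y ~ C(doctor)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["doctor"]}
)
print(np.abs(fe_cl.cov_params().to_numpy()).max())   # essentially zero, hence the tiny standard errors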
51,791
Implausibly small standard error
If I understand your problem, this can happen when the intra-cluster correlations are negative. See Stata FAQ for the therapist version with some intuition. Edit: I think Stas is right about the deeper issue. I was too hasty. Here's my attempt to replicate this with a dataset of pharmacy visits by 27,766 Vietnamese villagers that are nested in 5,740 households in 194 villages (data are from Cameron and Trivedi). I could not find a public dataset where the clustered errors were smaller, but I think this illustrates the main point. I will treat pharmacy visits as continuous, though they clearly are not. First, we set up the data: . use "http://cameron.econ.ucdavis.edu/mmabook/vietnam_ex2.dta", clear . egen hh=group(lnhhinc) (1 missing value generated) . bys hh: gen person = _n . xtset hh person panel variable: hh (unbalanced) time variable: person, 1 to 19 delta: 1 unit . xtdes hh: 1, 2, ..., 5740 n = 5740 person: 1, 2, ..., 19 T = 19 Delta(person) = 1 unit Span(person) = 19 periods (hh*person uniquely identifies each observation) Distribution of T_i: min 5% 25% 50% 75% 95% max 1 2 4 5 6 8 19 (snip) Now for the FE regression of visits on days sick: . xtreg PHARVIS ILLDAYS, fe Fixed-effects (within) regression Number of obs = 27765 Group variable: hh Number of groups = 5740 R-sq: within = 0.1145 Obs per group: min = 1 between = 0.1390 avg = 4.8 overall = 0.1257 max = 19 F(1,22024) = 2848.23 corr(u_i, Xb) = 0.0465 Prob > F = 0.0000 ------------------------------------------------------------------------------ PHARVIS | Coef. Std. Err. t P>|t| [95% Conf. Interval] -------------+---------------------------------------------------------------- ILLDAYS | .0788618 .0014777 53.37 0.000 .0759654 .0817581 _cons | .2906284 .0077221 37.64 0.000 .2754925 .3057643 -------------+---------------------------------------------------------------- sigma_u | .85814688 sigma_e | 1.085808 rho | .38447214 (fraction of variance due to u_i) ------------------------------------------------------------------------------ F test that all u_i=0: F(5739, 22024) = 2.35 Prob > F = 0.0000 Clustering on the panel variable inflates the errors: . xtreg PHARVIS ILLDAYS, fe vce(cluster hh) Fixed-effects (within) regression Number of obs = 27765 Group variable: hh Number of groups = 5740 R-sq: within = 0.1145 Obs per group: min = 1 between = 0.1390 avg = 4.8 overall = 0.1257 max = 19 F(1,5739) = 464.54 corr(u_i, Xb) = 0.0465 Prob > F = 0.0000 (Std. Err. adjusted for 5740 clusters in hh) ------------------------------------------------------------------------------ | Robust PHARVIS | Coef. Std. Err. t P>|t| [95% Conf. Interval] -------------+---------------------------------------------------------------- ILLDAYS | .0788618 .0036589 21.55 0.000 .0716889 .0860346 _cons | .2906284 .0102597 28.33 0.000 .2705154 .3107413 -------------+---------------------------------------------------------------- sigma_u | .85814688 sigma_e | 1.085808 rho | .38447214 (fraction of variance due to u_i) ------------------------------------------------------------------------------ Now I try this a non-panel approach. I am using areg since Stata won't let me put in ~6K dummies. . areg PHARVIS ILLDAYS, absorb(hh) vce(cluster hh) Linear regression, absorbing indicators Number of obs = 27765 F( 1, 5739) = 368.52 Prob > F = 0.0000 R-squared = 0.4579 Adj R-squared = 0.3166 Root MSE = 1.0858 (Std. Err. adjusted for 5740 clusters in hh) ------------------------------------------------------------------------------ | Robust PHARVIS | Coef. Std. Err. 
t P>|t| [95% Conf. Interval] -------------+---------------------------------------------------------------- ILLDAYS | .0788618 .0041081 19.20 0.000 .0708084 .0869151 _cons | .2906284 .0115192 25.23 0.000 .2680464 .3132103 -------------+---------------------------------------------------------------- hh | absorbed (5740 categories) Unfortunately, areg obscures the thing you are interested in. If you use regress and limit the sample so the number of HHs is reasonable, you will get the tiny standard errors for clusters with only 1 villager. This makes sense since the residual for such observations will be exactly zero. Here's an example: . reg PHARVIS ILLDAYS i.hh if inrange(hh,1,100), cluster(hh) Linear regression Number of obs = 219 F( 0, 99) = . Prob > F = . R-squared = 0.6473 Root MSE = .88177 (Std. Err. adjusted for 100 clusters in hh) ------------------------------------------------------------------------------ | Robust PHARVIS | Coef. Std. Err. t P>|t| [95% Conf. Interval] -------------+---------------------------------------------------------------- ILLDAYS | .0518095 .0314707 1.65 0.103 -.0106352 .1142542 | hh | 2 | -1 1.84e-14 -5.4e+13 0.000 -1 -1 3 | .2590475 .1573536 1.65 0.103 -.0531762 .5712712 4 | .4662855 .2832365 1.65 0.103 -.0957171 1.028288 5 | 2.129524 .0786768 27.07 0.000 1.973412 2.285636 6 | 1 1.84e-14 5.4e+13 0.000 1 1 7 | -.585524 .2517657 -2.33 0.022 -1.085082 -.0859662 (snip).... 100 | -.8359366 .0996573 -8.39 0.000 -1.033678 -.6381949 | _cons | .481905 .3147072 1.53 0.129 -.1425423 1.106352 ------------------------------------------------------------------------------ Now I will cluster on the village, which inflates them some, as is expected, but still OK: . reg PHARVIS ILLDAYS i.commune, cluster(commune) Linear regression Number of obs = 27765 F( 0, 193) = . Prob > F = . R-squared = 0.1814 Root MSE = 1.1925 (Std. Err. adjusted for 194 clusters in commune) ------------------------------------------------------------------------------ | Robust PHARVIS | Coef. Std. Err. t P>|t| [95% Conf. Interval] -------------+---------------------------------------------------------------- ILLDAYS | .0840634 .0056375 14.91 0.000 .0729444 .0951823 | commune | 2 | -.1885549 .012027 -15.68 0.000 -.2122761 -.1648337 (snip) .... 191 | .4646775 .0014571 318.91 0.000 .4618037 .4675514 192 | -.0020317 .0065782 -0.31 0.758 -.0150061 .0109427 193 | -.2444578 .0115522 -21.16 0.000 -.2672426 -.2216731 194 | .1917803 .0002288 838.33 0.000 .1913291 .1922315 | _cons | .4371527 .0200739 21.78 0.000 .3975602 .4767452 ------------------------------------------------------------------------------ If I drop all other regressors and estimate something like Stas suggests, I get the zero standard errors on the commune dummies: . reg PHARVIS i.commune, cluster(commune) Linear regression Number of obs = 27765 F( 0, 193) = . Prob > F = . R-squared = 0.0656 Root MSE = 1.274 (Std. Err. adjusted for 194 clusters in commune) ------------------------------------------------------------------------------ | Robust PHARVIS | Coef. Std. Err. t P>|t| [95% Conf. Interval] -------------+---------------------------------------------------------------- commune | 2 | -.0092138 1.72e-14 -5.4e+11 0.000 -.0092138 -.0092138 3 | -.2910319 1.72e-14 -1.7e+13 0.000 -.2910319 -.2910319 4 | -.3957457 1.72e-14 -2.3e+13 0.000 -.3957457 -.3957457 5 | -.4244865 1.72e-14 -2.5e+13 0.000 -.4244865 -.4244865 (snip) .... 
191 | .4864051 1.72e-14 2.8e+13 0.000 .4864051 .4864051 192 | -.1001229 1.72e-14 -5.8e+12 0.000 -.1001229 -.1001229 193 | -.416719 1.72e-14 -2.4e+13 0.000 -.416719 -.416719 194 | .188369 1.72e-14 1.1e+13 0.000 .188369 .188369 | _cons | .7364865 1.72e-14 4.3e+13 0.000 .7364865 .7364865 ------------------------------------------------------------------------------
51,792
Controlling covariates in linear regression in R
The question, as phrased, is slightly ambiguous. It states that "the coefficients in each model appear to be exactly the same". There are two ways that statement could be interpreted, with respect to: (1) the Estimates of the coefficients, or (2) the tests of the coefficients. Regarding the Estimates of the coefficients, they are being adjusted for the other variables in the model, but you don't see any difference because you have the same variables in both model1 and model2; the order they are listed in doesn't matter. The parameter estimates will only differ if the variables are correlated and the sets of variables that are included in the models differ. Consider: model1 <- lm(mpg ~ drat + wt + cyl, mtcars) model2 <- lm(mpg ~ wt + cyl + drat, mtcars) model3 <- lm(mpg ~ wt + drat, mtcars) cor(mtcars$wt, mtcars$cyl) # [1] 0.7824958 summary(model2) # Coefficients: # Estimate Std. Error t value Pr(>|t|) # (Intercept) 39.7677 6.8729 5.786 3.26e-06 *** # wt -3.1947 0.8293 -3.852 0.000624 *** # cyl -1.5096 0.4464 -3.382 0.002142 ** # drat -0.0162 1.3231 -0.012 0.990317 summary(model3) # Coefficients: # Estimate Std. Error t value Pr(>|t|) # (Intercept) 30.290 7.318 4.139 0.000274 *** # wt -4.783 0.797 -6.001 1.59e-06 *** # drat 1.442 1.459 0.989 0.330854 Notice that the Estimate for wt is -3.1947 in model2 and -4.783 in model3. To learn more about how the parameter estimates for variables can change depending on whether a correlated variable is included or not, it may help you to read my answer here. Regarding the tests of the coefficients, it depends on which function you use to get your output. You used summary(). What is reported then are the $t$-tests associated with the parameter estimates. Those are not computed by partitioning sums of squares. However, they are equivalent to $F$-tests using type III SS. The order variables are listed in doesn't matter for $t$-tests or $F$-tests that are based on type III SS. You can also use anova() to get significance tests of your parameter estimates. That is where R uses type I SS. And because type I SS are sequential, the order the variables are listed in does matter (although again, only if the variables are correlated). Consider: summary(model1) # Coefficients: # Estimate Std. Error t value Pr(>|t|) # (Intercept) 39.7677 6.8729 5.786 3.26e-06 *** # drat -0.0162 1.3231 -0.012 0.990317 # wt -3.1947 0.8293 -3.852 0.000624 *** # cyl -1.5096 0.4464 -3.382 0.002142 ** summary(model2) # Coefficients: # Estimate Std. Error t value Pr(>|t|) # (Intercept) 39.7677 6.8729 5.786 3.26e-06 *** # wt -3.1947 0.8293 -3.852 0.000624 *** # cyl -1.5096 0.4464 -3.382 0.002142 ** # drat -0.0162 1.3231 -0.012 0.990317 anova(model1) # Analysis of Variance Table # Df Sum Sq Mean Sq F value Pr(>F) # drat 1 522.48 522.48 76.525 1.691e-09 *** # wt 1 334.33 334.33 48.967 1.308e-07 *** # cyl 1 78.07 78.07 11.435 0.002142 ** # Residuals 28 191.17 6.83 anova(model2) # Analysis of Variance Table # Df Sum Sq Mean Sq F value Pr(>F) # wt 1 847.73 847.73 124.1627 8.382e-12 *** # cyl 1 87.15 87.15 12.7645 0.001304 ** # drat 1 0.00 0.00 0.0001 0.990317 # Residuals 28 191.17 6.83 Notice that the $p$-value for wt is 0.000624 in both summary() outputs, but is 1.308e-07 in anova(model1) and is 8.382e-12 in anova(model2). To learn more about sums of squares in general, it may help to read my answer here. Lastly, note that you can get an ANOVA table in R that uses other types of SS, such as II and III, by using Anova() in the car package.
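As a brief sketch of that last point (assuming the car package is installed), Anova() with type II or III sums of squares gives tests that, like the summary() t-tests, do not depend on the order in which the variables are listed.

# Type II / III sums of squares via car::Anova(); all predictors here are numeric,
# so no special contrast settings are needed for type III.
library(car)
Anova(model1, type = 2)   # same tests as Anova(model2, type = 2), whatever the listing order
Anova(model1, type = 3)   # each F value equals the square of the corresponding summary() t value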
51,793
Calculate tail probabilities from density() call in R
I would take the same approach as @Flounderer, but exploit another feature of R's density() function; namely the from and to arguments, which restrict the density estimation to the region enclosed by the two arguments. This results in the same density estimates as running the function without from and/or to, but by restricting the range of the density estimate to the region of interest, we focus all of the n evaluation points on the region of interest. set.seed(1) x <- rnorm(1000) hist(x, freq = FALSE) lines( dens <- density(x) ) lines( dens2 <- density(x, from = 1, n = 1024), col = "red", lwd = 2) This produces a plot (not reproduced here) in which the red line illustrates that the density estimates in dens and dens2 are the same for the region of interest. Then you can follow the approach @Flounderer used to evaluate the tail probability: > with(dens2, sum(y * diff(x)[1])) [1] 0.1680759 The advantage of this approach is that all n points at which density() evaluates the KDE are spent on the region of interest. The larger n, the higher the resolution you have in evaluating the tail probability. Note from ?density that, given the FFT used in the implementation, having n as a power of 2 is advantageous.
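As a quick sanity check (a sketch using the same simulated N(0,1) data as above), the tail estimate from the truncated KDE can be compared with the exact Normal tail probability.

# Compare the KDE-based tail estimate with the exact N(0,1) tail at x = 1.
set.seed(1)
x <- rnorm(1000)
dens2 <- density(x, from = 1, n = 1024)
kde_tail <- with(dens2, sum(y * diff(x)[1]))              # Riemann-sum approximation, as above
c(kde = kde_tail, exact = pnorm(1, lower.tail = FALSE))   # the exact value is about 0.159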
51,794
Calculate tail probabilities from density() call in R
The density function returns an object with various properties. You can access the $x$ and $y$ values using density(x)$x and density(x)$y. So you can do it like this: set.seed(100) x <- rnorm(1000) d <- density(x) x0 <- 1 idx <- which(abs(d$x-x0)==min(abs(d$x-x0))) approx.tail.prob <- sum(d$y[idx:length(d$x)] * diff(d$x)[1]) This is just an approximation based on a Riemann sum. You could get a better approximation using another numerical technique, such as the Trapezium Rule or Simpson's Rule. But once you know how to get at density(x)$x and density(x)$y, it's straightforward to work out how to do these. You could even use the R integrate function, maybe like this: f <- function(x0) d$y[which(abs(d$x-x0) == min(abs(d$x-x0)))[1]] and then: integrate(Vectorize(f), 1, max(d$x))
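Since the Trapezium Rule is mentioned above, here is a brief sketch of that refinement, reusing the d, x0, and idx objects defined in the code above.

# Trapezium-rule version of the tail integral on density()'s uniform grid.
i0 <- idx[1]                                 # guard against ties in the which() call
xs <- d$x[i0:length(d$x)]
ys <- d$y[i0:length(d$x)]
trap.tail.prob <- sum((ys[-1] + ys[-length(ys)]) / 2 * diff(xs))
trap.tail.prob                               # typically very close to the Riemann-sum value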
51,795
Calculate tail probabilities from density() call in R
By definition, given a "bandwidth" $h$ and a kernel density $k,$ the KDE of a data vector $\mathbf{x} = (x_1, x_2, \ldots, x_n)$ is $$f(x; \mathbf{x}, h, k) = \frac{1}{nh}\sum_{i=1}^n k\left(\frac{x - x_i}{h}\right).$$ Consequently the distribution function (left tail probability function) is its integral, $$F(x; \mathbf{x}, h, K) = \frac{1}{nh}\sum_{i=1}^n \int_{-\infty}^x k\left(\frac{t - x_i}{h}\right)\,\mathrm{d}t = \frac{1}{n}\sum_{i=1}^n K(x-x_i; h)$$ where $K$ is the integral of $k,$ $$K(x; h) = \frac{1}{h}\int_{-\infty}^x k\left(\frac{t}{h}\right)\,\mathrm{d}t.$$ By default, density uses a Gaussian (Normal) kernel: that is, $k$ is implemented as dnorm and $K$ as pnorm. This leads to an extremely compact and precise solution: pkde <- Vectorize(function(x, data, bw) mean(pnorm(x, data, bw)), "x") The arguments of pkde are the point x of evaluation, the data vector data $=\mathbf{x},$ and the bandwidth bw $=h.$ For instance, the right tail probability at the value $x=1$ in the example is obtained by storing the result of density in order to fetch bw: obj <- density(x) 1 - pkde(1, x, obj$bw) The answer (for the question's sample data) will be close to $0.16,$ depending on the specific values that were generated. Some comments about the use of pnorm may be of interest. pnorm appears here because it describes your kernel. Any other solution will be equivalent to this one. This one is the most precise of any possible solution because it does not require interpolation. It will be inferior in terms of computational complexity when $n$ is large, because it requires one evaluation of pnorm for each data point. It will be greatly superior to any conceivable alternative when $n$ is small. Indeed, it doesn't even require the KDE to be computed: it only needs you to determine what bandwidth you want to use. Here is a more interesting example, showing how pkde can be used to plot the entire cumulative distribution of the KDE for a decidedly non-Normal dataset. set.seed(17) X <- c(rexp(500), rgamma(500, 30)) obj <- density(X) curve(pkde(x, X, obj$bw), min(X), max(X), lwd=2, main="Left tail kernel probability") hist(X, breaks=100)
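A brief sanity check of pkde as defined above (a sketch with simulated data): the kernel CDF at a point should sit close to the empirical proportion of observations at or below that point, differing only through the smoothing.

# Compare the kernel CDF with the empirical CDF at x = 1.
set.seed(100)
x <- rnorm(1000)
obj <- density(x)
c(kernel = pkde(1, x, obj$bw), empirical = mean(x <= 1))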
51,796
Calculate tail probabilities from density() call in R
You can do this with the KDE function in the utilities package. For this type of problem, you can use the KDE function in the utilities package. This function generates the KDE in the same way as the density function in R,$^\dagger$ but instead of producing an output computed over a relatively small set of points, it produces an output that includes the probability functions for the KDE. You can also call the function in such a way that it loads those probability functions directly to the global environment, so that you can easily call them just like any other density function in R. Below I give an example of how to generate the KDE using this function, and how to call the cumulative distribution function over an arbitrary set of values. As you can see, the KDE function produces a set of probability functions (dkde, pkde, qkde, and rkde) that can be called just like the probability functions for any of the pre-programmed families of distributions. This allows you to compute the cumulative distribution from pkde at any point you want, including points that are far outside the data range used to generate the KDE. #Load the package library(utilities) #Generate some mock data set.seed(1) DATA <- rnorm(40) #Create a KDE and show its output MY_KDE <- KDE(DATA, to.environment = TRUE) MY_KDE Kernel Density Estimator (KDE) Computed from 40 data points in the input 'DATA' Estimated bandwidth = 0.367412 Input degrees-of-freedom = Inf Probability functions for the KDE are the following: Density function: dkde * Distribution function: pkde * Quantile function: qkde * Random generation function: rkde * * This function is presently loaded in the global environment #Call the CDF over a set of points (including points far in the tails) POINTS <- -10:10 pkde(POINTS) [1] 1.489573e-101 4.685332e-78 9.132757e-58 1.112228e-40 8.584043e-27 4.322183e-16 [7] 1.529301e-08 4.819333e-04 3.326124e-02 1.236576e-01 4.251039e-01 8.352227e-01 [13] 9.927698e-01 9.999976e-01 1.000000e+00 1.000000e+00 1.000000e+00 1.000000e+00 [19] 1.000000e+00 1.000000e+00 1.000000e+00 Note that if you want upper-tail probabilities then you can just set lower.tail = FALSE when you call the cumulative distribution function using pkde. $^\dagger$ The KDE function has the advantage of giving a more useful output (in my opinion) but it is not as general as the density function in the base package. It does not accommodate as wide a range of kernel types or bandwidth estimation methods. Both functions can produce a KDE using the normal kernel.
51,797
Compare 2 regression lines in R
You can use a Chow test (wikipedia). It is an application of the F-test to testing the equality of regression coefficients between two groups of individuals. You can compute it easily from the sums of squared residuals of the pooled model and of each group's model. See my gist file to see how I compute the Chow test. In your case, the null hypothesis of equal coefficients in the two groups cannot be rejected.
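Here is a sketch of that residual-sum-of-squares computation in R. The data frame sbr_with_pred and the variables SBR, Age, and Sex are taken from the other answers to this question; the "M"/"F" coding of Sex is only a guess, so adjust the subsetting to your data.

# Chow test from the pooled and group-wise residual sums of squares.
pooled <- lm(SBR ~ Age, data = sbr_with_pred)
m1 <- lm(SBR ~ Age, data = subset(sbr_with_pred, Sex == "M"))
m2 <- lm(SBR ~ Age, data = subset(sbr_with_pred, Sex == "F"))
ssr_p <- sum(resid(pooled)^2)                  # restricted (pooled) model
ssr_u <- sum(resid(m1)^2) + sum(resid(m2)^2)   # unrestricted (separate) models
k <- 2                                         # coefficients per group: intercept + slope
n <- nobs(m1) + nobs(m2)
F_chow <- ((ssr_p - ssr_u) / k) / (ssr_u / (n - 2 * k))
c(F = F_chow, p = pf(F_chow, k, n - 2 * k, lower.tail = FALSE))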
51,798
Compare 2 regression lines in R
You would do best to test for a difference in slopes by including Sex, Age, and a Sex:Age interaction in a multiple regression analysis. The t-test of the interaction term will assess whether or not the slopes differ significantly. The R code for your situation would be (I'm guessing): lm(formula = SBR ~ Sex + Age + Sex:Age, data = sbr_with_pred)
51,799
Compare 2 regression lines in R
In R you can use anova for an analysis of covariance. I tried quickly with the anova command to run a test with your data, but the sample sizes for the two models are different, which causes problems at the moment. The code by PAC also works nicely. Based on gung's answer, you can also run the test with the following code (also guessing): library(car) Anova(lm(SBR ~ Age*Sex, data = sbr_with_pred))
51,800
Fallacy in p-value definition
The StatSoft definition is incorrect. (I know, a short answer, but sometimes there is no long answer).