idx | question | answer
---|---|---
52,801 | Poisson distribution: why does time between events follow an exponential distribution? | Let $X_t$ be the number of arrivals in the Poisson process with rate $\lambda$ between time $0$ and time $t\ge0.$ Then we have
$$
\Pr(X_t=x) = \frac{(\lambda t)^x e^{-\lambda t}}{x!} \text{ for } x=0,1,2,3,\ldots
$$
Let $T$ be the time until the first arrival.
Then the following two events are really both the same event:
$$
\Big[ X_t=0\Big]. \qquad \Big[ T>t \Big].
$$
So they both have the same probability. Thus
$$
\Pr(T>t) = \Pr(X_t=0) = \frac{(\lambda t)^0 e^{-\lambda t}}{0!} = e^{-\lambda t}.
$$
So
$$
\Pr(T>t) = e^{-\lambda t} \text{ for } t\ge0.
$$
That says $T$ is exponentially distributed.
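A quick numerical check of this identity (my own sketch, not part of the original answer; the rate and time point are arbitrary assumptions):
# Compare the empirical Pr(X_t = 0) for Poisson counts with the exponential
# survival function Pr(T > t) = exp(-lambda * t).
set.seed(1)
lambda <- 2; t <- 1.5
mean(rpois(1e5, lambda * t) == 0)            # empirical Pr(X_t = 0)
exp(-lambda * t)                             # theoretical value
pexp(t, rate = lambda, lower.tail = FALSE)   # Pr(T > t) for T ~ Exp(lambda), same number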
52,802 | Poisson distribution: why does time between events follow an exponential distribution? | The exponential distribution doesn't imply that the time between events grows exponentially. All it tells you is that the probability of waiting longer between events declines very quickly with the waiting time. The probability density is $$\lambda e^{-\lambda t},$$ where $\lambda$ is the Poisson intensity, i.e., the average number of events per unit of time, and $t$ is the waiting time. The average waiting time is then $1/\lambda$.
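As a quick check of the last claim (my own sketch; the rate is an arbitrary assumption), the sample mean of exponential waiting times is close to $1/\lambda$:
set.seed(1)
lambda <- 3
waits <- rexp(1e5, rate = lambda)   # exponential waiting times
mean(waits)                         # close to 1/lambda
1 / lambda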
52,803 | Mixed models. Random slopes only, mean and group centering? | This all depends on the nature of your study.
When you fit random intercepts without random slopes, this assumes that each subject has the same response to the treatment, but each subject has a different baseline value.
When you add random slopes, you allow each subject to have a different response to the treatment, and each subject still has a different baseline value.
If you don't fit random intercepts but retain the slopes, then you assume that each subject has a different response to the treatment, and each subject has the SAME baseline value.
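In lme4 syntax, the three structures described above would look roughly as follows (a sketch only; the names y, treatment, subject, and the data frame dat are hypothetical, not from the answer):
library(lme4)
m_int   <- lmer(y ~ treatment + (1 | subject), data = dat)               # random intercepts only
m_both  <- lmer(y ~ treatment + (1 + treatment | subject), data = dat)   # random intercepts and slopes
m_slope <- lmer(y ~ treatment + (0 + treatment | subject), data = dat)   # random slopes only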
52,804 | Mixed models. Random slopes only, mean and group centering? | Question is, is a random slopes only model at lvl 3 allowed. Every example I see always has random intercepts first. Is this because it must or because I just got unlucky with examples?
Yes, it is certainly allowed, but as illustrated in the plots in the answer by @Wayne, it is making an assumption about the baseline observations being equal at that level.
does anyone know how different numerical structures of the subject/ID variables affect the mixed model? I.e., having a unique identifier for both levels vs having a unique identifier within each group at lvl 2 (individuals).
This will depend on the software that you are using to fit the model and how you specify the model. In lme4 for R, for example, it should not make any difference provided that you specify the random intercepts correctly. See this answer and this answer for more information.
can someone please explain the benefits, reasons and differences behind mean centered and group centered DVs?
Grand-mean centering is often desirable for interpretation purposes, especially when interactions are involved, where the main effects show the effect of a covariate when the other one (in the case of a two-way interaction) is held at zero (which is often implausible in real-world cases). In the case of group-mean centering, the random intercepts will be the unadjusted group means. The group means can then be used as a group-level predictor, and the coefficient for this predictor will represent the between-group effect. Personally, I have never found the need to do group-mean centering - though I have seen it done, particularly in the multilevel modelling literature.
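For concreteness, grand-mean and group-mean centering might be computed as follows (my own sketch; dat, x, and the grouping factor g are hypothetical names):
dat$x_grand <- dat$x - mean(dat$x)                                # grand-mean centered covariate
dat$x_group <- ave(dat$x, dat$g, FUN = function(v) v - mean(v))   # group-mean centered covariate
dat$x_mean  <- ave(dat$x, dat$g)                                  # group means, usable as a level-2 predictor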
52,805 | Mixed models. Random slopes only, mean and group centering? | A classic must-read on some aspects is Bill Venables (the Jekyll subset of Ripley/Venables) https://www.stats.ox.ac.uk/pub/MASS3/Exegeses.pdf.
And as a rare example where fixed zero offsets might be useful: assume you measure "new-bone growth" after some surgery. By definition, new-bone growth is exactly zero at the time of surgery; later it is measured and varies.
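One way to encode that zero-at-surgery constraint is to drop the intercept at both levels, e.g. (a sketch under my own reading of the example; growth, time, patient, and bone are hypothetical names):
library(lme4)
m <- lmer(growth ~ 0 + time + (0 + time | patient), data = bone)   # no fixed or random intercept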
52,806 | Importance of the prior | The purpose of a prior is not to reduce uncertainty per se. Rather, it is to express what is already known -- and generally agreed upon -- about the parameter or parameters in the model likelihood. Priors can be informative or uninformative. If little is known about a parameter, then an uninformative prior (maximally entropic) is appropriate. But often, we know more than we think we do. For instance, we know variance cannot be negative, so a prior that sets the prior probability to zero for $\sigma^2 < 0$ is sensible.
Pleading ignorance gets old quickly, and informative priors are more fun. If you are doing a series of experiments over several days, your prior today may be your posterior from yesterday. You can daisy-chain several days of results together this way.
For algebraic convenience, priors are usually chosen to be "conjugate" to the likelihood function, so the functional form of the posterior is the same as that of the prior. The best example of this is the beta distribution prior for the Bernoulli/binomial model likelihood.
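A minimal sketch of that conjugacy (the prior pseudo-counts and data are assumed numbers, not from the answer): a Beta(a, b) prior combined with k successes in n Bernoulli trials gives a Beta(a + k, b + n - k) posterior.
a <- 2; b <- 2        # prior pseudo-counts
k <- 7; n <- 10       # observed successes and trials
qbeta(c(0.025, 0.5, 0.975), a + k, b + n - k)   # posterior quantiles for the success probability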
Worth noting: if two researchers disagree on a prior, then they will disagree on the interpretation of the results of the experiment. That is not wrong; it is simply unfortunate, and the resolution is for them to agree on the prior, ideally a priori (before the fact), but a posteriori (after the fact) works, too.
As long as a prior does not insist that a particular value for a parameter has a probability identically equal to zero, the posterior will eventually shift toward the MLE as enough data comes in. I think MLK Jr. was enunciating a basic Bayesian truth when he said "the arc of the moral universe is long, but it bends toward justice." Yes, priors/prejudices disappear when a) enough data is collected, and b) we keep our minds open and do not set any prior probabilities exactly to zero.
52,807 | Importance of the prior | If priors are supposed to encode "a priori" information of parameters that might be useful during training, then why is it that they are often drawn from probability distributions?
The prior is a distribution or mass function, in that the probabilities of all possible cases must sum to one. That priors often look like well-known distributions is largely a mathematical and computational convenience. It isn't mandatory, as long as you sum to unity. There is also the case where you can use improper priors, where the posterior is still assured to sum to unity, but those too are computational conveniences.
I certainly have used non-standard distributions for the prior, that is, it is definitely not a named distribution. In fact, I have used piecewise distributions for continuous functions to reflect the known information. I can promise you, however, that doing that is computationally inconvenient. Encoding if or case statements in your prior isn't a joy if you are concerned about speed or using nice library functions.
priors should come from high-entropy distributions, such as the normal distribution because this reflects uncertainty in the parameters. However, it seems counterintuitive to me to introduce a "random" prior if the point of the prior is to reduce uncertainty. How can these two claims be resolved?
The argument for adding entropy to a prior is to reflect the idea that you may be wrong in your prior information. An example I often use involves magical quarters minted under the authority of the Congress where they are either double-sided, or biased one-third, one-half, or two-thirds for heads. It makes for all kinds of nice thought experiments.
Let us imagine you need to predict a coin from its tosses. You are not allowed to see it. You phone the US Mint and find out that they produce the coins in a ratio of 5% double-headed, 20% with a 1/3 bias for heads, 50% fair, 20% with a 2/3 bias for heads, and 5% double-tailed. That would be your prior, maybe.
Imagine you were going to go to a particular credit union to get a roll of quarters to run your test on. You were going to use this method until one of your researchers asks "how do you know they distribute coins proportionally to all institutions? What if some institutions only receive one type of coin?"
You realize that the researcher is correct. You do not want to discard this information from the Mint, but you do want to account for the fact that you may not have access to a representative sample. This gives you two choices.
The ignorance prior would be to grant all five choices equal weight. If you did that you would be ignoring real information. You could, however, multiply your informative distribution by the uniform distribution. The interesting consequence is that you recover your original informative distribution. It is true that your credit union may only receive double-headed coins, but the system still has to get proportionate amounts, and you do not know which luck of the draw distortion is going to your institution.
The second would be to note that you could reduce the weight of the center and increase the weight of the tails. Maybe the fair coin could be reduced to 40% and the tails increased to 10% each. The problem with this is that it isn't based on real information. There is no reason to widen the tails and shrink the center except a fear that a prior will have too strong of an effect on the outcome.
Now let us imagine you get eight coin tosses, and you have decided to use the MAP estimator as your decision function. Will a prior alter any of the answers? Yes.
For the informative prior, the MAP estimate for each possible number of successes is shown in a table in the original answer (not reproduced here; the sketch below recomputes it).
Note, at two and six successes, how little difference there is between the MAP and the first runner-up. This is similar to the Frequentist concept of power. In those two cases, there isn't a lot separating two of the possible hypotheses.
Now consider the uniform distribution, the maximum entropy hypothesis. It results in a posterior and MAP shown in a second table (also recomputed below).
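A sketch of that recomputation (my own code, using the setup described above: eight tosses, five coin types, and the Mint proportions as the informative prior):
theta      <- c(0, 1/3, 1/2, 2/3, 1)             # possible heads probabilities
prior_inf  <- c(0.05, 0.20, 0.50, 0.20, 0.05)    # informative (Mint) prior
prior_unif <- rep(1/5, 5)                        # maximum-entropy prior
map_for <- function(k, prior) {                  # MAP coin after k heads in 8 tosses
  post <- dbinom(k, size = 8, prob = theta) * prior
  theta[which.max(post)]
}
sapply(0:8, map_for, prior = prior_inf)          # MAP under the informative prior
sapply(0:8, map_for, prior = prior_unif)         # MAP under the uniform prior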
One interesting takeaway is that the most common real-world case is now rarely the answer under the MAP.
What this is telling you is that in four cases, the MAP is being chosen by the choice of the prior. Part of the issue is the small sample size, of course, but part is a lesson in the role an informative prior may have.
In five of nine cases, the data trumps the prior as they are distinct enough and unusual enough to provide sufficient information to the user. In four cases, there isn't enough information in the data to overcome the prior bias, which is that the fair coin cases are ten times more likely than either double-sided case and five times more likely than either biased case.
In a more real-world case, I am working on a class of time series whose parameters have certain theoretical properties that can be used to construct a prior. In simulations, I end up with relative efficiency gains of around 20-to-1 over the Frequentist counterpart. I don't want a maximum entropy method because I do want to find the parameter; however, I do use a relatively agnostic prior over one of the nuisance parameters because I don't have a good reason to favor one location over another over a pretty wide range.
The prior should never be random, but it can be diffuse. It should reflect what you know.
You are still free to use your mixed distribution, which gives a third set of MAP estimators. They beg the question of why you chose that prior though.
Unfortunately, the best solution is to struggle with the prior, document it, and perform a sensitivity analysis.
This answer is also the Frequentist and Fisherian Likelihoodist argument against Bayesian methods. Is it a virtue to have prior knowledge if it is imperfect prior knowledge?
52,808 | Variance of sum of dependent random variables | It's quite easy to prove this once you understand the relationship between the covariance and correlation and if you recognize that the variances for both $X_i$ and $X_j$ are identically $\sigma^2$:
\begin{eqnarray*}
V\left[\frac{1}{m}\sum_{i=1}^{m}X_{i}\right] & = & \frac{1}{m^{2}}\left[\sum_{i=1}^{m}V(X_{i})+\sum_{i=1}^{m}\sum_{j\ne i}Cov(X_{i},X_{j})\right]\\
& = & \frac{1}{m^{2}}\left[\sum_{i=1}^{m}\sigma^{2}+\sigma^{2}\sum_{i=1}^{m}\sum_{j\ne i}\frac{Cov(X_{i},X_{j})}{\sigma^{2}}\right]\\
& = & \frac{1}{m^{2}}\left[m\sigma^{2}+\sigma^{2}\sum_{i=1}^{m}\sum_{j\ne i}\rho\right]\\
& = & \frac{1}{m^{2}}\left[m\sigma^{2}+\sigma^{2}(m^{2}-m)\rho\right]\\
& = & \frac{\sigma^{2}}{m}+\frac{\sigma^{2}(m-1)\rho}{m}\\
& = & \frac{\sigma^{2}}{m}+\frac{\sigma^{2}\rho m}{m}-\frac{\sigma^{2}\rho}{m}\\
& = & \frac{\sigma^{2}-\sigma^{2}\rho}{m}+\rho\sigma^{2}\\
& = & \frac{\left(1-\rho\right)\sigma^{2}}{m}+\rho\sigma^{2}\\
& = & \frac{1}{m}\left(1-\rho\right)\sigma^{2}+\rho\sigma^{2}\,\,\,\,\,\,\,\,\,\blacksquare
\end{eqnarray*}
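A quick numerical check of the final line (my own sketch; m, sigma, and rho are arbitrary assumed values) using an equicorrelated multivariate normal:
library(MASS)
set.seed(1)
m <- 5; sigma <- 2; rho <- 0.3
Sigma <- sigma^2 * (rho + (1 - rho) * diag(m))   # compound-symmetric covariance matrix
x <- mvrnorm(1e5, mu = rep(0, m), Sigma = Sigma)
var(rowMeans(x))                                 # simulated variance of the sample mean
(1 - rho) * sigma^2 / m + rho * sigma^2          # value from the formula above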
52,809 | Log-normal returns | I suppose you mean $P_t$ and $P_{t-1}$ are i.i.d. Note that we may express
$$ P_t = e^{\mu + \sigma Z_t}, P_{t-1} = e^{\mu + \sigma Z_{t-1}}$$
where $Z_t, Z_{t-1}$ are i.i.d. standard normal. Then
$$ R_t = \frac {P_t - P_{t-1}} {P_{t-1}} = \frac {P_t} {P_{t-1}} -1
= \frac {e^{\mu + \sigma Z_t}} {e^{\mu + \sigma Z_{t-1}}} -1
= e^{\sigma (Z_t - Z_{t-1})} - 1$$
Since $Z_t - Z_{t-1} \sim \mathcal{N}(0, 2)$, we have $e^{\sigma (Z_t - Z_{t-1})} \sim \text{lognormal}(0,2\sigma^2)$ and thus the resulting $R_t$ is a shifted lognormal.
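A simulation sketch of this result (mu and sigma are arbitrary assumed values): with i.i.d. lognormal prices, log(1 + R_t) should be N(0, 2*sigma^2).
set.seed(1)
mu <- 0; sigma <- 0.25
p_prev <- rlnorm(1e5, meanlog = mu, sdlog = sigma)
p_now  <- rlnorm(1e5, meanlog = mu, sdlog = sigma)
r <- p_now / p_prev - 1                       # simple returns
c(mean(log(1 + r)), var(log(1 + r)))          # close to 0 and 2 * sigma^2 = 0.125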
52,810 | Log-normal returns | Just some quick thoughts which do not fit into the comment section. If you look at
$$
\log (1+R_t) = \log(P_t) - \log(P_{t-1}) \sim N(0,2\sigma^2)
$$
you find that it is normally distributed (as the difference of two independent normal random variables).
So $R_t + 1$ is lognormally distributed, and $R_t$ is a shifted lognormally distributed RV.
$$
\log (1+R_t) = \log(P_t) - \log(P_{t-i}) \sim N(0,2\sigma^2)
$$
you find that it is normal distributed (as the dif | Log-normal returns
Just some quick thoughts which do not fit into the comment section. If you look at
$$
\log (1+R_t) = \log(P_t) - \log(P_{t-i}) \sim N(0,2\sigma^2)
$$
you find that it is normal distributed (as the difference of two normal distributions).
So $R_t+1$ is lognormally distributed, and $R_t$ is a shifted lognormally distributed RV. | Log-normal returns
Just some quick thoughts which do not fit into the comment section. If you look at
$$
\log (1+R_t) = \log(P_t) - \log(P_{t-i}) \sim N(0,2\sigma^2)
$$
you find that it is normal distributed (as the dif |
52,811 | Log-normal returns | Since the returns involve changes in the stock price over consecutive time periods, the answer to your question depends on the joint distribution of the stock price over time. Since you have only specified the marginal distribution of the stock price, you have not given enough information in your question to determine the distribution of interest. Nevertheless, I will give you a useful result here that holds in a simple case, and which can be extended more broadly to other models with some additional algebraic work.
Stationary AR(1) model: Suppose the log-price follows the stationary model of the form:
$$\ln P_t = (1-\phi) \mu + \phi \ln P_{t-1} + \varepsilon_t \quad \quad \quad \varepsilon_t \sim \text{N}( 0, (1-\phi^2) \sigma^2),$$
where $-1 < \phi < 1$ is the auto-correlation parameter of the model. In this case the stationary marginal distribution of the log-price is as you specified in your question (i.e., $\ln P_t \sim \text{N} (\mu, \sigma^2)$). Now, since $\ln P_t = \ln P_{t-1} + \ln (1+R_t)$ the density function of the latter term is:
$$\begin{equation} \begin{aligned}
p(\ln (1+R_t) = r)
&= \int \limits_{-\infty}^\infty p(\ln (1+R_t) = r | \ln P_{t-1} = s) \cdot \text{N}(s|\mu, \sigma^2) ds \\[6pt]
&= \int \limits_{-\infty}^\infty p(\ln P_t = r+s | \ln P_{t-1} = s) \cdot \text{N}(s|\mu, \sigma^2) ds \\[6pt]
&= \int \limits_{-\infty}^\infty \text{N}(r+s | \mu + \phi (s-\mu), (1-\phi^2) \sigma^2) \cdot \text{N}(s|\mu, \sigma^2) ds. \\[6pt]
\end{aligned} \end{equation}$$
To solve this integral we need to complete the square in the exponential terms of the normal density functions in the integrand. This leads to a normal distribution for $\ln(1+R_t)$, so that $1+R_t \sim \text{LogN}$. Obtaining the parameters of the distribution would require you to complete the square in the above integral and perform the subsequent algebra. I leave this as an exercise.
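A simulation sketch of this AR(1) setup (mu, sigma, and phi are arbitrary assumed values; the closing variance comparison, 2(1 - phi)*sigma^2, is my own addition rather than part of the answer):
set.seed(1)
mu <- 0; sigma <- 0.2; phi <- 0.6; n <- 1e5
logp <- numeric(n)
logp[1] <- rnorm(1, mu, sigma)                 # start at the stationary distribution
for (t in 2:n) {
  logp[t] <- (1 - phi) * mu + phi * logp[t - 1] + rnorm(1, 0, sqrt(1 - phi^2) * sigma)
}
log_gross <- diff(logp)                        # log(1 + R_t)
c(mean(log_gross), var(log_gross))             # roughly 0 and 2 * (1 - phi) * sigma^2
2 * (1 - phi) * sigma^2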
52,812 | Bayesian inference - iterative updating with Bernoulli distribution | The fact that your first graph merely oscillates between two values suggests to me that you are resetting the prior each time you perform an iteration. So what you are seeing in the graph is a sequence of posteriors, each of which takes only one data point into account. That is not the correct method for iterative Bayesian updating. Remember that when you do iterative Bayesian updating, the prior for each new iteration is the posterior from the last iteration. So your algorithm should be:
Iterative Bayesian updating: Start with the prior mass function:
$$\pi_0(\theta) = \frac{1}{11} \quad \quad \quad \text{for all } \theta = \tfrac{0}{10}, \tfrac{1}{10}, ..., \tfrac{10}{10}.$$
For $i=1,...,n$ and $x_i \in \{0,1\}$, update your beliefs via the iteration:
$$\pi_i(\theta) = \frac{\theta^{x_i} (1-\theta)^{1-x_i} \pi_{i-1}(\theta)}{\sum_\theta \theta^{x_i} (1-\theta)^{1-x_i} \pi_{i-1} (\theta)}$$
Notice that in each iteration the prior $\pi_{i-1}$ is the posterior from the previous iteration. The mass function $\pi_n$ is the posterior after incorporating all the data.
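A direct transcription of this update in R (the simulated data and the true success probability of 0.7 are my own assumptions for illustration):
set.seed(1)
theta <- (0:10) / 10
belief <- rep(1 / 11, 11)                     # pi_0: uniform over the grid
x <- rbinom(100, size = 1, prob = 0.7)        # simulated Bernoulli data
for (xi in x) {
  lik <- theta^xi * (1 - theta)^(1 - xi)
  belief <- lik * belief / sum(lik * belief)  # posterior becomes the next prior
}
round(belief, 3)                              # mass concentrates near theta = 0.7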
52,813 | Bayesian inference - iterative updating with Bernoulli distribution | Thanks @Ben for insisting that I check my update process. It was indeed broken: I was not updating $P_{posterior}[\theta]$ correctly.
So now I can proclaim that Bayesian inference with the Bernoulli distribution works perfectly, and I post the new chart here to show it 🙂
So now I can proclaim that Bayesian inference with Bernoulli works | Bayesian inference - iterative updating with Bernoulli distribution
thanks @Ben for insisting that I check my update process. It was broken indeed, I was not updating $P_{posterior}[\theta]$ correctly.
So now I can proclaim that Bayesian inference with Bernoulli works perfectly and I post here the new chart to show it 🙂 | Bayesian inference - iterative updating with Bernoulli distribution
thanks @Ben for insisting that I check my update process. It was broken indeed, I was not updating $P_{posterior}[\theta]$ correctly.
So now I can proclaim that Bayesian inference with Bernoulli works |
52,814 | paired T test: how to plot it? | To me, a bivariate plot of the before/after for observations with a 1:1 line works well.
A histogram of the differences conveys the results as well.
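A sketch of both plots in base R (before and after are hypothetical paired vectors, not from the answer):
plot(before, after, xlab = "Before", ylab = "After")   # bivariate plot
abline(0, 1)                                           # 1:1 line; points above it increased
hist(after - before, xlab = "Difference (after - before)", main = "")
abline(v = 0, lty = 2)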
A histogram of the differences conveys the results as well. | paired T test: how to plot it?
To me, a bivariate plot of the before/after for observations with a 1:1 line works well.
A histogram of the differences conveys the results as well. | paired T test: how to plot it?
To me, a bivariate plot of the before/after for observations with a 1:1 line works well.
A histogram of the differences conveys the results as well. |
52,815 | paired T test: how to plot it? | Comment: Perhaps the $n = 200$ differences for the one important variable can be summarized as follows:
summary(d)
Min. 1st Qu. Median Mean 3rd Qu. Max.
-28.180 -2.112 4.113 3.589 9.993 29.806
sum(d > 0)
[1] 134
The mean and median are both about 4 > 0; 134 of the 200 observations are larger
than 0. Let $\delta$ be the population difference, then $H_0: \delta = 0$ against $H_a: \delta \ne 0$ is rejected with a P-value < 0.0005.
A stripchart of the data shows values of each of the 200 differences (with
a few not distinguishable at the resolution of this plot). Clearly, there are more positive differences than negative ones.
stripchart(d, pch="|")
abline(v = 0, col="green2") | paired T test: how to plot it? | Comment: Perhaps the $n = 200$ differences for the one important variable can be summarized as follows:
52,816 | paired T test: how to plot it? | Late to the party but I wanted to add a more recent reference from 2017 to this thread for those like me who are looking for inspiration on how to graph paired data. The reference advocates the use of the hybrid parallel line plot for plotting such data.
The reference is as follows:
Graphic Portrayal of Studies With Paired Data: A Tutorial by David L. Schriger, MD, MPH
and can be downloaded from this website:
https://els-jbs-prod-cdn.jbs.elsevierhealth.com/pb/assets/raw/Health%20Advance/journals/ymem/Schriger_graph2-1520020828083.pdf.
52,817 | paired T test: how to plot it? | In addition to the other excellent answer, it could be useful to have a graphical summary. That could be a histogram of the differences, but even more useful could be a scatterplot of means against differences: a Tukey mean-difference plot (also called a Bland-Altman plot). For discussion and examples see Bland-Altman (Tukey Mean-Difference) plot for differing scales. In the context of paired data we could call it a plot of change scores versus means.
Also see https://en.wikipedia.org/wiki/Bland%E2%80%93Altman_plot or search this site. There is a paper dedicated to plotting paired data: https://www.jstor.org/stable/2685323?seq=1#metadata_info_tab_contents
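A sketch of a Tukey mean-difference (Bland-Altman) plot for hypothetical paired vectors before and after (the limits-of-agreement lines are a common convention, not something required by the answer):
m <- (before + after) / 2
d <- after - before
plot(m, d, xlab = "Mean of pair", ylab = "Difference (after - before)")
abline(h = mean(d))                                    # mean difference
abline(h = mean(d) + c(-1.96, 1.96) * sd(d), lty = 2)  # approximate 95% limits of agreement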
52,818 | Wilcoxon signed-rank test null hypothesis statement | This answer has been revised after being accepted, as I did not adequately appreciate how Wilcoxon's critique of the sign test extends the null hypothesis. I address the difference between the revised and previous answer at the end.
The Wilcoxon sign rank test has these null and alternative hypotheses (see Snedecor, G. W. and Cochran, W. G. (1989) Statistical Methods, 8th edition. Iowa State University Press: Ames, IA.):
$\text{H}_{0}$: The paired differences are symmetrically distributed about zero.
$\text{H}_{\text{A}}$: The paired differences are either not symmetrically distributed, or not centered about zero, or both.
(The null distribution of the signed-rank statistic rapidly approaches a normal distribution as the sample size increases. See Bellera et al., 2010.)
Many introductory texts motivate the signed rank test as a test of median difference, or more rarely in my experience, mean difference, without mentioning that two fairly strict assumptions are required for this interpretation:
The distribution of both groups must have the same shape.
The variance of both groups must be equal.
If both these assumptions are true, then the signed rank test can validly be interpreted as having a null hypothesis of equal medians (or equal means).
References
Bellera, C. A., Julien, M., and Hanley, J. A. (2010). Normal approximations to the distributions of the Wilcoxon statistics: Accurate to what $n$? Graphical insights. Journal of Statistics Education, 18(2):1–17.
Wilcoxon, F. (1945). Individual comparisons by ranking methods. Biometrics Bulletin, 1(6):80–83.
Motivation for my revised answer
My previously accepted answer was that, in paired observations, the null and alternative hypotheses were:
$$H_0: P(X_A>X_B)=0.5; \qquad H_A: P(X_A>X_B) \ne 0.5$$
These are null and alternative hypotheses about relative stochastic size (sometimes zeroth-order stochastic dominance). In plain language the null hypothesis is that the probability of a random observation from group $A$ exceeding the paired observation drawn from group $B$ is one half (i.e. a random observation in group $A$ has just as much probability of being greater than, as being less than its paired observation in group $B$).
In plain language the alternative hypothesis is that this probability is not one half (i.e. one of the groups is more likely to be greater than the other than less than the other).
But the signed-rank null can be false due to asymmetry (because the magnitudes of the rank differences are larger in one direction than the other), so that even when $P(X_A>X_B)=0.5$ we reject the null if the magnitudes when $X_{A}>X_{B}$ are, say, much larger than when $X_{A}<X_{B}$. The magnitude of the rank differences is therefore why the signed-rank test must incorporate symmetry into the null, as in my revision. My thanks to @SalMangiafico for his patient tutelage.
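A simulated illustration of that asymmetry point (my own sketch; the particular distributions are arbitrary choices): the signed-rank test can reject even when the median difference is zero, provided the differences are asymmetric about zero.
set.seed(1)
d_sym  <- rnorm(200)              # symmetric about 0
d_asym <- rexp(200) - log(2)      # median 0, but right-skewed
wilcox.test(d_sym)$p.value        # typically large
wilcox.test(d_asym)$p.value       # typically small despite the zero median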
52,819 | Wilcoxon signed-rank test null hypothesis statement | In the general case, the Wilcoxon signed-rank test (WSR) doesn't address either the mean or the median, so I think I would avoid mentioning either of these when describing the test.
It is easy to think of cases where the median of the differences is 0 but the WSR rejects the null hypothesis. (One such case is given in my comments to the answer by @Alexis.) If you really want a test of the median of the differences, the sign test is the appropriate test.
How you want to describe the test, I think, depends on your audience.
The null hypothesis (of the two-sided case) is that the distribution of the differences is symmetric about 0. This will likely make sense to an audience if you are able to present a histogram of the actual differences and are able to say, "See? The differences are basically symmetric, centered around 2." Or, "See, they are not symmetric; there are many higher values on the positive side and fewer on the lower side, suggesting...."
But this may not translate well to what you are actually trying to point out, like that some intervention increased some measured thing.
Depending on the audience, it may be better to be vague about the test, e.g. "I performed a WSR test to see if there was an increase from Time 1 to Time 2." If you feel bad about that, you could add a footnote describing the test.
There are some visual ways to present the results that may make a lot of sense to the audience. For example, here is the same data:
As a scatter plot with a 1:1 line
As a series of bars of the differences
And as a histogram of the differences
It is easy to thi | Wilcoxon signed-rank test null hypothesis statement
In the general case, the Wilcoxon ranked-sum test (WRS) doesn't address either the mean or the median, so I think I would avoid mentioning either of these when describing the test.
It is easy to think of cases where the median of the differences is 0 but that the WRS rejects the null hypothesis. (One such case is given in my comments to the answer by @Alexis). If you really want a test of the median of the differences, the sign test is the appropriate test.
How you want to describe the test, I think, depends on your audience.
The null hypothesis (of the two-sided case) is that the rank of differences are symmetric about 0. This will likely make sense to an audience if you are able to present a histogram of the actual differences and are able to say, "See? the ranks of the differences are basically symmetric, centered around 2." Or, "See, they are not symmetric; there are many higher values on the positive side and fewer on the lower side, suggesting...."
But this may not translate well to what you are actually trying to point out, like that some intervention increased some measured thing.
Depending on the audience, it may be better to be vague about the test, e.g. "I performed WRS to see if there was an increase from Time 1 to Time 2." If you feel bad about that, you could add a footnote describing the test.
There are some visual ways to present the results that may make a lot sense to the audience. For example, here is the same data:
As a scatter plot with a 1:1 line
As a series of bars of the differences
And as a histogram of the differences | Wilcoxon signed-rank test null hypothesis statement
In the general case, the Wilcoxon ranked-sum test (WRS) doesn't address either the mean or the median, so I think I would avoid mentioning either of these when describing the test.
It is easy to thi |
52,820 | Confidence interval for mean of a uniform distribution | A student-$t$ confidence interval is quite robust to deviations from normality. If the data is uniformly distributed, the following simulation shows that the student-$t$ interval is slightly anti-conservative with a true confidence level around 0.947, for a nominal level of 0.95 and a sample size of $n=10$.
> a <- 0
> b <- 1
> mu <- (a+b)/2
> nsim <- 1e+5
> hits <- 0
> n <- 10
> for (i in 1:nsim) {
+ x <- runif(n, a, b)
+ ci <- confint(lm(x ~ 1))
+ if (ci[1] < mu & ci[2] > mu)
+ hits <- hits + 1
+ }
> hits/nsim
[1] 0.94738
In terms of coverage, you're thus not making any big mistake by relying on this interval.
You can, however, get an interval with exact coverage that is shorter in most samples by constructing the interval from the smallest and largest observations $X_{(1)}$ and $X_{(n)}$. These are sufficient statistics for $a$ and $b$. It follows that
$$
Z_1=\frac{\frac{X_{(1)} + X_{(n)}}2-\frac{a+b}2}{X_{(n)}-X_{(1)}} \tag{1}
$$
is a pivotal quantity since the distribution of $Z_1$ doesn't depend on $a$ and $b$, only $n$. This follows because we alternatively can write (1) as
$$
Z_1=\frac{U_{(1)} + U_{(n)}-1}{2(U_{(n)}-U_{(1)})} \tag{2}
$$
where $U_{(1)}$ and $U_{(n)}$ denote the corresponding order statistics associated with $n$ observations from a uniform distribution on (0,1).
The joint density of $U_{(1)}$ and $U_{(n)}$ is
$$
f_{U_{(1)},U_{(n)}}(u_1,u_n)=n(n-1)(u_n-u_1)^{n-2}.
$$
for $0<u_1<u_n<1$.
Letting
$$
Z_2 = U_{(n)}-U_{(1)},
$$
the joint density of $Z_1,Z_2$ is
\begin{align}
f_{Z_1,Z_2}(z_1,z_2)
&=f_{U_{(1)},U_{(n)}}(u_1(z_1,z_2),u_n(z_1,z_2))\left|\begin{matrix}\frac{\partial u_1}{\partial z_1} & \frac{\partial u_1}{\partial z_2} \\ \frac{\partial u_n}{\partial z_1} & \frac{\partial u_n}{\partial z_2}\end{matrix}\right|
\\&=n(n-1)z_2^{n-2}\left|\begin{matrix}z_2 & z_1 -\frac12 \\ z_2 & z_1 +\frac12\end{matrix}\right|
\\&=n(n-1)z_2^{n-1},
\end{align}
for $-\infty<z_1<\infty$ and $0<z_2<\frac1{2|z_1|+1}$. Elsewhere the density is zero. Hence, the density of the pivot $Z_1$ is
$$
f_{Z_1}(z_1)=\int_0^{\frac1{2|z_1|+1}}f_{Z_1,Z_2}(z_1,z_2)dz_2=\frac{n-1}{(2|z_1|+1)^n}.
$$
Further calculation shows that the upper $\alpha/2$-quantile of this symmetric distribution is given by
$$
q_{\alpha/2} = \frac12\left( \alpha^{-\frac1{n-1}} - 1 \right).
$$
Inverting the double inequality involving (1), an exact $(1-\alpha)$ confidence interval for the mean $(a+b)/2$ is thus
$$
\frac{X_{(1)}+X_{(n)}}2 \pm \frac{X_{(n)}-X_{(1)}}2\left( \alpha^{-\frac1{n-1}} - 1 \right).
$$
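For convenience, the exact interval can be wrapped in a small R function (a sketch; the function name is mine):
unif.mean.ci <- function(x, alpha = 0.05) {
  mid  <- (min(x) + max(x))/2
  half <- (max(x) - min(x))/2*(alpha^(-1/(length(x) - 1)) - 1)
  c(lower = mid - half, upper = mid + half)
}
unif.mean.ci(runif(10, 0, 1))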
Further simulations show that the exact interval is indeed considerably shorter on average:
> l1 <- l2 <- numeric(nsim)
> hits2 <- 0
> n <- 10
> pm <- c(-1,1)
> for (i in 1:nsim) {
+ x <- runif(n, a, b)
+ ci <- confint(lm(x ~ 1))
+ l1[i] <- ci[2]-ci[1]
+ ci2 <- (min(x)+max(x))/2 + pm*(max(x)-min(x))/2*(1/0.05^(1/(n-1))-1)
+ l2[i] <- ci2[2]-ci2[1]
+ if (ci[1] < mu & ci[2] > mu)
+ hits <- hits + 1
+ if (ci2[1] < mu & ci2[2] > mu)
+ hits2 <- hits2 + 1
+ }
> hits2/nsim
[1] 0.94917
> mean(l1)
[1] 0.407362
> mean(l2)
[1] 0.3230234 | Confidence interval for mean of a uniform distribution | A student-$t$ confidence interval is quite robust to deviations from normality. If the data is uniformly distributed, the following simulation shows that the student-$t$ interval is slightly anti-con | Confidence interval for mean of a uniform distribution
A student-$t$ confidence interval is quite robust to deviations from normality. If the data is uniformly distributed, the following simulation shows that the student-$t$ interval is slightly anti-conservative with a true confidence level around 0.947, for a nominal level of 0.95 and a sample size of $n=10$.
> a <- 0
> b <- 1
> mu <- (a+b)/2
> nsim <- 1e+5
> hits <- 0
> n <- 10
> for (i in 1:nsim) {
+ x <- runif(n, a, b)
+ ci <- confint(lm(x ~ 1))
+ if (ci[1] < mu & ci[2] > mu)
+ hits <- hits + 1
+ }
> hits/nsim
[1] 0.94738
In terms of coverage, you're thus not doing any big mistake by relying on this interval.
You can get a mostly shorter interval with exact coverage, however, by constructing the interval from the smallest and largest observations $X_{(1)}$ and $X_{(n)}$. These are sufficient statistics for $a$ and $b$. It follows that
$$
Z_1=\frac{\frac{X_{(1)} + X_{(n)}}2-\frac{a+b}2}{X_{(n)}-X_{(1)}} \tag{1}
$$
is a pivotal quantity since the distribution of $Z_1$ doesn't depend on $a$ and $b$, only $n$. This follows because we alternatively can write (1) as
$$
Z_1=\frac{U_{(1)} + U_{(n)}-1}{2(U_{(n)}-U_{(1)})} \tag{2}
$$
where $U_{(1)}$ and $U_{(n)}$ denote the corresponding order statistics associated with $n$ observations from a uniform distribution on (0,1).
The joint density of $U_{(1)}$ and $U_{(n)}$ is
$$
f_{U_{(1)},U_{(n)}}(u_1,u_n)=n(n-1)(u_n-u_1)^{n-2}.
$$
for $0<u_1<u_n<1$.
Letting
$$
Z_2 = U_{(n)}-U_{(1)},
$$
the joint density of $Z_1,Z_2$ is
\begin{align}
f_{Z_1,Z_2}(z_1,z_2)
&=f_{U_{(1)},U_{(n)}}(u_1(z_1,z_2),u_n(z_1,z_2))\left|\begin{matrix}\frac{\partial u_1}{\partial z_1} & \frac{\partial u_1}{\partial z_2} \\ \frac{\partial u_n}{\partial z_1} & \frac{\partial u_n}{\partial z_2}\end{matrix}\right|
\\&=n(n-1)z_2^{n-2}\left|\begin{matrix}z_2 & z_1 -\frac12 \\ z_2 & z_1 +\frac12\end{matrix}\right|
\\&=n(n-1)z_2^{n-1},
\end{align}
for $-\infty<z_1<\infty$ and $0<z_2<\frac1{2|z_1|+1}$. Elsewhere the density is zero. Hence, the density of the pivot $Z_1$ is
$$
f_{Z_1}(z_1)=\int_0^{\frac1{2|z_1|+1}}f_{Z_1,Z_2}(z_1,z_2)dz_2=\frac{n-1}{(2|z_1|+1)^n}.
$$
Further calculations shows that the upper $\alpha/2$-quantile of this symmetric distribution is given by
$$
q_{\alpha/2} = \frac12\left( \alpha^{-\frac1{n-1}} - 1 \right).
$$
Inverting the double inequality involving (1), an exact $(1-\alpha)$ confidence interval for the mean $(a+b)/2$ is thus
$$
\frac{X_{(1)}+X_{(n)}}2 \pm \frac{X_{(n)}-X_{(1)}}2\left( \alpha^{-\frac1{n-1}} - 1 \right).
$$
Further simulations shows that the length of the exact interval indeed is considerably shorter:
> l1 <- l2 <- numeric(nsim)
> hits2 <- 0
> n <- 10
> pm <- c(-1,1)
> for (i in 1:nsim) {
+ x <- runif(n, a, b)
+ ci <- confint(lm(x ~ 1))
+ l1[i] <- ci[2]-ci[1]
+ ci2 <- (min(x)+max(x))/2 + pm*(max(x)-min(x))/2*(1/0.05^(1/(n-1))-1)
+ l2[i] <- ci2[2]-ci2[1]
+ if (ci[1] < mu & ci[2] > mu)
+ hits <- hits + 1
+ if (ci2[1] < mu & ci2[2] > mu)
+ hits2 <- hits2 + 1
+ }
> hits2/nsim
[1] 0.94917
> mean(l1)
[1] 0.407362
> mean(l2)
[1] 0.3230234 | Confidence interval for mean of a uniform distribution
A student-$t$ confidence interval is quite robust to deviations from normality. If the data is uniformly distributed, the following simulation shows that the student-$t$ interval is slightly anti-con |
52,821 | Adding an observation level random term messes up residuals vs fitted plot. Why? | Thanks for updating your post, Charly. I played with some over-dispersed Poisson data to see the impact of adding an observation level effect in the glmer model on the plot of residual versus fitted values. Here is the R code:
# generate data like here: https://rpubs.com/INBOstats/OLRE
set.seed(324)
n.i <- 10
n.j <- 10
n.k <- 10
beta.0 <- 1
beta.1 <- 0.3
sigma.b <- 0.5
theta <- 5
dataset <- expand.grid(
X = seq_len(n.i),
b = seq_len(n.j),
Replicate = seq_len(n.k)
)
rf.b <- rnorm(n.j, mean = 0, sd = sigma.b)
dataset$eta <- beta.0 + beta.1 * dataset$X + rf.b[dataset$b]
dataset$mu <- exp(dataset$eta)
dataset$Y <- rnbinom(nrow(dataset), mu = dataset$mu, size = theta)
dataset$OLRE <- seq_len(nrow(dataset))
require(lme4)
m.1 <- glmer(Y ~ X + (1 | b), family=poisson(link="log"), data=dataset)
m.2 <- glmer(Y ~ X + (1 | b) + (1 | OLRE), family=poisson(link="log"),
data=dataset)
Note that model m.2 includes an observation level random effect to account for over-dispersion.
To diagnose the presence of over-dispersion in model m.1, we can use the command:
# check for over-dispersion:
# values greater than 1.4 indicate over-dispersion
require(blmeco)
dispersion_glmer(m.1)
The value returned by dispersion_glmer is 2.204209, which is larger than the cut-off of 1.4 where we would start to suspect the presence of over-dispersion.
When applying dispersion_glmer to model m.2, we get a value of 1.023656:
dispersion_glmer(m.2)
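For reference (my addition; this is not necessarily the exact statistic that dispersion_glmer reports), a related hand check is the sum of squared Pearson residuals divided by the residual degrees of freedom, which should be near 1 for a well-specified Poisson model:
sum(residuals(m.1, type = "pearson")^2) / df.residual(m.1)
sum(residuals(m.2, type = "pearson")^2) / df.residual(m.2)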
Here is the R code for the plot of residuals (Pearson or deviance) versus fitted values:
par(mfrow=c(1,2))
plot(residuals(m.1, type="pearson") ~ fitted(m.1), col="darkgrey")
abline(h=0, col="red")
plot(residuals(m.2, type="pearson") ~ fitted(m.2), col="darkgrey")
abline(h=0, col="red")
par(mfrow=c(1,2))
plot(residuals(m.1, type="deviance") ~ fitted(m.1), col="darkgrey")
abline(h=0, col="red")
plot(residuals(m.2, type="deviance") ~ fitted(m.2), col="darkgrey")
abline(h=0, col="red")
As you can see, the Pearson residuals plot for the model m.2 (which includes an observation level random effect) looks horrendous compared to the plot for model m.1.
I am not showing the deviance residuals plot for m.2 as it looks about the same (that is, horrendous).
Here is the plot of fitted values versus observed response values for models m.1 and m.2:
par(mfrow=c(1,2))
plot(fitted(m.1) ~ dataset$Y, col="darkgrey",
xlim=c(0, 250), ylim=c(0, 250),
xlab="Y (response)", ylab="Fitted Values")
abline(a=0, b=1, col="red")
plot(fitted(m.2) ~ dataset$Y, col="darkgrey",
xlim=c(0, 250), ylim=c(0, 250),
xlab="Y (response)", ylab="Fitted Values")
abline(a=0, b=1, col="red")
The plot of fitted values versus actual response values seems to look better for model m.2.
We should check the summary corresponding to the two models:
summary(m.1)
summary(m.2)
As argued in https://rpubs.com/INBOstats/OLRE, large discrepancies between the fixed effect coefficients and especially the random effects variance for b would suggest that something may be off. (The extent of overdispersion present in the initial model would drive the extent of these discrepancies.)
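Since the data were simulated from a negative binomial distribution, another natural benchmark (my addition, not part of the original analysis) is a negative binomial mixed model:
m.3 <- glmer.nb(Y ~ X + (1 | b), data = dataset)   # negative binomial alternative to the OLRE model
summary(m.3)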
Let's look at some diagnostic plots for the two models obtained with the Dharma package:
require(DHARMa)
fittedModel <- m.1
simulationOutput <- simulateResiduals(fittedModel = fittedModel)
plot(simulationOutput)
#----
fittedModel <- m.2
simulationOutput <- simulateResiduals(fittedModel = fittedModel)
plot(simulationOutput)
The diagnostic plots for model m.1 (especially the left panel) clearly show that overdispersion is an issue.
The diagnostic plot for model m.2 shows that overdispersion is no longer an issue.
See https://cran.r-project.org/web/packages/DHARMa/vignettes/DHARMa.html for more details on these types of plots.
Finally, let's do a posterior predictive check for the two models (i.e., plotting the fitted values obtained across simulated data sets constructed from each model over a histogram of the real response values Y), as explained at http://www.flutterbys.com.au/stats/ws/ws12.html:
range(dataset$Y) # Actual response values Y range from 0 to 247
set.seed(1234567)
glmer.sim1 <- simulate(m.1, nsim = 1000)
glmer.sim2 <- simulate(m.2, nsim = 1000)
out <- matrix(NA, ncol = 2, nrow = 251)
cnt <- 0:250
for (i in 1:length(cnt)) {
  # for each candidate count, average over the 1000 simulated data sets the number
  # of simulated observations equal to that count, separately for each model
  out[i, 1] <- mean(sapply(glmer.sim1, function(x) sum(x == cnt[i])))
  out[i, 2] <- mean(sapply(glmer.sim2, function(x) sum(x == cnt[i])))
}
plot(table(dataset$Y), ylab = "Frequency", xlab = "Y", lwd = 2,
col="darkgrey")
lines(x = 0:250, y = out[, 1], lwd = 2, lty = 2, col = "red")
lines(x = 0:250, y = out[, 2], lwd = 2, lty = 2, col = "blue")
The resulting plot shows that both models are doing a good job at approximating the distribution of Y.
Of course, there are other predictive checks one could look at, including the centipede plot, which would show where the model with observation level random effect would fail (e.g., the model would tend to under-predict low values of Y): http://newprairiepress.org/cgi/viewcontent.cgi?article=1005&context=agstatconference.
This particular example shows that it is possible for the addition of an observation level random effect to worsen the appearance of the plot of residuals versus fitted values, while producing other diagnostic plots which look fine. I wonder if other people on this site may be able to add further insights into how one should proceed in this situation, other than to report what happens with each diagnostic plot when the correction for over-dispersion is used. | Adding an observation level random term messes up residuals vs fitted plot. Why? | Thanks for updating your post, Charly. I played with some over-dispersed Poisson data to see the impact of adding an observation level effect in the glmer model on the plot of residual versus fitted v | Adding an observation level random term messes up residuals vs fitted plot. Why?
Thanks for updating your post, Charly. I played with some over-dispersed Poisson data to see the impact of adding an observation level effect in the glmer model on the plot of residual versus fitted values. Here is the R code:
# generate data like here: https://rpubs.com/INBOstats/OLRE
set.seed(324)
n.i <- 10
n.j <- 10
n.k <- 10
beta.0 <- 1
beta.1 <- 0.3
sigma.b <- 0.5
theta <- 5
dataset <- expand.grid(
X = seq_len(n.i),
b = seq_len(n.j),
Replicate = seq_len(n.k)
)
rf.b <- rnorm(n.j, mean = 0, sd = sigma.b)
dataset$eta <- beta.0 + beta.1 * dataset$X + rf.b[dataset$b]
dataset$mu <- exp(dataset$eta)
dataset$Y <- rnbinom(nrow(dataset), mu = dataset$mu, size = theta)
dataset$OLRE <- seq_len(nrow(dataset))
require(lme4)
m.1 <- glmer(Y ~ X + (1 | b), family=poisson(link="log"), data=dataset)
m.2 <- glmer(Y ~ X + (1 | b) + (1 | OLRE), family=poisson(link="log"),
data=dataset)
Note that model m.2 includes an observation level random effect to account for over-dispersion.
To diagnose the presence of over-dispersion in model m.1, we can use the command:
# check for over-dispersion:
# values greater than 1.4 indicate over-dispersion
require(blmeco)
dispersion_glmer(m.1)
The value returned by dispersion_glmer is 2.204209, which is larger than the cut-off of 1.4 where we would start to suspect the presence of over-dispersion.
When applying dispersion_glmer to model m.2, we get a value of 1.023656:
dispersion_glmer(m.2)
Here is the R code for the plot of residuals (Pearson or deviance) versus fitted values:
par(mfrow=c(1,2))
plot(residuals(m.1, type="pearson") ~ fitted(m.1), col="darkgrey")
abline(h=0, col="red")
plot(residuals(m.2, type="pearson") ~ fitted(m.2), col="darkgrey")
abline(h=0, col="red")
par(mfrow=c(1,2))
plot(residuals(m.1, type="deviance") ~ fitted(m.1), col="darkgrey")
abline(h=0, col="red")
plot(residuals(m.2, type="deviance") ~ fitted(m.2), col="darkgrey")
abline(h=0, col="red")
As you can see, the Pearson residuals plot for the model m.2 (which includes an observation level random effect) looks horrendous compared to the plot for model m.1.
I am not showing the deviance residuals plot for m.2 as it looks about the same (that is, horrendous).
Here is the plot of fitted values versus observed response values for models m.1 and m.2:
par(mfrow=c(1,2))
plot(fitted(m.1) ~ dataset$Y, col="darkgrey",
xlim=c(0, 250), ylim=c(0, 250),
xlab="Y (response)", ylab="Fitted Values")
abline(a=0, b=1, col="red")
plot(fitted(m.2) ~ dataset$Y, col="darkgrey",
xlim=c(0, 250), ylim=c(0, 250),
xlab="Y (response)", ylab="Fitted Values")
abline(a=0, b=1, col="red")
The plot of fitted values versus actual response values seems to look better for model m.2.
We should check the summary corresponding to the two models:
summary(m.1)
summary(m.2)
As argued in https://rpubs.com/INBOstats/OLRE, large discrepancies between the fixed effect coefficients and especially the random effects variance for b would suggest that something may be off. (The extent of overdispersion present in the initial model would drive the extent of these discrepancies.)
Let's look at some diagnostic plots for the two models obtained with the Dharma package:
require(DHARMa)
fittedModel <- m.1
simulationOutput <- simulateResiduals(fittedModel = fittedModel)
plot(simulationOutput)
#----
fittedModel <- m.2
simulationOutput <- simulateResiduals(fittedModel = fittedModel)
plot(simulationOutput)
The diagnostic plots for model m.1 (especially the left panel) clearly shows overdispersion is an issue.
The diagnostic plot for model m.2 shows overdispersion is no longer an issue.
See https://cran.r-project.org/web/packages/DHARMa/vignettes/DHARMa.html for more details on these types of plots.
Finally, let's do a posterior predictive check for the two models (i.e., plotting the fitted values obtained across simulated data sets constructed from each model over a histogram of the real response values Y), as explained at http://www.flutterbys.com.au/stats/ws/ws12.html:
range(dataset$Y) # Actual response values Y range from 0 to 247
set.seed(1234567)
glmer.sim1 <- simulate(m.1, nsim = 1000)
glmer.sim2 <- simulate(m.2, nsim = 1000)
out <- matrix(NA, ncol = 2, nrow = 251)
cnt <- 0:250
for (i in 1:length(cnt)) {
for (j in 1:2) {
eval(parse(text = paste("out[i,", j, "] <-
mean(sapply(glmer.sim", j,",\nFUN = function(x) {\nsum(x == cnt
[i]) }))", sep = "")))
}
}
plot(table(dataset$Y), ylab = "Frequency", xlab = "Y", lwd = 2,
col="darkgrey")
lines(x = 0:250, y = out[, 1], lwd = 2, lty = 2, col = "red")
lines(x = 0:250, y = out[, 2], lwd = 2, lty = 2, col = "blue")
The resulting plot shows that both models are doing a good job at approximating the distribution of Y.
Of course, there are other predictive checks one could look at, including the centipede plot, which would show where the model with observation level random effect would fail (e.g., the model would tend to under-predict low values of Y): http://newprairiepress.org/cgi/viewcontent.cgi?article=1005&context=agstatconference.
This particular example shows that it is possible for the addition of an observation level random effect to worsen the appearance of the plot of residuals versus fitted values, while producing other diagnostic plots which look fine. I wonder if other people on this site may be able to add further insights into how one should proceed in this situation, other than to report what happens with each diagnostic plot when the correction for over-dispersion is used. | Adding an observation level random term messes up residuals vs fitted plot. Why?
Thanks for updating your post, Charly. I played with some over-dispersed Poisson data to see the impact of adding an observation level effect in the glmer model on the plot of residual versus fitted v |
52,822 | Coding as a categorical or continuous variable? | Given these categories as data beyond my control, I would code
1 No
2 Sometimes
3 Yes
4 Don't know
on these grounds:
Sometimes sounds weaker than Yes, which is more emphatic.
Don't know doesn't usually belong in an ordered sequence.
Then some analyses will call for ignoring 4 and some won't. All depends on the question being asked: for example, are you describing the data or modelling?
But I think it's wrong to call "Don't know" missing. We all answer questionnaires too. If I am allowed to say "Don't know" as one of various possible answers, that is not at all equivalent to my refusing or declining to answer the question. As an occasional survey participant as well as a statistically minded person I object to data being analysed like that.
There is no case for calling this variable continuous. It's discrete. 1 to 3 alone is ordered, 1 to 4 is just nominal or unordered.
A context of logistic regression doesn't change how you think about the variable, unless it is being considered as a response and you are choosing between ordinal and multinomial logistic.
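If it helps to make this concrete, one possible coding in R (my sketch, with made-up responses) keeps 1 to 3 as an ordered factor and tracks "Don't know" separately:
x <- c(1, 3, 2, 4, 1, 3)   # hypothetical responses coded as above
resp <- factor(x, levels = 1:4, labels = c("No", "Sometimes", "Yes", "Don't know"))
resp3 <- factor(resp, levels = c("No", "Sometimes", "Yes"), ordered = TRUE)  # "Don't know" becomes NA here
dont.know <- resp == "Don't know"   # keep the "Don't know" information as its own indicator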
EDIT
Thinking more about this, it's hard to see that "Sometimes" and "Yes" are mutually exclusive! What are the questions? Do you ever eat meat, drink alcohol, smoke tobacco?
There is a separate problem if people were presented with these answers in this order:
1- No
2- Do not know
3- Sometimes
4- Yes
Then it's entirely possible that, rationally or not, some people might regard that as an ordered scale. For example, "Do you approve of the behaviour of a prominent politician?". There is something of a case for saying that "Don't know" is in between the extremes, as in "I don't know enough or don't want to be judgmental about the topic". But then people are being expected to know the difference between "Don't know" and "Sometimes". That can happen: I had no idea what was involved in a minor medical condition until it happened to me and was named and explained.
Without qualitative evidence about how the questionnaire was received or understood, it's very hard to do more than speculate. | Coding as a categorical or continuous variable? | Given these categories as data beyond my control, I would code
1 No
2 Sometimes
3 Yes
4 Don't know
on these grounds:
Sometimes sounds weaker than Yes, which is more emphatic.
Don't know doesn | Coding as a categorical or continuous variable?
Given these categories as data beyond my control, I would code
1 No
2 Sometimes
3 Yes
4 Don't know
on these grounds:
Sometimes sounds weaker than Yes, which is more emphatic.
Don't know doesn't usually belong in an ordered sequence.
Then some analyses will call for ignoring 4 and some don't. All depends on the question being asked: for example, are you describing the data or modelling?
But I think it's wrong to call "Don't know" missing. We all answer questionnaires too. If I am allowed to say "Don't know" as one of various possible answers, that is not at all equivalent to my refusing or declining to answer the question. As an occasional survey participant as well as a statistically minded person I object to data being analysed like that.
There is no case for calling this variable continuous. It's discrete. 1 to 3 alone is ordered, 1 to 4 is just nominal or unordered.
A context of logistic regression doesn't change how you think about the variable, unless it is being considered as a response and you are choosing between ordinal and multinomial logistic.
EDIT
Thinking more about this, it's hard to see that "Sometimes" and "Yes" are mutually exclusive! What are the questions? Do you ever eat meat, drink alcohol, smoke tobacco?
There is a separate problem if people were presented with these answers in this order:
1- No
2- Do not know
3- Sometimes
4- Yes
Then it's entirely possible that, rationally or not, some people might regard that as an ordered scale. For example, "Do you approve of the behaviour of prominent politician?". There is something of a case for saying that "Don't know" is in between the extremes, as in "I don't know enough or don't want to be judgmental about the topic". But then people are being expected to know the difference between "Don't know" and "Sometimes". That can happen: I had no idea what was involved in a minor medical condition until it happened to me and was named and explained.
Without qualitative evidence about how the questionnaire was received or understood, it's very hard to do more than speculate. | Coding as a categorical or continuous variable?
Given these categories as data beyond my control, I would code
1 No
2 Sometimes
3 Yes
4 Don't know
on these grounds:
Sometimes sounds weaker than Yes, which is more emphatic.
Don't know doesn |
52,823 | Coding as a categorical or continuous variable? | (Assuming for simplicity that we're treating "Do not know" as missing:) The three simple approaches are:
Code it as a categorical covariate. If, for instance, you use "No" as the reference level, then you get two coefficients from your regression: a) the difference in log-odds of the outcome (all else being equal, roughly speaking) between a respondent who answers "Sometimes" and one who answers "No"; and b) the difference in log-odds of the outcome between a respondent who answers "Yes" and one who answers "No". Disadvantage: you don't directly get a value for answering "Yes" versus "Sometimes".
Use it as a continuous variable. Disadvantage, this assumes that the difference between "No" and "Sometimes" is the same magnitude as the difference between "Sometimes" and "Yes".
I guess you could decide that all you care about is a binary choice -- "Yes" versus "No" or "Sometimes", and re-code it as a boolean variable. But that throws out information.
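As a quick sketch of the first two options in R (made-up data; the variable names are placeholders):
set.seed(1)
d <- data.frame(y = rbinom(100, 1, 0.4),
                x = sample(c("No", "Sometimes", "Yes"), 100, replace = TRUE))
d$x <- factor(d$x, levels = c("No", "Sometimes", "Yes"))
fit.cat <- glm(y ~ x, family = binomial, data = d)               # option 1: two dummy coefficients
fit.num <- glm(y ~ as.numeric(x), family = binomial, data = d)   # option 2: equally spaced scores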
There are more complex choices as well. This answer has some ideas; and this one, and the links it points to, give you many more. | Coding as a categorical or continuous variable? | (Assuming for simplicity that we're treating "Do not know" as missing:) The three simple approaches are:
Code it as a categorical covariate. If, for instance, you use "No" as the reference level, the | Coding as a categorical or continuous variable?
(Assuming for simplicity that we're treating "Do not know" as missing:) The three simple approaches are:
Code it as a categorical covariate. If, for instance, you use "No" as the reference level, then you get two coefficients from your regression: a) the log-odds of the outcome (all else being equal, roughly speaking) for a respondent who answers "Sometimes" versus one who answers "No"; and b) the log-odds of the outcome for a respondent who answers "Yes" versus one who answers "No". Disadvantage, you don't get a value for answering "Yes" versus "Sometimes".
Use it as a continuous variable. Disadvantage, this assumes that the difference between "No" and "Sometimes" is the same magnitude as the difference between "Sometimes" and "Yes".
I guess you could decide that all you care about is a binary choice -- "Yes" versus "No" or "Sometimes", and re-code it as a boolean variable. But that throws out information.
There are more complex choices as well. This answer has some ideas; and this one, and the links it points to, give you many more. | Coding as a categorical or continuous variable?
(Assuming for simplicity that we're treating "Do not know" as missing:) The three simple approaches are:
Code it as a categorical covariate. If, for instance, you use "No" as the reference level, the |
52,824 | hurdle model with non-zero gaussian distribution in R | If you want to model data that essentially follow a normal distribution for the positive values but have a point mass at zero, you could start with a Gaussian model censored at zero. In the econometric literature this is known as the tobit model.
The next step would be to fit a two-part model with (1) a binary hurdle for zero vs. non-zero (e.g., using a probit link, corresponding to an underlying Gaussian distribution) and (2) a zero-truncated Gaussian model for the positive observations. In the econometrics literature this is known as the Cragg model. The tobit model is nested in the Cragg model, namely if the scaled coefficients from both parts coincide. See also: Is a "hurdle model" really one model? Or just two separate, sequential models?
A potential caveat is that while the usual (un-censored and un-truncated) Gaussian regression is consistent under heteroscedasticity, the same does not hold for the censored and truncated versions. Hence, taking heteroscedasticity into account might matter.
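In R, the censored and two-part specifications might be fitted along the following lines (a sketch only: d, y, x1 and x2 are placeholders, and the calls assume the crch interface described in the reference below):
library(crch)
m.cens  <- crch(y ~ x1 + x2, data = d, left = 0, dist = "gaussian")              # tobit-type model
m.bin   <- glm(I(y > 0) ~ x1 + x2, family = binomial(link = "probit"), data = d)  # hurdle part
m.trunc <- crch(y ~ x1 + x2, data = subset(d, y > 0),
                left = 0, dist = "gaussian", truncated = TRUE)                    # zero-truncated part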
An R package that implements heteroscedastic censored or truncated models is crch at https://CRAN.R-project.org/package=crch. A paper introducing the package along with a worked example that compares the censored model with the two-part hurdle model is Messner, Mayr, Zeileis (2016), "Heteroscedastic Censored and Truncated Regression with crch", "The R Journal", 8(1), 173-181. https://journal.R-project.org/archive/2016/RJ-2016-012/ | hurdle model with non-zero gaussian distribution in R | If you want to model data that essentially follow a normal distribution for the positive values but have a point mass at zero, you could start with a Gaussian model censored at zero. In the econometri | hurdle model with non-zero gaussian distribution in R
If you want to model data that essentially follow a normal distribution for the positive values but have a point mass at zero, you could start with a Gaussian model censored at zero. In the econometric literature this is known as the tobit model.
The next step would be to fit a two-part model with (1) a binary hurdle for zero vs. non-zero (e.g., using a probit link, corresponding to an underlying Gaussian distribution) and (2) a zero-truncated Gaussian model for the positive observations. In the econometrics literature this is known as the Cragg model. The tobit model is nested in the Cragg model, namely if the scaled coefficients from both parts coincide. See also: Is a "hurdle model" really one model? Or just two separate, sequential models?
A potential caveat is that while the usual (un-censored and un-truncated) Gaussian regression is consistent under heteroscedasticity, the same does not hold for the censored and truncated versions. Hence, taking heteroscedasticity into account might matter.
An R package that implements heteroscedastic censored or truncated models is crch at https://CRAN.R-project.org/package=crch. A paper introducing the package along with a worked example that compares the censored model with the two-part hurdle model is Messner, Mayr, Zeileis (2016), "Heteroscedastic Censored and Truncated Regression with crch", "The R Journal", 8(1), 173-181. https://journal.R-project.org/archive/2016/RJ-2016-012/ | hurdle model with non-zero gaussian distribution in R
If you want to model data that essentially follow a normal distribution for the positive values but have a point mass at zero, you could start with a Gaussian model censored at zero. In the econometri |
52,825 | ML: sampling imbalanced dataset leads to selection bias | By sampling we make the algorithm think that the prior probabilities
of the classes are the same. This seems to affect the predictions as
well and therefore the probabilities cannot be interpreted as
probabilities anymore and have to be recalibrated.
You seem mostly correct; the phenomenon is called prior probability shift. It is the case where $P_{train}(x|y) = P_{test}(x|y)$ but $P_{train}(y) \neq P_{test}(y)$. I'm not sure what you mean by 'probabilities not remaining the same and having to be recalibrated'.
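(As an aside that is my addition, not part of the original answer: if a model is trained on artificially balanced data, its predicted probabilities can be mapped back to the original class prior with a standard Bayes-rule adjustment, which is what "recalibration" usually amounts to here.)
adjust.prior <- function(p.bal, prior.true, prior.train) {
  num <- p.bal * prior.true / prior.train
  den <- num + (1 - p.bal) * (1 - prior.true) / (1 - prior.train)
  num / den   # P(y = 1 | x) under the true prior
}
adjust.prior(0.7, prior.true = 0.05, prior.train = 0.5)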
Let's assume we have a two-class classification problem with an imbalanced dataset that we oversample to get the same class distribution. We run decision trees on it. The test set is imbalanced, but does it really matter?
Yes, it matters and that is the cause of the problem.
Each sample of the test set just goes through the nodes of the decision trees and it never checks if the sample belongs to the majority or minority class.
Correct. The problem is not with how decision tree predicts a given sample point. The issue lies with the way it was trained and with the characterization of the feature space for each class.
Oversampling the minority class is a way to deal with the imbalanced class problem but it is not ideal. When the minority class is over-sampled by increasing amounts, the effect is to identify similar but more specific regions in the feature space as the decision region for the minority class. The decision tree would predict a given point in the way that you mentioned but if its decision regions are not accurate based on the way it was trained then it won't predict well.
So, why does the prior probability of classes affect the prediction of a sample?
Prior probability shift is a particular type of dataset shift. There's a fair amount of work in the literature on this topic, and both generative and discriminative models suffer from the problem. The general idea is that whether you are trying to train a discriminative model $P(y|x) = \frac{P(x|y)P(y)}{P(x)}$ or a generative model $P(x,y) = P(x|y)P(y)$, a change in $P(y)$ affects $P(y|x)$ and $P(x,y)$. If instead $P(x)$ changes between the training and test datasets, the phenomenon is called covariate shift. You can learn more about dataset shift here; it was probably one of the first compilations of the work done on dataset shift.
On a side note, you can refer to this paper on SMOTE. It addresses the oversampling issue with decision tree and provides a better way to rebalance the dataset by creating synthetic points of the minority class. It is widely used and I believe various implementations of this method already exists. | ML: sampling imbalanced dataset leads to selection bias | By sampling we make the algorithm think that the prior probabilities
of the classes are the same. This seems to affect the predictions as
well and therefore the probabilities cannot be interpreted | ML: sampling imbalanced dataset leads to selection bias
By sampling we make the algorithm think that the prior probabilities
of the classes are the same. This seems to affect the predictions as
well and therefore the probabilities cannot be interpreted as
probabilities anymore and have to be recalibrated.
You seem mostly correct, the phenomenon is called prior probability shift. It is the case where $P_{train}(x|y) = P_{test}(x|y)$ but $P_{train}(y) \neq P_{test}(y)$. I'm not sure about what you mean by 'probabilities not remaining the same and have to be recalibrated'.
Lets assume we have a two class classification problem, imbalanced
datasets that we oversample to get the same class distribution. We run
decision trees on it. The test set in imbalanced but does it really
matter?
Yes, it matters and that is the cause of the problem.
Each sample of the test set just goes through the nodes of the decision trees and it never checks if the sample belongs to the majority or minority class.
Correct. The problem is not with how decision tree predicts a given sample point. The issue lies with the way it was trained and with the characterization of the feature space for each class.
Oversampling the minority class is a way to deal with the imbalanced class problem but it is not ideal. When the minority class is over-sampled by increasing amounts, the effect is to identify similar but more specific regions in the feature space as the decision region for the minority class. The decision tree would predict a given point in the way that you mentioned but if its decision regions are not accurate based on the way it was trained then it won't predict well.
So, why does the prior probability of classes affect the prediction of a sample?
Prior probability shift is a particular type of dataset shift. There's a fair amount of work in the literature on this topic and whether it's a generative or discriminative model, both of them suffer from the problem. The general idea is whether you are trying to train a discriminative model $P(y|x) = \frac{P(x|y)P(y)}{P(x)}$ or a generative model $P(x,y) = P(x|y)P(y)$, the change in $P(y)$ affects $P(y|x)$ and $P(x,y)$. If the $P(x)$ changes in train and test dataset, then the phenomenon is called covariate shift. You can learn more about dataset shift here, it was probably one of the first compilation of the work done on dataset shift.
On a side note, you can refer to this paper on SMOTE. It addresses the oversampling issue with decision tree and provides a better way to rebalance the dataset by creating synthetic points of the minority class. It is widely used and I believe various implementations of this method already exists. | ML: sampling imbalanced dataset leads to selection bias
By sampling we make the algorithm think that the prior probabilities
of the classes are the same. This seems to affect the predictions as
well and therefore the probabilities cannot be interpreted |
52,826 | Is excluding cases with missing data fine for a predictive (not explanatory/descriptive) model? | The answer is - it depends.
The issue with missing data and leaving it out of your model completely is that it might affect the representativeness of your sampled data.
The kind of deletion you are referring to in your question is known as listwise deletion. This is where you exclude an observation completely because for at least one of your observed variables you do not have any data. Leaving out this data point might introduce bias into your model.
I will illustrate with an example: One common way this occurs is in surveys and is known as a non-response bias. Participants might not answer certain sensitive questions such as "Are you HIV positive?" etc. So in your model if you leave out all the unanswered questions you might think that there is a low incidence of HIV in your sample but this might simply be wrong.
In essence dealing with missing data comes down to understanding why the data point is missing at all: Is it random or is it systematic?
There are ways to find out if the missing data is systematic or not, and the approach is similar to @David Dale's suggestion. You introduce new indicator variables that mark whether a data point is missing or not. We can then compare the mean of y for the 1’s and 0’s (missing and non-missing). If there is a significant difference in means, we have evidence that the data is not missing at random. In other words, there’s a pattern to the missingness. This is the first type of missing data pattern, called missing not at random (MNAR).
If there is no significant difference in means between our primary variable and y, then we have evidence that the data is missing at random (MAR).
And finally, if there is no significant difference in means between all our variables (primary and not) and y, then we have evidence that the data is missing completely at random (MCAR). This is the most desirable situation to be in.
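A minimal sketch of that check in R (d, x and y are placeholders, and it assumes y is roughly continuous; for a binary y a comparison of proportions would be the analogue):
d$x.missing <- as.integer(is.na(d$x))   # 1 = x missing, 0 = x observed
t.test(y ~ x.missing, data = d)         # compare the mean of y between the two groups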
If your data is MNAR then you will affect your model by listwise deletion of those values. | Is excluding cases with missing data fine for a predictive (not explanatory/descriptive) model? | The answer is - it depends.
The issue with missing data and leaving it out of your model completely is that it might affect the representativeness of your sampled data.
The kind of deletion you are | Is excluding cases with missing data fine for a predictive (not explanatory/descriptive) model?
The answer is - it depends.
The issue with missing data and leaving it out of your model completely is that it might affect the representativeness of your sampled data.
The kind of deletion you are referring to in your question is known as listwise deletion. This is where you exclude an observation completely because for at least one of your observed variables you do not have any data. Leaving out this data point might introduce bias into your model.
I will illustrate with an example: One common way this occurs is in surveys and is known as a non-response bias. Participants might not answer certain sensitive questions such as "Are you HIV positive?" etc. So in your model if you leave out all the unanswered questions you might think that there is a low incidence of HIV in your sample but this might simply be wrong.
In essence dealing with missing data comes down to understanding why the data point is missing at all: Is it random or is it systematic?
There are ways to find out if the missing data is systematic or not and the approach is similar to @David Dale's suggestion. You introduce new missing variables that indicate wether a data point is missing or not. We can then compare the mean likelihood of y for the 1’s and 0’s (missing and non-missing). If there is a significant difference in means, we have evidence that the data is not missing at random. In other words, there’s a pattern to the missingness. This is the first type of missing data pattern called missing not at random (MNAR).
If there is no significant difference in means, between our primary variable and y then we have evidence that the data is missing at random (MAR).
And finally, if there is no signifact difference in means between all our variables (primary and not) and the y then we have evidence that the data is missing completely at random (MCAR). This is the most desirable situation to be in.
If your data is MNAR then you will affect your model by listwise deletion of those values. | Is excluding cases with missing data fine for a predictive (not explanatory/descriptive) model?
The answer is - it depends.
The issue with missing data and leaving it out of your model completely is that it might affect the representativeness of your sampled data.
The kind of deletion you are |
52,827 | Is excluding cases with missing data fine for a predictive (not explanatory/descriptive) model? | It Depends
Whether excluding cases with missing data is fine or not depends on a few factors. Whatever choice is made requires justification, so there is a bit of work to do with the data. The more ancillary information available to you, the better you can make a choice. Testing the randomness of the missing values is essential to guide your steps. This is because things become much more complex once you start to look at biased missingness.
I'll use an example of an arbitrary disease that requires a blood test and a questionnaire.
Are values
Missing Completely At Random (MCAR): no pattern at all to the missingness. This may be that one subject is missing because the person got caught in traffic and didn't have time to complete all the tests, another because they faint at the sight of needles, another because the doctor's scrawl was completely uninterpretable. The reasons are unconnected to each other and to whether the person has the disease or not.
Missing At Random (MAR). Has a relationship with what you have observed, but is not due to the underlying value. For example, a questionnaire answer may be more likely to be missing in old subjects who have worse memory, but the likelihood of forgetting is not related to what the answer would have been. Age may be a risk factor in the disease, so the forgetting is confounded with disease risk even though it is not caused by the disease.
Missing Not At Random (MNAR): there is a pattern to which values are missing; the likelihood of being missing depends on the actual value of the variable. For example, males may not wish to report impotence and choose not to report it. Diagnosing this often needs additional data, as you don’t know what the missing value should have been.
See
https://www.theanalysisfactor.com/missing-data-mechanism/ which explains these and provides some advice on how to deal with the different types.
Possible Solutions
The five options I am aware of (the first two you have already discussed) are:
Ignore missing data in model. When you come to predict new
samples that have missing data it will return an invalid result due
to the missing value. You are right that this approach will limit
your ability to make generalisations about the entire population unless you can prove that the missing data is MCAR. The model will fail when missing data is present, so you are not going to produce false predictions. Failure is useful information and not to be ignored. Mechanisms to
handle the failure need to be appropriately designed. This will theoretically have a negligible effect if the data is truly MCAR, as the missingness is unbiased. With MNAR there is definite bias in the missingness and this would be passed on to the model that ignores missingness. With MAR there is a risk that although the missingness is not due to the disease, it may be linked to factors that are confounded with disease and so may still bias your model.
Impute missing data into model. As a couple of commenters have pointed out the main value of imputation is maintaining statistical robustness of the dataset. It doesn't help the individual receiving a diagnosis if you replace their true value with (for a simple case) the mean of the population. Its aim is to preserve the overall statistical properties of the dataset for the general population. Imputation works best in a multivariate
dataset where the relationship with other variables can help improve
the imputation (specific patterns in variables T, U, V, W may be
associated with more probable values of X) as these allow tailored estimation of the individual's value. If mechanisms causing
missingness (MAR or MNAR) can be identified these should be used to
guide the imputation. Validation of the model should include missing
data to ensure the imputation and model is handled well in independent
samples. As @JWH2006 points out the extent of missingness will also affect your choice of tool - the lower the proportion of observations the riskier imputation becomes, requiring more powerful methods.
If your variable types allow it, you can recode missing data to a new
numeric level in X that does not otherwise arise. This implicitly
handles the impact of missingness and will capture an element of any
bias that exists. This should mitigate against missingness bias in all three categories.
As a mashup of 2 and 3, alongside imputation you can create an Xmissing variable (e.g.
0 observed, 1 missing) that will be given its own model coefficient
and confidence intervals to explicitly handle the missingness. Again, this should mitigate against bias in the missingness.
If
there is the possibility of revisiting the data source (even for a
partial sample of the missing and complete samples) and reacquiring
it then this may be useful to better understand the mechanism of
missingness to make a more robust procedure. Above I suggested some approaches can mitigate against bias, but they will never be guaranteed to eliminate its effect. If there is any way to go back and fill in the blanks it will improve your understanding of how the missingness arose and therefore give clues on how to handle it best.
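To make options 3 and 4 above concrete, a minimal R sketch (column names are placeholders; the median imputation is deliberately crude):
d$cat  <- addNA(d$cat)                                        # option 3: missing becomes its own level
d$x.na <- as.integer(is.na(d$x))                              # option 4: Xmissing-style indicator
d$x    <- ifelse(is.na(d$x), median(d$x, na.rm = TRUE), d$x)  # paired with a simple imputation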
Remember
It also matters if the data is truly missing as opposed to something like not-applicable, not-detected or a saturation event. Not detected events are usually recoded to half the detection limit. Saturation events usually set to the saturation threshold. | Is excluding cases with missing data fine for a predictive (not explanatory/descriptive) model? | It Depends
Whether excluding cases with missing data is fine or not depends on a few factors. Whatever choice is made requires justification, so there is a bit or work to do with the data. The more an | Is excluding cases with missing data fine for a predictive (not explanatory/descriptive) model?
It Depends
Whether excluding cases with missing data is fine or not depends on a few factors. Whatever choice is made requires justification, so there is a bit or work to do with the data. The more ancillary information available to you, the better you can make a choice. Testing the randomness of the missing values is essential to guide your steps. This is because things become much more complex once you start to look at biased missingness.
I'll use an example of an arbitrary disease that requires some blood test and a questionaire.
Are values
Missing Completely At Random (MCAR): no pattern at all to the missingness. This may be that one subject is missing because the person got caught in traffic and didn't have time to complete all the tests, another because they faint at the sight of needles, another because the doctor's scrawl was completely uninterpretable. The reasons are unnconnected to each other and to whether the person has the disease or not.
Missing At Random (MAR). Has a relationship with what you have observed, but is not due to the underlying value. For example a questionnaire answer may be more likely to be missing in old subjects who have worse memory, but the likelihood of forgetting is not related what the answer would have been. Age may be a risk factor in the disease, so the forgetting is confounded with disease risk even through it is not caused by the disease.
Missing Not At Random (MNAR) there is a pattern to which variables are missing, the likelihood of being missing is biased by the actual value of the variable. For example, males may not wish to report impotence and choose to not report it. To diagnose this often needs additional data as you don’t know what the value should be.
See
https://www.theanalysisfactor.com/missing-data-mechanism/ which explains these and provides some advice on how to deal with the different types.
Possible Solutions
The five options I am aware off (1st 2 you have already discussed) are:
Ignore missing data in model. When you come to predict new
samples that have missing data it will return and invalid result due
to the missing value. You are right that this approach will limit
your ability to make generalisations about the entire population unless you can prove that the missing data is MCAR. The model will fail when missing data is present, so you are not going to produce false predictions. Failure is useful information and not to be ignored. Mechanisms to
handle the failure need to be appropriately designed. This will have theoretically have a neglible effect if data is truely MCAR as the missingness is unbiased. With MNAR there is definite bias in the missingness and this would be passed onto the model that ignores missingness. With MAR there is a risk that although the missingness is not due to the disease, it may be linked to factors that are confounded with disease and so may still bias your model.
Impute missing data into model. As a couple of commenters have pointed out the main value of imputation is maintaining statistical robustness of the dataset. It doesn't help the individual receiving a diagnosis if you replace their true value with (for a simple case) the mean of the population. Its aim is to preserve the overall statistical properties of the dataset for the general population. Imputation works best in a multivariate
dataset where the relationship with other variables can help improve
the imputation (specific patterns in variables T, U, V, W may be
associated with more probable values of X) as these allow tailored estmation of the individual's value. If mechanisms causing
missingness (MAR or MNAR) can be identified these should be used to
guide the imputation. Validation of the model should include missing
data to ensure the imputation and model is handled well in independent
samples. As @JWH2006 points out the extent of missingness will also affect your choice of tool - the lower the proportion of observations the riskier imputation becomes, requiring more powerful methods.
If your variable types allow it, you can recode missing data to a new
numeric level in X that does not otherwise arise. This implicitly
handles the impact of missingness and will capture an element of any
bias that exists. This should mitigate against missingness bias in all three categories.
A mashup of 2 and 3 alongside imputation you can create a Xmissing variable (e.g.
0 observed, 1 missing) that will be given its own model coefficient
and confidence intervals to explicitly handle the missingness. Again, this should mitigate against bias in the missingness.
If
there is the possibility of revisiting the data source (even for a
partial sample of the missing and complete samples) and reacquiring
it then this may be useful to better understand the mechanism of
missingness to make a more robust procedure. Above I suggested some approaches can mitigate against bias, but they will never be guaranteed to eliminate its effect. If there is any way to go back and fill in the blanks it will improve your understanding of how the missingness arose and therefore give clues on how to handle it best.
Remember
It also matters if the data is truly missing as opposed to something like not-applicable, not-detected or a saturation event. Not detected events are usually recoded to half the detection limit. Saturation events usually set to the saturation threshold. | Is excluding cases with missing data fine for a predictive (not explanatory/descriptive) model?
It Depends
Whether excluding cases with missing data is fine or not depends on a few factors. Whatever choice is made requires justification, so there is a bit or work to do with the data. The more an |
52,828 | Is excluding cases with missing data fine for a predictive (not explanatory/descriptive) model? | The answer is "Yes". If you both train and apply your model only on the samples with non-missing $x$, then it is totally OK. Your single heroic assumption here is that the process that makes $x$ be missing is the same for the training and test data, which is quite reasonable.
If you want to train and apply your model on any data (both with and without missing values), there is a preprocessing scheme with which popular families of models (linear, neural network, decision trees and their ensembles) can work adequately. You impute missing $x$ with any value you want (e.g. 0), and at the same time you create a binary marker of missingness for each column with potential missing values. For example, the table
x1 x2
0 1
2 nan
3 1
nan 0
nan nan
turns into
x1 x2 x1m x2m
0 1 0 0
2 0 0 1
3 1 0 0
0 0 1 0
0 0 1 1
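A generic version of this preprocessing step might look like the following (a sketch; the helper name is mine):
add.missing.markers <- function(df, fill = 0) {
  for (col in names(df)) {
    df[[paste0(col, "m")]] <- as.integer(is.na(df[[col]]))   # 1 where the value was missing
    df[[col]][is.na(df[[col]])] <- fill                      # impute with an arbitrary constant
  }
  df
}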
In this case e.g. a linear model would find values for coefficients to x1m and x2m which give the best prediction, even if the values are missing not-at-random. | Is excluding cases with missing data fine for a predictive (not explanatory/descriptive) model? | The answer is "Yes". If you both train and apply your model only on the samples with non-missing $x$, then it is totally OK. Your single heroic assumption here is that the process that makes $x$ be mi | Is excluding cases with missing data fine for a predictive (not explanatory/descriptive) model?
The answer is "Yes". If you both train and apply your model only on the samples with non-missing $x$, then it is totally OK. Your single heroic assumption here is that the process that makes $x$ be missing is the same for the training and test data, which is quite reasonable.
If you want to train and apply your model on any data (both with and without missing values), there is a preprocessing scheme with wich popular families of models (linear, neural network, decision trees and their ensembles) can work adequately. You impute missing $x$ with any value you want (e.g. 0), and at the same time you create a binary marker of missingnes for each column with potential missing values. For example, the table
x1 x2
0 1
2 nan
3 1
nan 0
nan nan
turns into
x1 x2 x1m x2m
0 1 0 0
2 0 0 1
3 1 0 0
0 0 1 0
0 0 1 1
In this case e.g. a linear model would find values for coefficients to x1m and x2m which give the best prediction, even if the values are missing not-at-random. | Is excluding cases with missing data fine for a predictive (not explanatory/descriptive) model?
The answer is "Yes". If you both train and apply your model only on the samples with non-missing $x$, then it is totally OK. Your single heroic assumption here is that the process that makes $x$ be mi |
52,829 | Haldane's prior Beta(0,0) - Part 1 | Haldane prior is beta distribution with parameters $\alpha = \beta = 0$. So it is
$$
f(p) = \frac{p^{\alpha-1} (1-p)^{\beta-1}}{B(\alpha, \beta)} = \frac{p^{-1}(1-p)^{-1}}{B(0, 0)}
$$
where $B(0, 0)$ is the normalizing constant that is infinite as described in Wikipedia:
The function $p^{-1}(1-p)^{-1}$ can be viewed as the limit of the
numerator of the beta distribution as both shape parameters approach
zero: $\alpha, \beta \to 0$. The Beta function (in the denominator of
the beta distribution) approaches infinity, for both parameters
approaching zero, $\alpha, \beta \to 0$. Therefore, $p^{-1}(1-p)^{-1}$
divided by the Beta function approaches a 2-point Bernoulli
distribution with equal probability $1/2$ at each Dirac delta function
end, at $0$ and $1$, and nothing in between, as $\alpha, \beta \to 0$.
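One practical consequence (my addition, not part of the quoted text): combined with a binomial likelihood with $k$ successes in $n$ trials, this prior gives a $\text{Beta}(k, n-k)$ posterior, which is proper whenever $0 < k < n$ and has mean $k/n$, the maximum-likelihood estimate. A quick look in R:
k <- 7; n <- 20
curve(dbeta(x, k, n - k), from = 0, to = 1,
      xlab = "p", ylab = "posterior density")   # Beta(k, n - k); posterior mean = k/n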
So the Haldane prior is not a proper distribution. It is an abstract idea of what the beta distribution would be if it had parameters $\alpha = \beta = 0$. As a distribution it is not really usable, yet it can serve as an "uninformative" prior for the binomial distribution. It is often written in its unnormalized form $f(p) \propto p^{-1}(1-p)^{-1}$, since the normalizing constant is meaningless. | Haldane's prior Beta(0,0) - Part 1 | Haldane prior is beta distribution with parameters $\alpha = \beta = 0$. So it is
$$
f(p) = \frac{p^{\alpha-1} (1-p)^{\beta-1}}{B(\alpha, \beta)} = \frac{p^{-1}(1-p)^{-1}}{B(0, 0)}
$$
where $B(0, 0)$ | Haldane's prior Beta(0,0) - Part 1
Haldane prior is beta distribution with parameters $\alpha = \beta = 0$. So it is
$$
f(p) = \frac{p^{\alpha-1} (1-p)^{\beta-1}}{B(\alpha, \beta)} = \frac{p^{-1}(1-p)^{-1}}{B(0, 0)}
$$
where $B(0, 0)$ is the normalizing constant that is infinite as described in Wikipedia:
The function $p^{-1}(1-p)^{-1}$ can be viewed as the limit of the
numerator of the beta distribution as both shape parameters approach
zero: $\alpha, \beta \to 0$. The Beta function (in the denominator of
the beta distribution) approaches infinity, for both parameters
approaching zero, $\alpha, \beta \to 0$. Therefore, $p^{-1}(1-p)^{-1}$
divided by the Beta function approaches a 2-point Bernoulli
distribution with equal probability $1/2$ at each Dirac delta function
end, at $0$ and $1$, and nothing in between, as $\alpha, \beta \to 0$.
So Haldane prior is not a proper distribution. It is an abstract idea of what would be the beta distribution be if it had $\alpha = \beta = 0$ parameters. As a distribution, it is rather not usable, yet it can be used as an "uninformative" prior for binomial distribution. It is often described in it's approximate form $f(p) \propto p^{-1}(1-p)^{-1}$, since the normalizing constant is meaningless. | Haldane's prior Beta(0,0) - Part 1
Haldane prior is beta distribution with parameters $\alpha = \beta = 0$. So it is
$$
f(p) = \frac{p^{\alpha-1} (1-p)^{\beta-1}}{B(\alpha, \beta)} = \frac{p^{-1}(1-p)^{-1}}{B(0, 0)}
$$
where $B(0, 0)$ |
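To make the "uninformative prior for the binomial" remark above concrete: combining the Haldane prior with a binomial likelihood that produced $s$ successes in $n$ trials gives, whenever $0 < s < n$, the proper posterior
$$
p(p \mid s, n) \propto p^{s}(1-p)^{n-s} \cdot p^{-1}(1-p)^{-1} = p^{s-1}(1-p)^{n-s-1},
$$
i.e. $p \mid s, n \sim \mathrm{Beta}(s,\, n-s)$, whose posterior mean $s/n$ coincides with the maximum-likelihood estimate.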
52,830 | Haldane's prior Beta(0,0) - Part 1 | The second expression is correct as this is an improper distribution, i.e. it doesn't integrate to $1$. Thus it doesn't have a density and you can only specify it up to proportionality. | Haldane's prior Beta(0,0) - Part 1 | The second expression is correct as this is an improper distribution, i.e. it doesn't integrate to $1$. Thus it doesn't have a density and you can only specify it up to proportionality. | Haldane's prior Beta(0,0) - Part 1
The second expression is correct as this is an improper distribution, i.e. it doesn't integrate to $1$. Thus it doesn't have a density and you can only specify it up to proportionality. | Haldane's prior Beta(0,0) - Part 1
The second expression is correct as this is an improper distribution, i.e. it doesn't integrate to $1$. Thus it doesn't have a density and you can only specify it up to proportionality. |
52,831 | normality test on small samples | Elaboration on t.f's answer.
The normality test is a sneaky beast, because conceptually it works the other way round from a "normal" statistical test. Normally, you base your conclusions on the rejection of the null. Here, the "desired" outcome ("proof" of normality) is non-rejection. However, failure to reject is not the same as proving the null! The fact that you cannot find an effect does not mean it is not there.
With few observations you will therefore hardly ever reject the null, so you are likely to falsely assume that your data are normal.
Conversely, if you have plenty of data, you will almost always reject the null, because no data in the real world are perfectly normal. Consider human height - typically assumed, in biology, to have a normal distribution. In fact, it has been assumed to be normal for the past 150 years (ever since Galton). However, height has clear boundaries: it cannot be negative, and it cannot be 100 meters. Therefore, it cannot be exactly normally distributed.
You will find a more detailed discussion with numerous examples here.
So what can you do?
Is there any reason to believe that your data is not normally distributed? Can you guess the distribution a priori? Typical examples may include bacterial growth or frequency of occurrence of an event.
Use a q-q plot or a similar visual aid to make the decision.
If forced by your thesis advisor to make a normality test, rather than focusing on the p-value alone, consider the effect size or calculate the skewness.
Do you have similar data from other experiments? Can you use it to increase your sample size? | normality test on small samples | Elaboration on t.f's answer.
The normality test is a sneaky beast, because conceptually it works the other way round than a "normal" statistical test. Normally, you base your knowledge based on the re | normality test on small samples
Elaboration on t.f's answer.
The normality test is a sneaky beast, because conceptually it works the other way round from a "normal" statistical test. Normally, you base your conclusions on the rejection of the null. Here, the "desired" outcome ("proof" of normality) is non-rejection. However, failure to reject is not the same as proving the null! The fact that you cannot find an effect does not mean it is not there.
With few observations you will therefore hardly ever reject the null, so you are likely to falsely assume that your data are normal.
Conversely, if you have plenty of data, you will almost always reject the null, because no data in the real world are perfectly normal. Consider human height - typically assumed, in biology, to have a normal distribution. In fact, it has been assumed to be normal for the past 150 years (ever since Galton). However, height has clear boundaries: it cannot be negative, and it cannot be 100 meters. Therefore, it cannot be exactly normally distributed.
You will find a more detailed discussion with numerous examples here.
So what can you do?
Is there any reason to believe that your data is not normally distributed? Can you guess the distribution a priori? Typical examples may include bacterial growth or frequency of occurrence of an event.
Use a q-q plot or a similar visual aid to make the decision.
If forced by your thesis advisor to make a normality test, rather than focusing on the p-value alone, consider the effect size or calculate the skewness.
Do you have similar data from other experiments? Can you use it to increase your sample size? | normality test on small samples
Elaboration on t.f's answer.
The normality test is a sneaky beast, because conceptually it works the other way round than a "normal" statistical test. Normally, you base your knowledge based on the re |
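Following the advice above (look at a q-q plot and at the skewness rather than at the p-value alone), a minimal Python sketch; the six values are placeholders, not the asker's measurements:
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
x = np.array([0.40, 0.45, 0.47, 0.51, 0.55, 0.60])  # placeholder sample
stats.probplot(x, dist="norm", plot=plt)  # visual check: q-q plot against a normal
plt.show()
W, p = stats.shapiro(x)                   # Shapiro-Wilk test (low power at n = 6)
skew = stats.skew(x)                      # skewness as a rough measure of non-normality
print(f"Shapiro W = {W:.3f}, p = {p:.3f}, skewness = {skew:.3f}")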
52,832 | normality test on small samples | Your conclusion is correct, the normality assumption is required for the t-test. However, the t-test is fairly robust to violations of it. In any case you can use a non-parametric test, for example the Mann-Whitney test, when you cannot guarantee the normality assumption.
You should: with such a small number of observations the power of your test is small, and therefore the 'effect' is probably large if you detected it (meaning the data are very non-normal). The problem is usually the opposite when you have thousands of observations: you reject the hypothesis that the distribution is normal, but that is usually because nothing is perfectly normal and you have a lot of power even when the 'effect' is weak. | normality test on small samples | Your conclusion is correct, the normality assumption is required for t-test. However the t-test is fairly robust to violations of it. In any case you can use a non parametric for example the test Mann | normality test on small samples
Your conclusion is correct, the normality assumption is required for the t-test. However, the t-test is fairly robust to violations of it. In any case you can use a non-parametric test, for example the Mann-Whitney test, when you cannot guarantee the normality assumption.
You should: with such a small number of observations the power of your test is small, and therefore the 'effect' is probably large if you detected it (meaning the data are very non-normal). The problem is usually the opposite when you have thousands of observations: you reject the hypothesis that the distribution is normal, but that is usually because nothing is perfectly normal and you have a lot of power even when the 'effect' is weak. | normality test on small samples
Your conclusion is correct, the normality assumption is required for t-test. However the t-test is fairly robust to violations of it. In any case you can use a non parametric for example the test Mann |
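A minimal Python sketch of the suggested Mann-Whitney comparison; the two arrays are made-up stand-ins for groups A and B, not the actual measurements from the question:
from scipy.stats import mannwhitneyu
group_a = [0.58, 0.62, 0.71, 0.65, 0.60]   # placeholder values
group_b = [0.0, 0.0, 0.125, 0.10, 1.125]   # placeholder values
stat, p = mannwhitneyu(group_a, group_b, alternative="two-sided")  # no normality assumption
print(f"U = {stat}, p = {p:.4f}")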
52,833 | normality test on small samples | No data are ever perfectly normally distributed, so the idea that the data have to be normal for the t-test to be applicable is wrong. If it were so, nobody could ever use the t-test.
For a good range of distributions, the t-test is asymptotically valid for large enough sample size due to the Central Limit Theorem, Slutsky Lemma, and the fact that the t-distribution with increasing degrees of freedom converges to the normal.
Obviously the given datasets are rather small, so it can be doubted that the sample size is large enough. The general question to ask however is not "are the data normally distributed?" (which they aren't anyway), but rather whether normality is violated in ways that will critically affect the behaviour of the t-test. This is mainly the case for large outliers and strong skewness. The 1.125 in sample B looks rather dodgy indeed. It would prompt me to use a Wilcoxon test here.
Normality tests can sometimes uncover critical violations of normality, so they give some information, however what they do is somewhat different from what is really required, as they may also detect harmless deviations, and occasionally miss a critical one. Unfortunately there is no reliable alternative as long as you don't feel perfectly confident to detect problems from looking at the data. There is also literature that suggests that preliminary normality testing should not be used, because it affects the theory behind the t-test (this also applies to visually detecting problems from the data, although this is rarely mentioned). For a review and discussion see https://arxiv.org/abs/1908.02218
Other model assumptions such as independence are often more problematic than normality. | normality test on small samples | No data are ever perfectly normally distributed, so the idea that the data have to be normal for the t-test to be applicable is wrong. If it were so, nobody could ever use the t-test.
For a good rang | normality test on small samples
No data are ever perfectly normally distributed, so the idea that the data have to be normal for the t-test to be applicable is wrong. If it were so, nobody could ever use the t-test.
For a good range of distributions, the t-test is asymptotically valid for large enough sample size due to the Central Limit Theorem, Slutsky Lemma, and the fact that the t-distribution with increasing degrees of freedom converges to the normal.
Obviously the given datasets are rather small, so it can be doubted that the sample size is large enough. The general question to ask however is not "are the data normally distributed?" (which they aren't anyway), but rather whether normality is violated in ways that will critically affect the behaviour of the t-test. This is mainly the case for large outliers and strong skewness. The 1.125 in sample B looks rather dodgy indeed. It would prompt me to use a Wilcoxon test here.
Normality tests can sometimes uncover critical violations of normality, so they give some information, however what they do is somewhat different from what is really required, as they may also detect harmless deviations, and occasionally miss a critical one. Unfortunately there is no reliable alternative as long as you don't feel perfectly confident to detect problems from looking at the data. There is also literature that suggests that preliminary normality testing should not be used, because it affects the theory behind the t-test (this also applies to visually detecting problems from the data, although this is rarely mentioned). For a review and discussion see https://arxiv.org/abs/1908.02218
Other model assumptions such as independence are often more problematic than normality. | normality test on small samples
No data are ever perfectly normally distributed, so the idea that the data have to be normal for the t-test to be applicable is wrong. If it were so, nobody could ever use the t-test.
For a good rang |
52,834 | normality test on small samples | There's one thing nobody talked about: just look at the given data with common sense.
Group B has 2 zeros, then small values and one 1.125. So - why? Typical result, typo (0.125 like the other ones), ...? These data look so unstable that I would not trust them to be typical. Just try to imagine what a larger sample of this kind would look like - this becomes difficult (something larger than 1 once in a while, otherwise small values? Filling up the gap between small and large values?).
On the other hand, even though unstable, the values are clearly smaller than those of group A. So if you want to compare the groups (i.e. you need a statistical test and cannot just show a dot plot of the measurements), then use a non-parametric test (Wilcoxon test, equivalent to Mann-Whitney-U) -> with such large differences, lack of power is not a problem, so you do not even have to care about normality.
This detailed testing of each and every step while losing track of the final aim is rather less scientific than omitting it ;) | normality test on small samples | There's one thing nobody talked about: just look at the given data with common sense.
Group B has 2 zeros, then small values and one 1.125. So - why? Typical result, typo (0.125 as the other ones), .. | normality test on small samples
There's one thing nobody talked about: just look at the given data with common sense.
Group B has 2 zeros, then small values and one 1.125. So - why? Typical result, typo (0.125 like the other ones), ...? These data look so unstable that I would not trust them to be typical. Just try to imagine what a larger sample of this kind would look like - this becomes difficult (something larger than 1 once in a while, otherwise small values? Filling up the gap between small and large values?).
On the other hand, even though unstable, the values are clearly smaller than those of group A. So if you want to compare the groups (i.e. you need a statistical test and cannot just show a dot plot of the measurements), then use a non-parametric test (Wilcoxon test, equivalent to Mann-Whitney-U) -> with such large differences, lack of power is not a problem, so you do not even have to care about normality.
This detailed testing of each and every step while losing track of the final aim is rather less scientific than omitting it ;) | normality test on small samples
There's one thing nobody talked about: just look at the given data with common sense.
Group B has 2 zeros, then small values and one 1.125. So - why? Typical result, typo (0.125 as the other ones), .. |
52,835 | Unstable logistic regression when data not well separated | The warning about "fitted probabilities numerically 0 or 1" might be useful for diagnosing separability, but these issues are only indirectly related.
Here is a dataset and a binomial GLM fit (in gray) where there is enough overlap among the $x$ values for the two response classes that there is little concern about separability. In particular, the estimate of the $x$ coefficient of $2.35$ is modest and significant: its standard error is only $1.1$ $(p=0.03)$. The gray curve shows the fit. Corresponding to values on this curve are their log odds, or "link" function. Those I have indicated with colors; the legend gives the common (base-10) logs. The software flags fitted values that are within $2.22\times 10^{-15}$ of either $0$ or $1$. Such points have white halos around them.
All that's going on here is there's such a wide range of $x$ values that for some points, the fit is very, very close to $0$ (for very negative $x$) or very, very close to $1$ (for the most positive $x$). This isn't a problem in this case.
It might be a problem in the next example. Now a single outlying value of $x$ triggers the warning message.
How can we assess this? Simply delete the datum and re-fit the model. In this example, it makes almost no difference: the coefficient estimate does not change, nor does the p-value.
Finally, to check a multiple regression, first form the linear combinations of the coefficient estimates and the variables, $x_i\hat\beta$: this is the link function. Plot the responses against these values exactly as above and study the patterns, looking at (a) the degree to which the 1's overlap the 0's (which assesses separability) and (b) the points with extreme values of the link.
Here is the plot for your data:
The point at the far right corresponds to the red dot in your figure: the fitted value is $1$ because that dot is far from the area where 0's transition to 1's. If you remove it from the data, nothing changes. Thus, it's not influencing the results. This graph indicates you have obtained a reasonable fit.
You can also see that slight changes in the values of $x_1$ or $x_2$ at a couple of critical points (those near $0$) could create perfect separation. But is this really a problem? It would only mean that the software could no longer distinguish between this fit and other fits with arbitrarily sharp transitions near $x\beta=0$. However, all would produce similar predictions at all points sufficiently far from the transition line and the location of that line would still be fairly well estimated. | Unstable logistic regression when data not well separated | The warning about "fitted probabilities numerically 0 or 1" might be useful for diagnosing separability, but these issues are only indirectly related.
Here is a dataset and a binomial GLM fit (in gray | Unstable logistic regression when data not well separated
The warning about "fitted probabilities numerically 0 or 1" might be useful for diagnosing separability, but these issues are only indirectly related.
Here is a dataset and a binomial GLM fit (in gray) where there is enough overlap among the $x$ values for the two response classes that there is little concern about separability. In particular, the estimate of the $x$ coefficient of $2.35$ is modest and significant: its standard error is only $1.1$ $(p=0.03)$. The gray curve shows the fit. Corresponding to values on this curve are their log odds, or "link" function. Those I have indicated with colors; the legend gives the common (base-10) logs. The software flags fitted values that are within $2.22\times 10^{-15}$ of either $0$ or $1$. Such points have white halos around them.
All that's going on here is there's such a wide range of $x$ values that for some points, the fit is very, very close to $0$ (for very negative $x$) or very, very close to $1$ (for the most positive $x$). This isn't a problem in this case.
It might be a problem in the next example. Now a single outlying value of $x$ triggers the warning message.
How can we assess this? Simply delete the datum and re-fit the model. In this example, it makes almost no difference: the coefficient estimate does not change, nor does the p-value.
Finally, to check a multiple regression, first form the linear combinations of the coefficient estimates and the variables, $x_i\hat\beta$: this is the link function. Plot the responses against these values exactly as above and study the patterns, looking at (a) the degree to which the 1's overlap the 0's (which assesses separability) and (b) the points with extreme values of the link.
Here is the plot for your data:
The point at the far right corresponds to the red dot in your figure: the fitted value is $1$ because that dot is far from the area where 0's transition to 1's. If you remove it from the data, nothing changes. Thus, it's not influencing the results. This graph indicates you have obtained a reasonable fit.
You can also see that slight changes in the values of $x_1$ or $x_2$ at a couple of critical points (those near $0$) could create perfect separation. But is this really a problem? It would only mean that the software could no longer distinguish between this fit and other fits with arbitrarily sharp transitions near $x\beta=0$. However, all would produce similar predictions at all points sufficiently far from the transition line and the location of that line would still be fairly well estimated. | Unstable logistic regression when data not well separated
The warning about "fitted probabilities numerically 0 or 1" might be useful for diagnosing separability, but these issues are only indirectly related.
Here is a dataset and a binomial GLM fit (in gray |
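A rough Python sketch of the multiple-regression check described above (fit the model, form the linear predictor $x_i\hat\beta$, and plot the responses against it); the data here are simulated stand-ins, not the asker's x1/x2:
import numpy as np
import statsmodels.api as sm
import matplotlib.pyplot as plt
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))                  # stand-in predictors
eta = 1.5 * X[:, 0] - 1.0 * X[:, 1]
y = rng.binomial(1, 1 / (1 + np.exp(-eta)))    # simulated 0/1 responses
Xc = sm.add_constant(X)
fit = sm.Logit(y, Xc).fit(disp=0)
link = Xc @ fit.params                         # linear predictor x_i * beta_hat
plt.scatter(link, y, alpha=0.5)                # look at overlap of 0s and 1s and at extreme link values
plt.xlabel("linear predictor (link)")
plt.ylabel("response")
plt.show()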
52,836 | Unstable logistic regression when data not well separated | Perfect separation will cause the optimization not to converge; non-convergence will cause the coefficients to be very large, and very large coefficients will cause "fitted probabilities numerically 0 or 1".
Conversely, "fitted probabilities numerically 0 or 1" does not mean the fitting does not converge. It just means that, in a finite-precision system (IEEE 754), the fitted number is so close to 0 or 1 that the computer cannot distinguish it from 0 or 1.
This is very likely to happen if we have some "outliers" (as demonstrated in @whuber's answer, second figure); here is a simpler example, obtained by adding one outlier to the mtcars data.
Note that fit1 is fine, but fit2 triggers the warning.
> d1=mtcars[,c("am","mpg")]
> fit1=glm(am~mpg,d1, family="binomial")
> d2=rbind(d1,c(0,-100))
> fit2=glm(am~mpg,d2, family="binomial")
Warning message:
glm.fit: fitted probabilities numerically 0 or 1 occurred | Unstable logistic regression when data not well separated | Perfect seperation will cause the optimization not converge, not converge will cause the coefficients to be very large, and the very large coefficient will cause "fitted probabilities numerically 0 or | Unstable logistic regression when data not well separated
Perfect separation will cause the optimization not to converge; non-convergence will cause the coefficients to be very large, and very large coefficients will cause "fitted probabilities numerically 0 or 1".
Conversely, "fitted probabilities numerically 0 or 1" does not mean the fitting does not converge. It just means that, in a finite-precision system (IEEE 754), the fitted number is so close to 0 or 1 that the computer cannot distinguish it from 0 or 1.
This is very likely to happen if we have some "outliers" (as demonstrated in @whuber's answer, second figure); here is a simpler example, obtained by adding one outlier to the mtcars data.
Note that fit1 is fine, but fit2 triggers the warning.
> d1=mtcars[,c("am","mpg")]
> fit1=glm(am~mpg,d1, family="binomial")
> d2=rbind(d1,c(0,-100))
> fit2=glm(am~mpg,d2, family="binomial")
Warning message:
glm.fit: fitted probabilities numerically 0 or 1 occurred | Unstable logistic regression when data not well separated
Perfect seperation will cause the optimization not converge, not converge will cause the coefficients to be very large, and the very large coefficient will cause "fitted probabilities numerically 0 or |
52,837 | Unstable logistic regression when data not well separated | You are having trouble picturing multidimensional separation. While neither X1 nor X2 separately perfectly predict the Y outcome, together they do. Make use of the coplot to avoid this problem in the future
coplot(y ~ x1 | x2, data=l, panel=panel.smooth)
The same recommendations apply that have been described elsewhere for handling such an issue. | Unstable logistic regression when data not well separated | You are having trouble picturing multidimensional separation. While neither X1 nor X2 separately perfectly predict the Y outcome, together they do. Make use of the coplot to avoid this problem in the | Unstable logistic regression when data not well separated
You are having trouble picturing multidimensional separation. While neither X1 nor X2 separately perfectly predict the Y outcome, together they do. Make use of the coplot to avoid this problem in the future
coplot(y ~ x1 | x2, data=l, panel=panel.smooth)
The same recommendations apply that have been described elsewhere for handling such an issue. | Unstable logistic regression when data not well separated
You are having trouble picturing multidimensional separation. While neither X1 nor X2 separately perfectly predict the Y outcome, together they do. Make use of the coplot to avoid this problem in the |
52,838 | AIC with test data, is it possible? | The quantity that AIC / AICc estimates is the expected out-of-sample log-likelihood (see Burnham & Anderson 2004, Sec. 2.2),
$$
\mathbf E_y \mathbf E_x [ \log g(x | \hat \theta (y)) ],
$$
(multiplied by $-2$). This formula means that you obtain maximum likelihood parameter estimates from one sample, $\hat \theta (y)$, then compute the logarithm of the likelihood $g$ of these parameters on another independent sample $x$ (from the same source), and then average across infinitely many realizations of both samples, $\mathbf E_y \mathbf E_x$.
Akaike's main result is that $\log g(x | \hat \theta (x)) - K$ is an asymptotically unbiased estimator of the quantity given above, where $K$ is the number of parameters. Equivalently,
$$
\mathrm{AIC} = - 2 \log g(x | \hat \theta (x)) + 2 K
$$
is an asymptotically unbiased estimator of
$$
- 2 ~\mathbf E_y \mathbf E_x [ \log g(x | \hat \theta (y)) ].
$$
If the sample size is small compared to the number of parameters $K$, a better correction term (AICc; see Burnham & Anderson 2004, Sec. 7.7.6, and McQuarrie & Tsai 1998) may be necessary which is more complicated than $2K$, and it is different for different models (likelihood functions $g$).
If you use cross-validation with
$$
- 2 \log g(x_\mathrm{test} | \hat \theta (x_\mathrm{train}))
$$
as the error measure, you are estimating (almost) the same thing.
The differences between AIC / AICc and cross-validation are:
– AIC / AICc formulas are based on an approximation for large samples (they are only asymptotically correct).
– For cross-validation, in order to have test data $x_\mathrm{test}$ the training (estimation) data $x_\mathrm{train}$ must be smaller than the original sample $x$. Moreover, the cross-validated measure depends on two random samples instead of just one, so it may be noisier.
Answer a: AIC / AICc and cross-validation are alternative methods to do the same thing (if log-likelihood is the error measure). It therefore does not make sense to use them together.
Answer b: Since AIC / AICc and cross-validation with -2 log likelihood estimate (almost) the same quantity, it should be OK to apply the same scale to evaluate differences between models. | AIC with test data, is it possible? | The quantity that AIC / AICc estimates is the expected out-of-sample log-likelihood (see Burnham & Anderson 2004, Sec. 2.2),
$$
\mathbf E_y \mathbf E_x [ \log g(x | \hat \theta (y)) ],
$$
(multiplied | AIC with test data, is it possible?
The quantity that AIC / AICc estimates is the expected out-of-sample log-likelihood (see Burnham & Anderson 2004, Sec. 2.2),
$$
\mathbf E_y \mathbf E_x [ \log g(x | \hat \theta (y)) ],
$$
(multiplied by $-2$). This formula means that you obtain maximum likelihood parameter estimates from one sample, $\hat \theta (y)$, then compute the logarithm of the likelihood $g$ of these parameters on another independent sample $x$ (from the same source), and then average across infinitely many realizations of both samples, $\mathbf E_y \mathbf E_x$.
Akaike's main result is that $\log g(x | \hat \theta (x)) - K$ is an asymptotically unbiased estimator of the quantity given above, where $K$ is the number of parameters. Equivalently,
$$
\mathrm{AIC} = - 2 \log g(x | \hat \theta (x)) + 2 K
$$
is an asymptotically unbiased estimator of
$$
- 2 ~\mathbf E_y \mathbf E_x [ \log g(x | \hat \theta (y)) ].
$$
If the sample size is small compared to the number of parameters $K$, a better correction term (AICc; see Burnham & Anderson 2004, Sec. 7.7.6, and McQuarrie & Tsai 1998) may be necessary which is more complicated than $2K$, and it is different for different models (likelihood functions $g$).
If you use cross-validation with
$$
- 2 \log g(x_\mathrm{test} | \hat \theta (x_\mathrm{train}))
$$
as the error measure, you are estimating (almost) the same thing.
The differences between AIC / AICc and cross-validation are:
– AIC / AICc formulas are based on an approximation for large samples (they are only asymptotically correct).
– For cross-validation, in order to have test data $x_\mathrm{test}$ the training (estimation) data $x_\mathrm{train}$ must be smaller than the original sample $x$. Moreover, the cross-validated measure depends on two random samples instead of just one, so it may be noisier.
Answer a: AIC / AICc and cross-validation are alternative methods to do the same thing (if log-likelihood is the error measure). It therefore does not make sense to use them together.
Answer b: Since AIC / AICc and cross-validation with -2 log likelihood estimate (almost) the same quantity, it should be OK to apply the same scale to evaluate differences between models. | AIC with test data, is it possible?
The quantity that AIC / AICc estimates is the expected out-of-sample log-likelihood (see Burnham & Anderson 2004, Sec. 2.2),
$$
\mathbf E_y \mathbf E_x [ \log g(x | \hat \theta (y)) ],
$$
(multiplied |
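To illustrate the correspondence above for a Gaussian linear model, a rough Python sketch; the simulated data and the particular train/test split are illustrative assumptions, and both quantities are put on a per-observation scale so that differing sample sizes do not obscure the comparison:
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm
rng = np.random.default_rng(1)
n = 200
X = sm.add_constant(rng.normal(size=(n, 3)))
y = X @ np.array([1.0, 0.5, -0.5, 0.0]) + rng.normal(size=n)
train, test = slice(0, 150), slice(150, None)
fit = sm.OLS(y[train], X[train]).fit()
aic_per_obs = fit.aic / 150                 # in-sample: (-2 log L + 2K) / n_train
mu = X[test] @ fit.params
sigma = np.sqrt(fit.scale)                  # estimated residual standard deviation
cv_per_obs = -2 * norm.logpdf(y[test], loc=mu, scale=sigma).mean()  # held-out -2 log-likelihood per observation
print(aic_per_obs, cv_per_obs)              # similar numbers, as the text explains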
52,839 | AIC with test data, is it possible? | a. Is it possible to use AICc with test data
Information criteria shouldn't be necessary if you are evaluating models on a hold-out dataset. You can just choose the model that produces the lowest error (mean squared error, mean absolute error, or whatever metric you prefer) on the test data.
b. Is there some kind of error based model selection on test data that can be conclusive
'Conclusive' is unfortunately a bit fuzzy. Any model selection procedure (and any statistical test) retains some probability of being incorrect, or less accurate than the alternatives. But to basically repeat my answer to the previous question: the most robust approach is generally to evaluate your models' prediction errors through cross-validation. This question has a list of cross-validation techniques that can get you started: Compendium of cross-validation techniques | AIC with test data, is it possible? | a. Is it possible to use AICc with test data
Information criteria shouldn't be necessary if you are evaluating models on a hold-out dataset. You can just choose the model that produces the lowest erro | AIC with test data, is it possible?
a. Is it possible to use AICc with test data
Information criteria shouldn't be necessary if you are evaluating models on a hold-out dataset. You can just choose the model that produces the lowest error (mean squared error, mean absolute error, or whatever metric you prefer) on the test data.
b. Is there some kind of error based model selection on test data that can be conclusive
'Conclusive' is unfortunately a bit fuzzy. Any model selection procedure (and any statistical test) retains some probability of being incorrect, or less accurate than the alternatives. But to basically repeat my answer to the previous question: the most robust approach is generally to evaluate your models' prediction errors through cross-validation. This question has a list of cross-validation techniques that can get you started: Compendium of cross-validation techniques | AIC with test data, is it possible?
a. Is it possible to use AICc with test data
Information criteria shouldn't be necessary if you are evaluating models on a hold-out dataset. You can just choose the model that produces the lowest erro |
52,840 | Gradient descent explodes if learning rate is too large | The learning rate can be seen as a step size, $\eta$. As such, gradient descent is taking successive steps in the direction of the minimum. If the step size $\eta$ is too large, it can (plausibly) "jump over" the minimum we are trying to reach, i.e. we overshoot. This can lead to oscillations around the minimum or in some cases to outright divergence. It is important to note that the step gradient descent takes is a function of the step size $\eta$ as well as of the gradient values $g$. If we are in a local minimum with zero gradient the algorithm will not update the parameters $p$ because the gradient is zero; conversely, if $p$ is on a "steep slope", even a small $\eta$ will lead to a large update in $p$'s values.
In particular, for the case of divergence what happens is that as soon as an oversized step $\eta$ is taken from an initial point $p_{i=0}$, the gradient descent algorithm lands at a point $p_{i=1}$ that is worse than $p_{i=0}$ in terms of cost. At this new but cost-function-wise worse point $p_{i=1}$, when recalculating the gradients, the gradient values are larger, so the next (hopefully corrective) step is even larger. If this next step leads to a point $p_{i=2}$ with even larger error because we overshoot again, we are led to even larger gradient values, leading ultimately to a vicious cycle of ever-increasing gradient values and "exploding coefficients" $p_i$.
In the code you provided you might wish to add a print(gradient(X, y, p)) statement in the param_update function. If that is added, we can monitor the gradient in each iteration and see that for a reasonably valued $\eta$ the gradient values slowly decrease, while for an unreasonably large $\eta$ the gradient values get steadily larger and larger.
Advanced variants of gradient descent use the concept of an adaptive learning rate; the optimisation algorithm Adadelta is a famous example of this. We might wish to play with a toy version of this notion by using a steadily decreasing step size. Assuming that we start with $\eta = \eta_0$, we can scale the step size $\eta_t$ used for the $t$-th iteration according to: $\eta_t = \frac{\eta_0}{t}$. Notice that if we adaptively decrease $\eta$ we need to start with a reasonably large $\eta_0$ (say 1.0 for your example). | Gradient descent explodes if learning rate is too large | The learning rate can seen as step size, $\eta$. As such, gradient descent is taking successive steps in the direction of the minimum. If the step size $\eta$ is too large, it can (plausibly) "jump ov | Gradient descent explodes if learning rate is too large
The learning rate can be seen as a step size, $\eta$. As such, gradient descent is taking successive steps in the direction of the minimum. If the step size $\eta$ is too large, it can (plausibly) "jump over" the minimum we are trying to reach, i.e. we overshoot. This can lead to oscillations around the minimum or in some cases to outright divergence. It is important to note that the step gradient descent takes is a function of the step size $\eta$ as well as of the gradient values $g$. If we are in a local minimum with zero gradient the algorithm will not update the parameters $p$ because the gradient is zero; conversely, if $p$ is on a "steep slope", even a small $\eta$ will lead to a large update in $p$'s values.
In particular, for the case of divergence what happens is that as soon as an oversized step $\eta$ is taken from an initial point $p_{i=0}$, the gradient descent algorithm lands at a point $p_{i=1}$ that is worse than $p_{i=0}$ in terms of cost. At this new but cost-function-wise worse point $p_{i=1}$, when recalculating the gradients, the gradient values are larger, so the next (hopefully corrective) step is even larger. If this next step leads to a point $p_{i=2}$ with even larger error because we overshoot again, we are led to even larger gradient values, leading ultimately to a vicious cycle of ever-increasing gradient values and "exploding coefficients" $p_i$.
In the code you provided you might wish to add a print(gradient(X, y, p)) statement in the param_update function. If that is added, we can monitor the gradient in each iteration and see that for a reasonably valued $\eta$ the gradient values slowly decrease, while for an unreasonably large $\eta$ the gradient values get steadily larger and larger.
Advanced variants of gradient descent use the concept of an adaptive learning rate; the optimisation algorithm Adadelta is a famous example of this. We might wish to play with a toy version of this notion by using a steadily decreasing step size. Assuming that we start with $\eta = \eta_0$, we can scale the step size $\eta_t$ used for the $t$-th iteration according to: $\eta_t = \frac{\eta_0}{t}$. Notice that if we adaptively decrease $\eta$ we need to start with a reasonably large $\eta_0$ (say 1.0 for your example). | Gradient descent explodes if learning rate is too large
The learning rate can seen as step size, $\eta$. As such, gradient descent is taking successive steps in the direction of the minimum. If the step size $\eta$ is too large, it can (plausibly) "jump ov |
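A small self-contained Python sketch of the toy decaying-step-size idea above ($\eta_t = \eta_0/t$) on a least-squares problem; the data are made up for illustration and are not taken from the question's code:
import numpy as np
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
y = X @ np.array([2.0, -1.0]) + rng.normal(scale=0.1, size=50)
def gradient(X, y, p):
    # gradient of the mean squared error (1/n) * ||y - X p||^2
    return -2.0 / len(y) * X.T @ (y - X @ p)
def run_gd(eta0, decay=False, steps=100):
    p = np.zeros(2)
    for t in range(1, steps + 1):
        eta = eta0 / t if decay else eta0   # eta_t = eta_0 / t when decaying
        p -= eta * gradient(X, y, p)
    return p
print("fixed small step    :", run_gd(0.1))
print("decaying from eta0=1:", run_gd(1.0, decay=True))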
52,841 | Gradient descent explodes if learning rate is too large | There are theoretical results which show that Gradient Descent (GD) is guaranteed to converge, given that we pick the right step size $\eta$ according to the problem at hand.
As far as I understand, you want to minimize the least squares cost $f(p) = (1/3)\|y - Xp\|_2^2$, where $p$ is your decision variable and $X$, $y$ are given data. The gradient of this cost is $\nabla f(p) = (2/3)(X^\top Xp - X^\top y)$, in agreement with your code.
In order to choose an $\eta$ that guarantees convergence, we need to analyse the cost function we are minimizing. We say that a function $f$ is $\beta$-smooth if $\|\nabla f(u) - \nabla f(v)\|_2 \leq \beta\|u - v\|_2$, for all $u,v$. For smooth costs, GD converges if we choose $\eta \leq 1/\beta$. Since the least squares cost is smooth, we just need to estimate its $\beta$ parameter. From your problem, we have
\begin{align*}
\|\nabla f(u) - \nabla f(v)\|_2 &= (2/3)\|X^\top Xu - X^\top Xv\|_2 \\
&\leq (2/3)\|X^\top X\|_2\|u - v\|_2 \\
&= (20/3)\|u - v\|_2
\end{align*}
for the $X$ defined in your code. Thus, for this specific cost, we have $\beta = 20/3$, and convergence of GD is guaranteed for $\eta \leq 1/\beta = 0.15$. This is in accordance with your numerical experiments, where GD converged for $\eta = 0.1$, but not for $\eta = 0.3$.
On a final note, notice that $\eta \leq 1/\beta$ is a sufficient, but not necessary condition for convergence. That is, it does not imply that the GD algorithm will always diverge when using $\eta > 1/\beta$. | Gradient descent explodes if learning rate is too large | There are theoretical results which show that Gradient Descent (GD) is guaranteed to converge, given that we pick the right step size $\eta$ according to the problem at hand.
As far as understand, you | Gradient descent explodes if learning rate is too large
There are theoretical results which show that Gradient Descent (GD) is guaranteed to converge, given that we pick the right step size $\eta$ according to the problem at hand.
As far as I understand, you want to minimize the least squares cost $f(p) = (1/3)\|y - Xp\|_2^2$, where $p$ is your decision variable and $X$, $y$ are given data. The gradient of this cost is $\nabla f(p) = (2/3)(X^\top Xp - X^\top y)$, in agreement with your code.
In order to choose an $\eta$ that guarantees convergence, we need to analyse the cost function we are minimizing. We say that a function $f$ is $\beta$-smooth if $\|\nabla f(u) - \nabla f(v)\|_2 \leq \beta\|u - v\|_2$, for all $u,v$. For smooth costs, GD converges if we choose $\eta \leq 1/\beta$. Since the least squares cost is smooth, we just need to estimate its $\beta$ parameter. From your problem, we have
\begin{align*}
\|\nabla f(u) - \nabla f(v)\|_2 &= (2/3)\|X^\top Xu - X^\top Xv\|_2 \\
&\leq (2/3)\|X^\top X\|_2\|u - v\|_2 \\
&= (20/3)\|u - v\|_2
\end{align*}
for the $X$ defined in your code. Thus, for this specific cost, we have $\beta = 20/3$, and convergence of GD is guaranteed for $\eta \leq 1/\beta = 0.15$. This is in accordance with your numerical experiments, where GD converged for $\eta = 0.1$, but not for $\eta = 0.3$.
On a final note, notice that $\eta \leq 1/\beta$ is a sufficient, but not necessary condition for convergence. That is, it does not imply that the GD algorithm will always diverge when using $\eta > 1/\beta$. | Gradient descent explodes if learning rate is too large
There are theoretical results which show that Gradient Descent (GD) is guaranteed to converge, given that we pick the right step size $\eta$ according to the problem at hand.
As far as understand, you |
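A short Python sketch of the bound above: compute $\beta = \tfrac{2}{3}\lVert X^\top X\rVert_2$ for a given design matrix and take $\eta \le 1/\beta$. The matrix below is only a placeholder; with the $X$ from the question's code this evaluates to $\beta = 20/3$ and $\eta \le 0.15$:
import numpy as np
X = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])                       # placeholder design matrix
beta = (2.0 / 3.0) * np.linalg.norm(X.T @ X, 2)  # spectral norm = largest eigenvalue of X^T X
eta_max = 1.0 / beta
print(f"beta = {beta:.3f}, safe step size eta <= {eta_max:.3f}")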
52,842 | Kullback–Leibler divergence | I just wonder why he wants to measure the similarity between the
distributions $p(x|\theta)$ and $p(x|\theta_0)$.
You're kind of asking the wrong question. If we're in a setting where we're using MLE, then the idea behind it is that we're estimating the parameters of our model with the parameters that maximize the likelihood. It will probably be the case that the true likelihood ($p(x|\theta_0)$) isn't actually going to be in the parametric family of likelihoods we're working with!
What he's doing here is showing that performing MLE is equivalent to minimizing the KL divergence between the true likelihood and the family of likelihoods we're using for the MLE. So while the true $p(x|\theta_0)$ might not actually be inside of the family of likelihoods you're performing MLE over, what this tells us is that the MLE from our family of likelihoods will be the closest in this family to the true distribution in KL divergence. This is nice because even if we're off in our model specification, we find that the MLE will still be close to the truth in some sense. | Kullback–Leibler divergence | I just wonder why he wants to measure the similarity between the
distributions $p(x|\theta)$ and $p(x|\theta_0)$.
You're kind of asking the wrong question. If we're in a setting where we're using M | Kullback–Leibler divergence
I just wonder why he wants to measure the similarity between the
distributions $p(x|\theta)$ and $p(x|\theta_0)$.
You're kind of asking the wrong question. If we're in a setting where we're using MLE, then the idea behind it is that we're estimating the parameters of our model with the parameters that maximize the likelihood. It will probably be the case that the true likelihood ($p(x|\theta_0)$) isn't actually going to be in the parametric family of likelihoods we're working with!
What he's doing here is showing that performing MLE is equivalent to minimizing the KL divergence between the true likelihood and the family of likelihoods we're using for the MLE. So while the true $p(x|\theta_0)$ might not actually be inside of the family of likelihoods you're performing MLE over, what this tells us is that the MLE from our family of likelihoods will be the closest in this family to the true distribution in KL divergence. This is nice because even if we're off in our model specification, we find that the MLE will still be close to the truth in some sense. | Kullback–Leibler divergence
I just wonder why he wants to measure the similarity between the
distributions $p(x|\theta)$ and $p(x|\theta_0)$.
You're kind of asking the wrong question. If we're in a setting where we're using M |
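The equivalence mentioned above can be written out in one step. The KL divergence from the true distribution to a member of the model family is
$$
D_{\mathrm{KL}}\big(p(\cdot|\theta_0)\,\|\,p(\cdot|\theta)\big)
= \mathbf E_{\theta_0}\big[\log p(x|\theta_0)\big] - \mathbf E_{\theta_0}\big[\log p(x|\theta)\big],
$$
and the first term does not depend on $\theta$. Minimizing the KL divergence over $\theta$ is therefore the same as maximizing $\mathbf E_{\theta_0}[\log p(x|\theta)]$, and the empirical counterpart of that expectation, $\frac{1}{n}\sum_i \log p(x_i|\theta)$, is exactly (one $n$-th of) the log-likelihood that MLE maximizes.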
52,843 | Same results from bayesian and frequentist hypothesis testing? References needed | With flat priors and a large sample size (and many times without both of these things), (a) Bayesian and frequentist point-estimates will be virtually the same, and (b) the credible and confidence intervals will virtually cover the same range, leading to the same "hypothesis test" decision of "reject" or "fail to reject" the null. (Tim's comment shows this).
However, I will push back on this sentence:
'in large samples and for uninformative priors the results are the same, period'
There is no situation where the "results are the same." Period. Whether or not the confidence/credible interval includes zero is not the totality of the "results." It is one aspect of the results, and it is many times a trivial aspect.
Let's say you estimate a regression coefficient of b = 0.5. Frequentist p-values will tell you the probability of observing your data—or more extreme data—given that the null hypothesis is true. However, using MCMC, you can sample from the posterior to calculate the probability that the coefficient is greater than zero; that is, you have the probability of an alternative hypothesis, given the data.
Using Bayesian estimation, you can also compare probabilities—given the data—of different hypotheses (i.e., "Bayes factors"). You cannot do this in the frequentist paradigm.
As Tim mentioned, the mode of the posterior will be virtually the same thing as the point-estimate in a maximum likelihood, frequentist paradigm. However, in Bayesian estimation, we can look at the median or mean of the posterior, as well, as often our posteriors are not (and should not) be symmetric.
The argument you want to make, however, is a straw-person argument, as it does not make sense to select uniform priors. For example, nobody would choose a flat prior from $-\infty$ to $+\infty$ for a variance, because variances cannot be negative. Similarly, if all of our variables are standardized, we would expect there to be very few cases where coefficients are < -1 or > +1.
There are certain situations where—even if you have uniform priors and large sample sizes—maximum likelihood approaches will not converge. In this case, giving even a weakly-informative prior will allow you to get some type of estimation, instead of your maximum likelihood procedure just saying, "Could not converge." I have run into this with multilevel models.
In short: Having virtually the same point estimates and 95% credible/confidence interval upper and lower bounds does not mean that you have "the same results." We might make certain inductive, big-picture conclusions in the exact same way, but the totality of the results are not the same. I think the wealth of information you get from MCMC posteriors (i.e., you can estimate how uncertain you are about variances of coefficients!) is often overlooked when comparing frequentist and Bayesian methods. | Same results from bayesian and frequentist hypothesis testing? References needed | With flat priors and a large sample size (and many times without both of these things), (a) Bayesian and frequentist point-estimates will be virtually the same, and (b) the credible and confidence int | Same results from bayesian and frequentist hypothesis testing? References needed
With flat priors and a large sample size (and many times without both of these things), (a) Bayesian and frequentist point-estimates will be virtually the same, and (b) the credible and confidence intervals will virtually cover the same range, leading to the same "hypothesis test" decision of "reject" or "fail to reject" the null. (Tim's comment shows this).
However, I will push back on this sentence:
'in large samples and for uninformative priors the results are the same, period'
There is no situation where the "results are the same." Period. Whether or not the confidence/credible interval includes zero is not the totality of the "results." It is one aspect of the results, and it is many times a trivial aspect.
Let's say you estimate a regression coefficient of b = 0.5. Frequentist p-values will tell you the probability of observing your data—or more extreme data—given that the null hypothesis is true. However, using MCMC, you can sample from the posterior to calculate the probability that the coefficient is greater than zero; that is, you have the probability of an alternative hypothesis, given the data.
Using Bayesian estimation, you can also compare probabilities—given the data—of different hypotheses (i.e., "Bayes factors"). You cannot do this in the frequentist paradigm.
As Tim mentioned, the mode of the posterior will be virtually the same thing as the point-estimate in a maximum likelihood, frequentist paradigm. However, in Bayesian estimation, we can look at the median or mean of the posterior, as well, as often our posteriors are not (and should not) be symmetric.
The argument you want to make, however, is a straw-person argument, as it does not make sense to select uniform priors. For example, nobody would choose a flat prior from $-\infty$ to $+\infty$ for a variance, because variances cannot be negative. Similarly, if all of our variables are standardized, we would expect there to be very few cases where coefficients are < -1 or > +1.
There are certain situations where—even if you have uniform priors and large sample sizes—maximum likelihood approaches will not converge. In this case, giving even a weakly-informative prior will allow you to get some type of estimation, instead of your maximum likelihood procedure just saying, "Could not converge." I have run into this with multilevel models.
In short: Having virtually the same point estimates and 95% credible/confidence interval upper and lower bounds does not mean that you have "the same results." We might make certain inductive, big-picture conclusions in the exact same way, but the totality of the results are not the same. I think the wealth of information you get from MCMC posteriors (i.e., you can estimate how uncertain you are about variances of coefficients!) is often overlooked when comparing frequentist and Bayesian methods. | Same results from bayesian and frequentist hypothesis testing? References needed
With flat priors and a large sample size (and many times without both of these things), (a) Bayesian and frequentist point-estimates will be virtually the same, and (b) the credible and confidence int |
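As a small illustration of the posterior-probability point above: given MCMC draws of a coefficient, Pr(b > 0 | data) is just the fraction of draws above zero. The draws below are simulated stand-ins for real MCMC output:
import numpy as np
rng = np.random.default_rng(42)
draws = rng.normal(loc=0.5, scale=0.3, size=4000)    # stand-in for posterior draws of b
prob_positive = (draws > 0).mean()                   # Pr(b > 0 | data)
ci_low, ci_high = np.percentile(draws, [2.5, 97.5])  # 95% credible interval
print(f"Pr(b > 0 | data) = {prob_positive:.3f}, 95% CrI = ({ci_low:.2f}, {ci_high:.2f})")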
52,844 | Same results from bayesian and frequentist hypothesis testing? References needed | This paper of Altham compares Bayesian and non-Bayesian (Fisher's exact) tests for equal proportions between two groups.
It turns out Fisher’s exact test can be understood as taking the priors Beta(1, 0) and Beta(0, 1), which is surprising because using the uniform prior Beta(1,1) would have been more intuitive. Altham notes, “[Fisher’s exact test] seems to correspond to a strong prior belief in negative association of rows and columns.” She also shows that Fisher’s exact test is strictly conservative relative to the Bayesian analysis with uniform priors. | Same results from bayesian and frequentist hypothesis testing? References needed | This paper of Altham compares Bayesian and non-Bayesian (Fisher's exact) tests for equal proportions between two groups.
It turns out Fisher’s exact test can be understood as taking the prior Beta(1, | Same results from bayesian and frequentist hypothesis testing? References needed
This paper of Altham compares Bayesian and non-Bayesian (Fisher's exact) tests for equal proportions between two groups.
It turns out Fisher’s exact test can be understood as taking the priors Beta(1, 0) and Beta(0, 1), which is surprising because using the uniform prior Beta(1,1) would have been more intuitive. Altham notes, “[Fisher’s exact test] seems to correspond to a strong prior belief in negative association of rows and columns.” She also shows that Fisher’s exact test is strictly conservative relative to the Bayesian analysis with uniform priors. | Same results from bayesian and frequentist hypothesis testing? References needed
This paper of Altham compares Bayesian and non-Bayesian (Fisher's exact) tests for equal proportions between two groups.
It turns out Fisher’s exact test can be understood as taking the prior Beta(1, |
52,845 | Why this formula for 5 year trend? | The formula is the linear regression of the series (whatever the variable is), which I will call $y$, on time ($x$), giving units-of-$y$ per year. This is then divided by average $y$ and the result given as a percentage to get a percentage annual growth relative to the mean $y$ (the estimate of the middle-year value of $y$).
Note that the slope of a regression line is $$\hat{\beta}=\frac{\sum_i (y_i-\bar{y}) (x_i-\bar{x})}{\sum_i (x_i-\bar{x})^2}\,.$$
Now for the $x$'s being five consecutive years $(x_i-\bar{x})$ will always be $-2,-1,0,1,2$ and $\sum_i (x_i-\bar{x})^2=10$.
At the same time
\begin{eqnarray}
\sum_i (y_i-\bar{y}) (x_i-\bar{x})&=&\sum_i y_i (x_i-\bar{x}) - \bar{y}\sum_i (x_i-\bar{x})\\
&=&\sum_i y_i (x_i-\bar{x})\,,
\end{eqnarray}
but again the $x_i-\bar{x}$'s are just $-2, ..., 2$ so that's $-2 \, y_1 - 1 \, y_2 + 0 \, y_3 + 1 \, y_4 + 2 \, y_5$.
(Note that $\sum_i (x_i-\bar{x})$ is always $0$, which is why the second term disappeared.)
Consequently the estimated yearly increase using a least squares straight line fit is $\hat{\beta}= (-2 \, y_1 - 1 \, y_2 + 0 \, y_3 + 1 \, y_4 + 2 \, y_5)/10$. This will be in units-of-$y$ per year.
Now to get this to be a yearly relative (or percentage) increase we need to divide by a $y$-value. The mean of the $y$'s, $\bar{y}$ (which is also the estimated middle value in the trend line) is what they've divided by. I don't know why that particular choice was made, rather than some other possible choice -- but for some things I guess this would be a reasonable thing to do.
I don't know if this is standard, but I bet you could find a formula like this (estimated annual linear increase divided by the mean) in some business stats books or a similar source. | Why this formula for 5 year trend? | The formula is the linear regression of the series (whatever the variable is), which I will call $y$, on time ($x$), giving units-of-$y$ per year. This is then divided by average $y$ and the result gi | Why this formula for 5 year trend?
The formula is the linear regression of the series (whatever the variable is), which I will call $y$, on time ($x$), giving units-of-$y$ per year. This is then divided by average $y$ and the result given as a percentage to get a percentage annual growth relative to the mean $y$ (the estimate of the middle-year value of $y$).
Note that the slope of a regression line is $$\hat{\beta}=\frac{\sum_i (y_i-\bar{y}) (x_i-\bar{x})}{\sum_i (x_i-\bar{x})^2}\,.$$
Now for the $x$'s being five consecutive years $(x_i-\bar{x})$ will always be $-2,-1,0,1,2$ and $\sum_i (x_i-\bar{x})^2=10$.
At the same time
\begin{eqnarray}
\sum_i (y_i-\bar{y}) (x_i-\bar{x})&=&\sum_i y_i (x_i-\bar{x}) - \bar{y}\sum_i (x_i-\bar{x})\\
&=&\sum_i y_i (x_i-\bar{x})\,,
\end{eqnarray}
but again the $x_i-\bar{x}$'s are just $-2, ..., 2$ so that's $-2 \, y_1 - 1 \, y_2 + 0 \, y_3 + 1 \, y_4 + 2 \, y_5$.
(Note that $\sum_i (x_i-\bar{x})$ is always $0$, which is why the second term disappeared.)
Consequently the estimated yearly increase using a least squares straight line fit is $\hat{\beta}= (-2 \, y_1 - 1 \, y_2 + 0 \, y_3 + 1 \, y_4 + 2 \, y_5)/10$. This will be in units-of-$y$ per year.
Now to get this to be a yearly relative (or percentage) increase we need to divide by a $y$-value. The mean of the $y$'s, $\bar{y}$ (which is also the estimated middle value in the trend line) is what they've divided by. I don't know why that particular choice was made, rather than some other possible choice -- but for some things I guess this would be a reasonable thing to do.
I don't know if this is standard, but I bet you could find a formula like this (estimated annual linear increase divided by the mean) in some business stats books or a similar source. | Why this formula for 5 year trend?
The formula is the linear regression of the series (whatever the variable is), which I will call $y$, on time ($x$), giving units-of-$y$ per year. This is then divided by average $y$ and the result gi |
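A quick numerical check of the derivation above: the weighted combination $(-2y_1 - y_2 + y_4 + 2y_5)/10$ equals the least-squares slope, and dividing by $\bar y$ gives the percentage version. The five values below are arbitrary:
import numpy as np
y = np.array([100.0, 104.0, 109.0, 113.0, 118.0])  # arbitrary five-year series
x = np.arange(1, 6)
slope_formula = (-2 * y[0] - y[1] + y[3] + 2 * y[4]) / 10
slope_ols = np.polyfit(x, y, 1)[0]                 # least-squares slope, units of y per year
print(slope_formula, slope_ols)                    # identical
print("trend, % per year:", 100 * slope_formula / y.mean())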
52,846 | Removing features with low variance in classification models | If variables represent different physical quantities their scaling can be different. By changing units (e.g. from measuring distance in kilometers to measuring distance in nanometers) you can change the scaling of a variable arbitrarily, so why would you even consider removing low-variance variables?
What does make sense though is that if you have a large number of predictors, they are probably highly correlated with each other. So you can use PCA on the training data to collect correlated features into principal components. Again, however, the scaling is relevant here. So unless all your variables are really measured in the same units, I would z-score (remove mean, divide by std) the data first and then apply PCA. | Removing features with low variance in classification models | If variables represent different physical quantities their scaling can be different. By changing units (e.g. from measuring distance in kilometers to measuring distance in nanometers) you can change t | Removing features with low variance in classification models
If variables represent different physical quantities their scaling can be different. By changing units (e.g. from measuring distance in kilometers to measuring distance in nanometers) you can change the scaling of a variable arbitrarily, so why would you even consider removing low-variance variables?
What does make sense though is that if you have a large number of predictors, they are probably highly correlated with each other. So you can use PCA on the training data to collect correlated features into principal components. Again, however, the scaling is relevant here. So unless all your variables are really measured in the same units, I would z-score (remove mean, divide by std) the data first and then apply PCA. | Removing features with low variance in classification models
If variables represent different physical quantities their scaling can be different. By changing units (e.g. from measuring distance in kilometers to measuring distance in nanometers) you can change t |
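A minimal scikit-learn sketch of the suggestion above (z-score the features, then PCA, both fitted on the training data only); the array shapes are arbitrary:
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
rng = np.random.default_rng(0)
X_train, X_test = rng.normal(size=(100, 20)), rng.normal(size=(30, 20))
pipe = make_pipeline(StandardScaler(), PCA(n_components=5))
Z_train = pipe.fit_transform(X_train)   # fit scaler and PCA on training data only
Z_test = pipe.transform(X_test)         # apply the same transformation to new data
print(Z_train.shape, Z_test.shape)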
52,847 | Removing features with low variance in classification models | It is not always the case that features with low variance are removed. As @appletree stressed, changing the scale of a feature also changes its variance! So it would be dangerous to discard them.
However, if the variance is zero, it means that the feature is constant and will not improve the performance of the model (it may even make some matrices singular). In that case, it should be removed. Similarly, if only a handful of observations differ from a constant value, the variance will also be very low.
This situation, where a feature has been poorly evaluated or brings little information because it is (almost) constant, can be a justification to remove the column. | Removing features with low variance in classification models | It is not always the case that features with low variance are removed. As @appletree stressed, changing the scale of a feature also change its variance ! So it would be dangerous to discard them.
Howe | Removing features with low variance in classification models
It is not always the case that features with low variance are removed. As @appletree stressed, changing the scale of a feature also changes its variance! So it would be dangerous to discard them.
However, if the variance is zero, it means that the feature is constant and will not improve the performance of the model (or make some matrices singular). In that case, it should be removed. Or if only a handful of observations differ from a constant value, the variance will also be very low.
This situation, where a feature has been poorly evaluated, or brings little information because it is (almost) constant can be a justification to remove a column. | Removing features with low variance in classification models
It is not always the case that features with low variance are removed. As @appletree stressed, changing the scale of a feature also change its variance ! So it would be dangerous to discard them.
Howe |
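For the zero-variance case described in the answer above, scikit-learn's VarianceThreshold (default threshold 0) is one way to drop constant columns; a small sketch with a made-up matrix:
import numpy as np
from sklearn.feature_selection import VarianceThreshold
X = np.array([[1.0, 0.0, 3.0],
              [2.0, 0.0, 1.0],
              [5.0, 0.0, 2.0]])                 # the second column is constant
selector = VarianceThreshold(threshold=0.0)     # keep only features with variance > 0
X_reduced = selector.fit_transform(X)
print(X_reduced.shape)                          # (3, 2): the constant column is gone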
52,848 | Removing features with low variance in classification models | If there is high correlation between 2 features then you would discard one of them. The features that are removed because of low variance have very low variance, that would be near to zero. You should always perform all the tests with existing data before discarding any features. Variables which are all 0's or have near to zero variance can be dropped due to less predictive power. In some cases it might cause a problem as well. Like in Naive Bayes Classifier, if one value is 0, then the entire equation becomes 0. Hence we use Laplace Smoothing where we add 1 to each feature count so that it doesn't come down to zero. | Removing features with low variance in classification models | If there is high correlation between 2 features then you would discard one of them. The features that are removed because of low variance have very low variance, that would be near to zero. You should | Removing features with low variance in classification models
If there is high correlation between 2 features then you would discard one of them. The features that are removed because of low variance have very low variance, that would be near to zero. You should always perform all the tests with existing data before discarding any features. Variables which are all 0's or have near to zero variance can be dropped due to less predictive power. In some cases it might cause a problem as well. Like in Naive Bayes Classifier, if one value is 0, then the entire equation becomes 0. Hence we use Laplace Smoothing where we add 1 to each feature count so that it doesn't come down to zero. | Removing features with low variance in classification models
If there is high correlation between 2 features then you would discard one of them. The features that are removed because of low variance have very low variance, that would be near to zero. You should |
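As an illustration of the Laplace (add-one) smoothing mentioned above, a minimal sketch comparing unsmoothed and smoothed feature-given-class estimates (the counts are made up):
import numpy as np
feature_counts = np.array([3, 0, 5])     # made-up counts; the second feature never occurs in this class
total = feature_counts.sum()
unsmoothed = feature_counts / total                               # contains an exact zero
smoothed = (feature_counts + 1) / (total + len(feature_counts))   # Laplace smoothing, alpha = 1
print(unsmoothed)    # [0.375 0.    0.625]
print(smoothed)      # roughly [0.364 0.091 0.545]
In scikit-learn's MultinomialNB the same idea is controlled by the alpha parameter (alpha=1 is the default add-one smoothing).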
52,849 | How to divide $n$ students to $m$ groups so that their level is as close as possible? | If you use the k-means algorithm to cluster your students, it will minimize the sum of squares within each group. This algorithm finds groups by penalizing skill variations in each group. The larger the differences in skill relative to the cluster mean, the larger will be the penalty. So, this should suit your needs.
You can do this very easily in R with the kmeans() command.
However, this algorithm does not enforce equal group sizes. Solutions to this limitation are discussed here. | How to divide $n$ students to $m$ groups so that their level is as close as possible? | If you use the k-means algorithm to cluster your students, it will minimize the sum of squares within each group. This algorithm finds groups by penalizing skill variations in each group. The larger t | How to divide $n$ students to $m$ groups so that their level is as close as possible?
If you use the k-means algorithm to cluster your students, it will minimize the sum of squares within each group. This algorithm finds groups by penalizing skill variations in each group. The larger the differences in skill relative to the cluster mean, the larger will be the penalty. So, this should suit your needs.
You can do this very easily in R with the kmeans() command.
However, this algorithm does not enforce equal group sizes. Solutions to this limitation are discussed here. | How to divide $n$ students to $m$ groups so that their level is as close as possible?
If you use the k-means algorithm to cluster your students, it will minimize the sum of squares within each group. This algorithm finds groups by penalizing skill variations in each group. The larger t |
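The answer above uses R's kmeans(); a roughly equivalent sketch in Python with scikit-learn (the score matrix is made up):
import numpy as np
from sklearn.cluster import KMeans
scores = np.random.rand(200, 3)              # 200 students, 3 made-up test scores
km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(scores)
groups = km.labels_                          # group index for each student
print(np.bincount(groups))                   # group sizes: not guaranteed to be equal
As the answer notes, nothing here forces the groups to have equal sizes.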
52,850 | How to divide $n$ students to $m$ groups so that their level is as close as possible? | The problem of finding the optimal class assignment is equivalent to minimizing the following objective function:
$$ \sum_{i=1}^{10} \sum_{j=1}^{20} \left\| \mathbf x_{ij} - \boldsymbol\mu_i \right\|^2 $$
where:
$$\mathbf x_{ij} = \text{vector of test scores of the } j^{th} \text{ student in the } i^{th} \text{ class}\\
\boldsymbol\mu_{i} = \text{mean of the test score vectors of the students in the } i^{th} \text{ class}$$
There are many ways to minimize such a function, but the one I generally prefer in small problems such as this is simulated annealing. Simulated annealing works by taking some initial proposed solution, and iteratively making tweaks to it (big tweaks at first, then progressively smaller ones) until eventually arriving at a final answer which is generally near-optimal.
In addition to an objective function, simulated annealing also requires a notion of what constitutes a "tweak" (more formally, what constitutes a pair of neighboring states). In this case such a tweak would correspond to swapping two random students from two random classes with one another.
Note: The objective function that I've provided is equivalent to summing over all within-class pairwise distances, as discussed in the Description section of the k-means wiki. | How to divide $n$ students to $m$ groups so that their level is as close as possible? | The problem of finding the optimal class assignment is equivalent to minimizing the following objective function:
$$ \sum_{i=1}^{10} \sum_{j=1}^{20} \left\| \mathbf x_{ij} - \boldsymbol\mu_i \right\|^ | How to divide $n$ students to $m$ groups so that their level is as close as possible?
The problem of finding the optimal class assignment is equivalent to minimizing the following objective function:
$$ \sum_{i=1}^{10} \sum_{j=1}^{20} \left\| \mathbf x_{ij} - \boldsymbol\mu_i \right\|^2 $$
where:
$$\mathbf x_{ij} = \text{vector of test scores of the } j^{th} \text{ student in the } i^{th} \text{ class}\\
\boldsymbol\mu_{i} = \text{mean of the test score vectors of the students in the } i^{th} \text{ class}$$
There are many ways to minimize such a function, but the one I generally prefer in small problems such as this is simulated annealing. Simulated annealing works by taking some initial proposed solution, and iteratively making tweaks to it (big tweaks at first, then progressively smaller ones) until eventually arriving at a final answer which is generally near-optimal.
In addition to an objective function, simulated annealing also requires a notion of what constitutes a "tweak" (more formally, what constitutes a pair of neighboring states). In this case such a tweak would correspond to swapping two random students from two random classes with one another.
Note: The objective function that I've provided is equivalent to summing over all within-class pairwise distances, as discussed in the Description section of the k-means wiki. | How to divide $n$ students to $m$ groups so that their level is as close as possible?
The problem of finding the optimal class assignment is equivalent to minimizing the following objective function:
$$ \sum_{i=1}^{10} \sum_{j=1}^{20} \left\| \mathbf x_{ij} - \boldsymbol\mu_i \right\|^ |
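A compact sketch of the simulated-annealing idea described in the answer above (random swap moves between two classes, geometric cooling); the data, class sizes and schedule are all made-up choices for illustration:
import numpy as np
rng = np.random.default_rng(0)
n_classes, class_size = 10, 20
scores = rng.normal(size=(n_classes * class_size, 3))    # made-up test-score vectors
def objective(assign):
    # sum of squared distances of each student to his/her class mean
    return sum(((scores[assign == c] - scores[assign == c].mean(axis=0)) ** 2).sum()
               for c in range(n_classes))
assign = np.repeat(np.arange(n_classes), class_size)
rng.shuffle(assign)
current = objective(assign)
temp = 1.0
for step in range(20000):
    i, j = rng.integers(len(assign), size=2)
    if assign[i] == assign[j]:
        continue                                         # swapping within a class changes nothing
    assign[i], assign[j] = assign[j], assign[i]          # propose a swap of two students
    proposed = objective(assign)
    if proposed < current or rng.random() < np.exp((current - proposed) / temp):
        current = proposed                               # accept the swap
    else:
        assign[i], assign[j] = assign[j], assign[i]      # undo the swap
    temp *= 0.9995                                       # cool down
print(current)
Because swaps preserve the class sizes, this version keeps the 20-students-per-class constraint throughout.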
52,851 | How to divide $n$ students to $m$ groups so that their level is as close as possible? | Do you think that would be an optimal assignment for the class? I would personally split the groups so that I have the top in each category in different groups. The students can leverage their skills and improve in that way.
I could frame the problem as
$\arg\max\limits_{x \in Cat} Score_x \bigcup \arg\min\limits_{x \in Cat} Score_x$
where you are sampling without replacement. | How to divide $n$ students to $m$ groups so that their level is as close as possible? | Do you think that would be an optimal assignment for the class? I would personally split the groups so that I have the top in each category in different groups. The students can leverage their skills | How to divide $n$ students to $m$ groups so that their level is as close as possible?
Do you think that would be an optimal assignment for the class? I would personally split the groups so that I have the top in each category in different groups. The students can leverage their skills and improve in that way.
I could frame the problem as
$\arg\max\limits_{x \in Cat} Score_x \bigcup \arg\min\limits_{x \in Cat} Score_x$
where you are sampling without replacement. | How to divide $n$ students to $m$ groups so that their level is as close as possible?
Do you think that would be an optimal assignment for the class? I would personally split the groups so that I have the top in each category in different groups. The students can leverage their skills |
52,852 | Mathematical function that maps a vector to an ordered version of that vector | There is a fairly standard notation: given a vector $x=(x_1, x_2, \ldots, x_n)$, its order statistics $(x_{[1]}, x_{[2]}, \ldots, x_{[n]})$ are the permutation of $x$ for which $x_{[1]} \le x_{[2]} \le \cdots \le x_{[n]}$. Some people use parentheses instead of square brackets around the indexes, as in $x_{(i)}$ instead of $x_{[i]}$.
After you have described this function, you are free to name it anything you like, using $^\prime$, "$f$", "$\operatorname{Bob}$", or anything else.
Here's a formal way to do it that many mathematicians would be comfortable with:
Let the permutation group $\mathfrak{S}^n$ act on $\mathbb{R}^n$ by permuting the coordinates: that is, for $\sigma\in\mathfrak{S}^n$ and $x=(x_1,x_2,\ldots, x_n)\in\mathbb{R}^n$, define $$\sigma\cdot x = \left(x_{\sigma(1)},x_{\sigma(2)}, \ldots, x_{\sigma(n)}\right).$$ For any $x\in \mathbb{R}^n$, let $\sigma_x \in \mathfrak{S}^n$ be any permutation of the coordinates of $x$ for which $x_{\sigma_x(1)} \le x_{\sigma_x(2)} \le \cdots \le x_{\sigma_x(n)}.$ Define $f:\mathbb{R}^n\to\mathbb{R}^n$ by $$f(x) = \sigma_x \cdot x.$$
(Because $\sigma_x$ is not unique whenever two or more coordinates of $x$ are the same, you still have to show that $f$ is well-defined, but that is sufficiently straightforward and obvious that you could leave it to the reader.) | Mathematical function that maps a vector to an ordered version of that vector | There is a fairly standard notation: given a vector $x=(x_1, x_2, \ldots, x_n)$, its order statistics $(x_{[1]}, x_{[2]}, \ldots, x_{[n]})$ are the permutation of $x$ for which $x_{[1]} \le x_{[2]} \l | Mathematical function that maps a vector to an ordered version of that vector
There is a fairly standard notation: given a vector $x=(x_1, x_2, \ldots, x_n)$, its order statistics $(x_{[1]}, x_{[2]}, \ldots, x_{[n]})$ are the permutation of $x$ for which $x_{[1]} \le x_{[2]} \le \cdots \le x_{[n]}$. Some people use parentheses instead of square brackets around the indexes, as in $x_{(i)}$ instead of $x_{[i]}$.
After you have described this function, you are free to name it anything you like, using $^\prime$, "$f$", "$\operatorname{Bob}$", or anything else.
Here's a formal way to do it that many mathematicians would be comfortable with:
Let the permutation group $\mathfrak{S}^n$ act on $\mathbb{R}^n$ by permuting the coordinates: that is, for $\sigma\in\mathfrak{S}^n$ and $x=(x_1,x_2,\ldots, x_n)\in\mathbb{R}^n$, define $$\sigma\cdot x = \left(x_{\sigma(1)},x_{\sigma(2)}, \ldots, x_{\sigma(n)}\right).$$ For any $x\in \mathbb{R}^n$, let $\sigma_x \in \mathfrak{S}^n$ be any permutation of the coordinates of $x$ for which $x_{\sigma_x(1)} \le x_{\sigma_x(2)} \le \cdots \le x_{\sigma_x(n)}.$ Define $f:\mathbb{R}^n\to\mathbb{R}^n$ by $$f(x) = \sigma_x \cdot x.$$
(Because $\sigma_x$ is not unique whenever two or more coordinates of $x$ are the same, you still have to show that $f$ is well-defined, but that is sufficiently straightforward and obvious that you could leave it to the reader.) | Mathematical function that maps a vector to an ordered version of that vector
There is a fairly standard notation: given a vector $x=(x_1, x_2, \ldots, x_n)$, its order statistics $(x_{[1]}, x_{[2]}, \ldots, x_{[n]})$ are the permutation of $x$ for which $x_{[1]} \le x_{[2]} \l |
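In code, the map described above is just a sort of the coordinates; a one-line NumPy illustration:
import numpy as np
x = np.array([3.2, -1.0, 3.2, 0.5])
print(np.sort(x))        # [-1.   0.5  3.2  3.2], the vector of order statistics of x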
52,853 | How to have a "None of the above" category in a Logistic Regression? | First of all, you are talking about multinomial regression, not logistic regression. Second, neither logistic regression nor multinomial regression are classifiers. Logistic regression and multinomial regression both predict probabilities of belonging to some class. To make a classifier of them you need a decision rule (if the probability is greater than some value, classify this as some class).
It is up to you to define the decision rule. You are free to decide that if the probability of belonging to any class is not greater than some value, neither of the categories can be chosen. | How to have a "None of the above" category in a Logistic Regression? | First of all, you are talking about multinomial regression, not logistic regression. Second, neither logistic regression nor multinomial | How to have a "None of the above" category in a Logistic Regression?
First of all, you are talking about multinomial regression, not logistic regression. Second, neither logistic regression nor multinomial regression are classifiers. Logistic regression and multinomial regression both predict probabilities of belonging to some class. To make a classifier of them you need a decision rule (if the probability is greater than some value, classify this as some class).
It is up to you to define the decision rule. You are free to decide that if the probability of belonging to any class is not greater than some value, neither of the categories can be chosen. | How to have a "None of the above" category in a Logistic Regression?
First of all, you are talking about multinomial regression, not logistic regression. Second, neither logistic regression nor multinomial regression are classifiers. Logistic regression and multinomial |
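A minimal sketch of such a decision rule with scikit-learn (the synthetic data and the 0.6 cut-off are arbitrary choices for illustration):
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
X, y = make_classification(n_samples=300, n_classes=3, n_informative=5, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)
proba = clf.predict_proba(X)                   # class-membership probabilities
best = proba.argmax(axis=1)
pred = np.where(proba.max(axis=1) >= 0.6, clf.classes_[best], -1)   # -1 stands for "none of the above"
print(np.unique(pred, return_counts=True))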
52,854 | How to have a "None of the above" category in a Logistic Regression? | No way, you either have some of the non-class data already and set them into a 6th class and train the model with the 6th class. Or you set some thresholds for scoring the likeliness that a sample falls into each class evaluated by something like hidden markov model, but the accuracy would be worse than machine learning models. And as you don't even have non-class data now, you can't really have a fair classifier that can distinguish the true non-class data. | How to have a "None of the above" category in a Logistic Regression? | No way, you either have some of the non-class data already and set them into a 6th class and train the model with the 6th class. Or you set some thresholds for scoring the likeliness that a sample fal | How to have a "None of the above" category in a Logistic Regression?
No way, you either have some of the non-class data already and set them into a 6th class and train the model with the 6th class. Or you set some thresholds for scoring the likeliness that a sample falls into each class evaluated by something like hidden markov model, but the accuracy would be worse than machine learning models. And as you don't even have non-class data now, you can't really have a fair classifier that can distinguish the true non-class data. | How to have a "None of the above" category in a Logistic Regression?
No way, you either have some of the non-class data already and set them into a 6th class and train the model with the 6th class. Or you set some thresholds for scoring the likeliness that a sample fal |
52,855 | $R^2$ Score Vs OOB Score Random Forest | In a cross-sectional data set (no time series or panel data), the OOB estimate of true performance of a random forest is usually very accurate and in my opinion can even replace (cross-)validation. Put differently, you can trust the OOB accuracy in such cases. This is in constrast to the insample (training set) accuracy: By construction, random forests tend to extremely overfit on the training data because the individual trees are usually very deep and unstable. So don't get lured by an insample accuracy/R-squared of 97%.
One warning: According to your description, you have used the test set to optimize hyperparameters of the model. This is inappropriate. The role of a test set is to get an impression of the performance of the final model. So you basically use it just once. There is no point of using the hold-out test sample in model optimization. It is very easy to overfit on the test data! That would be the role of a separate validation set, cross-validation or the OOB info. | $R^2$ Score Vs OOB Score Random Forest | In a cross-sectional data set (no time series or panel data), the OOB estimate of true performance of a random forest is usually very accurate and in my opinion can even replace (cross-)validation. Pu | $R^2$ Score Vs OOB Score Random Forest
In a cross-sectional data set (no time series or panel data), the OOB estimate of true performance of a random forest is usually very accurate and in my opinion can even replace (cross-)validation. Put differently, you can trust the OOB accuracy in such cases. This is in constrast to the insample (training set) accuracy: By construction, random forests tend to extremely overfit on the training data because the individual trees are usually very deep and unstable. So don't get lured by an insample accuracy/R-squared of 97%.
One warning: According to your description, you have used the test set to optimize hyperparameters of the model. This is inappropriate. The role of a test set is to get an impression of the performance of the final model. So you basically use it just once. There is no point of using the hold-out test sample in model optimization. It is very easy to overfit on the test data! That would be the role of a separate validation set, cross-validation or the OOB info. | $R^2$ Score Vs OOB Score Random Forest
In a cross-sectional data set (no time series or panel data), the OOB estimate of true performance of a random forest is usually very accurate and in my opinion can even replace (cross-)validation. Pu |
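A small sketch of reading the OOB estimate directly from a scikit-learn random forest (the regression data are synthetic):
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)
rf = RandomForestRegressor(n_estimators=300, oob_score=True, random_state=0).fit(X, y)
print(rf.score(X, y))    # in-sample R^2, typically very optimistic
print(rf.oob_score_)     # OOB R^2, a much more honest estimate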
52,856 | $R^2$ Score Vs OOB Score Random Forest | How you split your data is fine but it's not right to use your test case for improving your model.
... but this is the highest test accuracy ...
You can't do that. You should look at the cross-validation accuracy, that is your OOB.
It's fine to look at $R^2$, but OOB is generally considered the most unbiased approach. You should prefer OOB to $R^2$ unless you have good reasons. | $R^2$ Score Vs OOB Score Random Forest | How you split your data is fine but it's not right to use your test case for improving your model.
... but this is the highest test accuracy ...
You can't do that. You should look at the cross-valid | $R^2$ Score Vs OOB Score Random Forest
How you split your data is fine but it's not right to use your test case for improving your model.
... but this is the highest test accuracy ...
You can't do that. You should look at the cross-validation accuracy, that is your OOB.
It's fine to look at $R^2$, but OOB is generally considered the most unbiased approach. You should prefer OOB to $R^2$ unless you have good reasons. | $R^2$ Score Vs OOB Score Random Forest
How you split your data is fine but it's not right to use your test case for improving your model.
... but this is the highest test accuracy ...
You can't do that. You should look at the cross-valid |
52,857 | Why is $\frac{1}{n}\sum_{i=1}^{n}(y_i-\bar{y})^2=\frac{1}{n^2}\sum_{i<j}(y_i-y_j)^2$ | Recall that, if $\text{var}(X_i)=\sigma^2$ then
$$\mathbb{E}[(X_i-X_j)^2]=\text{var}(X_i-X_j)=2\sigma^2$$
The expansion that helps is
\begin{align*}
\sum_{i<j}(y_i-y_j)^2&=\frac{1}{2}\sum_{i<j}(y_i-y_j)^2+\frac{1}{2}\sum_{i>j}(y_i-y_j)^2\qquad\text{[by symmetry]}\\&=\frac{1}{2}\sum_{i<j}(y_i-y_j)^2+\frac{1}{2}\sum_{i>j}(y_i-y_j)^2+\underbrace{\frac{1}{2}\sum_{i=j}(y_i-y_j)^2}_{\text{equal to }0}\\&=\frac{1}{2}\sum_{i,j}(y_i-y_j)^2\\&=\frac{1}{2}\sum_{i,j}(y_i-\bar{y}+\bar{y}-y_j)^2\\&=\frac{1}{2}\sum_{i,j}\left\{(y_i-\bar{y})^2+(\bar{y}-y_j)^2+2(y_i-\bar{y})(\bar{y}-y_j)\right\}\\&=\frac{1}{2}\sum_{i,j}(y_i-\bar{y})^2+\frac{1}{2}\sum_{i,j}(\bar{y}-y_j)^2+\underbrace{\sum_{i,j}(y_i-\bar{y})(\bar{y}-y_j)}_{\text{equal to }0}\\&=\frac{n}{2}\sum_{i}(y_i-\bar{y})^2+\frac{n}{2}\sum_{j}(\bar{y}-y_j)^2+\overbrace{\sum_{i}(y_i-\bar{y})\sum_{j}(\bar{y}-y_j)}\\&=n\sum_{i}(y_i-\bar{y})^2
\end{align*}
Q.E.D. | Why is $\frac{1}{n}\sum_{i=1}^{n}(y_i-\bar{y})^2=\frac{1}{n^2}\sum_{i<j}(y_i-y_j)^2$ | Recall that, if $\text{var}(X_i)=\sigma^2$ then
$$\mathbb{E}[(X_i-X_j)^2]=\text{var}(X_i-X_j)=2\sigma^2$$
The expansion that helps is
\begin{align*}
\sum_{i<j}(y_i-y_j)^2&=\frac{1}{2}\sum_{i<j}(y_i-y_ | Why is $\frac{1}{n}\sum_{i=1}^{n}(y_i-\bar{y})^2=\frac{1}{n^2}\sum_{i<j}(y_i-y_j)^2$
Recall that, if $\text{var}(X_i)=\sigma^2$ then
$$\mathbb{E}[(X_i-X_j)^2]=\text{var}(X_i-X_j)=2\sigma^2$$
The expansion that helps is
\begin{align*}
\sum_{i<j}(y_i-y_j)^2&=\frac{1}{2}\sum_{i<j}(y_i-y_j)^2+\frac{1}{2}\sum_{i>j}(y_i-y_j)^2\qquad\text{[by symmetry]}\\&=\frac{1}{2}\sum_{i<j}(y_i-y_j)^2+\frac{1}{2}\sum_{i>j}(y_i-y_j)^2+\underbrace{\frac{1}{2}\sum_{i=j}(y_i-y_j)^2}_{\text{equal to }0}\\&=\frac{1}{2}\sum_{i,j}(y_i-y_j)^2\\&=\frac{1}{2}\sum_{i,j}(y_i-\bar{y}+\bar{y}-y_j)^2\\&=\frac{1}{2}\sum_{i,j}\left\{(y_i-\bar{y})^2+(\bar{y}-y_j)^2+2(y_i-\bar{y})(\bar{y}-y_j)\right\}\\&=\frac{1}{2}\sum_{i,j}(y_i-\bar{y})^2+\frac{1}{2}\sum_{i,j}(\bar{y}-y_j)^2+\underbrace{\sum_{i,j}(y_i-\bar{y})(\bar{y}-y_j)}_{\text{equal to }0}\\&=\frac{n}{2}\sum_{i}(y_i-\bar{y})^2+\frac{n}{2}\sum_{j}(\bar{y}-y_j)^2+\overbrace{\sum_{i}(y_i-\bar{y})\sum_{j}(\bar{y}-y_j)}\\&=n\sum_{i}(y_i-\bar{y})^2
\end{align*}
Q.E.D. | Why is $\frac{1}{n}\sum_{i=1}^{n}(y_i-\bar{y})^2=\frac{1}{n^2}\sum_{i<j}(y_i-y_j)^2$
Recall that, if $\text{var}(X_i)=\sigma^2$ then
$$\mathbb{E}[(X_i-X_j)^2]=\text{var}(X_i-X_j)=2\sigma^2$$
The expansion that helps is
\begin{align*}
\sum_{i<j}(y_i-y_j)^2&=\frac{1}{2}\sum_{i<j}(y_i-y_ |
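A quick numerical check of the identity proved above, with an arbitrary vector:
import numpy as np
y = np.array([2.0, 5.0, 7.0, 11.0, 3.0])
n = len(y)
lhs = ((y - y.mean()) ** 2).sum() / n
rhs = sum((y[i] - y[j]) ** 2 for i in range(n) for j in range(i + 1, n)) / n ** 2
print(lhs, rhs)          # both equal 10.24 for this vector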
52,858 | What is the difference between sample space and random variable? | From statistical inference by Casella and Berger,
Definition 1.1.1 The set, $S$, of all possible outcomes of a particular experiment is called the sample space for the experiment.
So sample space can be thought of as all possible observations one could make from a particular experiment. A sample space for a coin toss is a set $\{H, T\}$; a sample space for rolling a six-sided die is a set $\{1, 2, 3, 4, 5, 6\}$.
Definition 1.4.1 A random variable is a function from a sample space
$S$ into the real numbers
so random variable can be thought of as a function. The notation used for random variable is an uppercase letter. So if we have a random variable that maps sample space to real numbers, we have
$$X: S \to \mathbb{R}$$
No one really expresses random variables this way; instead, it's often denoted as $X$.
If that random variable $X$ is a set of possible values from a random experiment, then
$$X: S \to S$$
so random variable is an identity function. | What is the difference between sample space and random variable? | From statistical inference by Casella and Berger,
Definition 1.1.1 The set, $S$, of all possible outcomes of a particular experiment is called the sample space for the experiment.
So sample space c | What is the difference between sample space and random variable?
From statistical inference by Casella and Berger,
Definition 1.1.1 The set, $S$, of all possible outcomes of a particular experiment is called the sample space for the experiment.
So sample space can be thought of as all possible observations one could make from a particular experiment. A sample space for a coin toss is a set $\{H, T\}$; a sample space for rolling a six-sided die is a set $\{1, 2, 3, 4, 5, 6\}$.
Definition 1.4.1 A random variable is a function from a sample space
$S$ into the real numbers
so random variable can be thought of as a function. The notation used for random variable is an uppercase letter. So if we have a random variable that maps sample space to real numbers, we have
$$X: S \to \mathbb{R}$$
No one really expresses random variables this way; instead, it's often denoted as $X$.
If that random variable $X$ is a set of possible values from a random experiment, then
$$X: S \to S$$
so random variable is an identity function. | What is the difference between sample space and random variable?
From statistical inference by Casella and Berger,
Definition 1.1.1 The set, $S$, of all possible outcomes of a particular experiment is called the sample space for the experiment.
So sample space c |
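A tiny sketch of "a random variable is a function on the sample space", with the usual (but arbitrary) coin-toss mapping:
sample_space = ["H", "T"]      # outcomes of a coin toss
X = {"H": 1, "T": 0}           # a random variable: a map from outcomes to numbers
outcome = "H"                  # one realisation of the experiment
print(X[outcome])              # the value the random variable takes: 1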
52,859 | What is the difference between sample space and random variable? | A sample space is the SET of values a random variable can take.
You can think of random variable as an unopened box. This unopened box contains each member of the sample space with some probability.
In your example of a dice roll, the sample space is {1,2,3,4,5,6}. The random variable that represents a roll of the dice has a probability of 1/6 of taking on each of these 6 values. | What is the difference between sample space and random variable? | A sample space is the SET of values a random variable can take.
You can think of random variable as an unopened box. This unopened box contains each member of the sample space with some probability. | What is the difference between sample space and random variable?
A sample space is the SET of values a random variable can take.
You can think of random variable as an unopened box. This unopened box contains each member of the sample space with some probability.
In your example of a dice roll, the sample space is {1,2,3,4,5,6}. The random variable that represents a roll of the dice has a probability of 1/6 of taking on each of these 6 values. | What is the difference between sample space and random variable?
A sample space is the SET of values a random variable can take.
You can think of random variable as an unopened box. This unopened box contains each member of the sample space with some probability. |
52,860 | What is the difference between sample space and random variable? | Amazonian's answer is wrong. The sample space IS NOT the set of values a random variable can take. The sample space is the domain upon which a random variable is defined. The example Amazonian gave is one example of a random variable whose values happen to be the elements in the sample space.
A random variable is a function that assigns a value to every element in the sample space. There is nothing stopping you from defining a random variable X on the sample space S = {1, 2, 3, 4, 5, 6} where X = 10 * s for s in S. Then the values that X can take are {10, 20, 30, 40, 50, 60}. | What is the difference between sample space and random variable? | Amazonian's answer is wrong. The sample space IS NOT the set of values a random variable can take. The sample space is the domain upon which a random variable is defined. The example Amazonian gave is | What is the difference between sample space and random variable?
Amazonian's answer is wrong. The sample space IS NOT the set of values a random variable can take. The sample space is the domain upon which a random variable is defined. The example Amazonian gave is one example of a random variable whose values happen to be the elements in the sample space.
A random variable is a function that assigns a value to every element in the sample space. There is nothing stopping you from defining a random variable X on the sample space S = {1, 2, 3, 4, 5, 6} where X = 10 * s for s in S. Then the values that X can take are {10, 20, 30, 40, 50, 60}. | What is the difference between sample space and random variable?
Amazonian's answer is wrong. The sample space IS NOT the set of values a random variable can take. The sample space is the domain upon which a random variable is defined. The example Amazonian gave is |
52,861 | What is the difference between sample space and random variable? | When the sample space consists
exclusively of real numbers (as in your examples),
there is no difference between the sample space and the random variable.
The difference arises in other situations, because
the sample space may be a set of arbitrary elements, e.g. $\{\color{red} {\text{red}}, \color{green} {\text{green}}, \color{blue} {\text{blue}}\}$.
With such freedom of possible outcomes it is difficult to work. So the idea arose of working not with arbitrary elements, but only with real numbers.
To achieve this, the first step is to map such elements to real numbers, e.g.
\begin{aligned}
\color{red} {\text{red }} &\mapsto 6.72\\
\color{green} {\text{green}} &\mapsto -2\\
\color{blue} {\text{blue}} &\mapsto 19.5
\end{aligned}
And that mapping is called a random variable.
So now there is a difference, because
the sample space is $\{\color{red} {\text{red}}, \color{green} {\text{green}}, \color{blue} {\text{blue}}\}$, whereas
the random variable is that mapping, often freely interpreted as the set $\{\color{red} {6.72}, \color{green} {-2}, \color{blue} {19.5}\}$.
Note:
Why is that mapping so important?
Because we got rid of problems how to operate with arbitrary things — after choosing such a mapping we may perform calculation with numbers. | What is the difference between sample space and random variable? | When the sample space consists
exclusively of real numbers (as in your examples),
there is no difference between the sample space and the random variable.
The difference arises in other situations, | What is the difference between sample space and random variable?
When the sample space consists
exclusively of real numbers (as in your examples),
there is no difference between the sample space and the random variable.
The difference arises in other situations, because
the sample space may be a set of arbitrary elements, e.g. $\{\color{red} {\text{red}}, \color{green} {\text{green}}, \color{blue} {\text{blue}}\}$.
With such freedom of possible outcomes it is difficult to work. So the idea arose of working not with arbitrary elements, but only with real numbers.
To achieve this, the first step is to map such elements to real numbers, e.g.
\begin{aligned}
\color{red} {\text{red }} &\mapsto 6.72\\
\color{green} {\text{green}} &\mapsto -2\\
\color{blue} {\text{blue}} &\mapsto 19.5
\end{aligned}
And that mapping is called a random variable.
So now there is a difference, because
the sample space is $\{\color{red} {\text{red}}, \color{green} {\text{green}}, \color{blue} {\text{blue}}\}$, whereas
the random variable is that mapping, often freely interpreted as the set $\{\color{red} {6.72}, \color{green} {-2}, \color{blue} {19.5}\}$.
Note:
Why is that mapping so important?
Because we got rid of problems how to operate with arbitrary things — after choosing such a mapping we may perform calculation with numbers. | What is the difference between sample space and random variable?
When the sample space consists
exclusively of real numbers (as in your examples),
there is no difference between the sample space and the random variable.
The difference arises in other situations, |
52,862 | What is the difference between sample space and random variable? | I believe the confusion comes from the choice of examples used. For the sake of clarity, let's use instead the example of a throw of two dice. In that case the sample space would be: {(1,1),(1,2),(2,1),(1,3),(3,1),...,(5,6),(6,5),(6,6)}.
A random variable is just a function having this set as domain and the Reals as codomain (in more advanced statistics one could use other more abstract codomains).
So, for example, one could define a random variable as being the sum of the results of the dice. The different results one could obtain are: {2,3,4,5,6,7,8,9,10,11,12}, and the probability of each one occurring is derived directly from the sample space and the probability associated with each element of the set (in this particular case each element has the same probability, but in general it need not be).
From a formal perspective a sample space is "more fundamental" than a random variable. There can be no random variable without the specification of the elements and its correspondent probabilities of the sample space of its domain.
From a practical perspective, random variables give you two advantages over simple sample spaces:
(i) they are numerical. That means that one can represent, for example, a coin toss {H,T} as the numerical values {1,0}, and then apply all sorts of numerical tools that cannot be applied to qualitative elements.
(ii) they make it easier to deal with elements with different probabilities. In the example given we easily created a distribution with different probabilities based on a simple calculation over a sample space with elements of equal probability. Trying to create the given result artificially could be confusing and laborious.
It is common to just use a random variable without the explicit specification of its sample space. | What is the difference between sample space and random variable? | I believe the confusion comes from the choice of examples used. For sake of clarity lets use instead the example of the throw of 2 dices. Then, in that case the sample space would be: {(1,1),(1,2),(2, | What is the difference between sample space and random variable?
I believe the confusion comes from the choice of examples used. For sake of clarity lets use instead the example of the throw of 2 dices. Then, in that case the sample space would be: {(1,1),(1,2),(2,1),(1,3),(3,1),...,(5,6),(6,5),(6,6)}.
A random variable is just a function having this set as domain and the Reals as codomain (in more advanced statistics one could use other more abstract codomains).
So, for example, one could define a random variable as being the sum of the results of the dice. So the different result one could obtain are: {2,3,4,5,6,7,8,9,10,11,12}, and the probability of each one occurring is derived directly from the sample space and the probability associated with each element of the set (in this particular case each element has the same probability, but in general it could not be).
From a formal perspective a sample space is "more fundamental" than a random variable. There can be no random variable without the specification of the elements and its correspondent probabilities of the sample space of its domain.
From a practical perspective, a random variables give you 2 advantages over simple sample spaces:
(i) they are numerical. That means that one can represent, for example, a coin toss {H,T} into the numerical values {1,0}, and then apply all sort of numerical tools that cannot be applied over qualitative elements.
(ii) it makes easier to deal with elements with different probabilities. In the example given we easily created a distribution with different probabilities based on a simple calculation over a sample space with elements of equal probability. Trying to create the given result artificially could be confusing and laborious.
It is common to just use a random variable without the explicit specification of its sample space. | What is the difference between sample space and random variable?
I believe the confusion comes from the choice of examples used. For sake of clarity lets use instead the example of the throw of 2 dices. Then, in that case the sample space would be: {(1,1),(1,2),(2, |
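A short enumeration of the two-dice example above: the sample space of 36 ordered pairs, the "sum" random variable, and the probabilities it induces:
from collections import Counter
from fractions import Fraction
sample_space = [(a, b) for a in range(1, 7) for b in range(1, 7)]   # 36 equally likely outcomes
sums = Counter(a + b for a, b in sample_space)                      # the random variable "sum of the two dice"
probs = {s: Fraction(c, 36) for s, c in sorted(sums.items())}
print(probs)     # e.g. P(sum = 7) = 6/36 = 1/6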
52,863 | What is the difference between sample space and random variable? | For flipping a coin:
The sample space is {head, tail},
the random variable is {0, 1}.
For rolling a die:
The sample space is {1, 2, 3, 4, 5, 6},
the random variable is {1, 2, 3, 4, 5, 6}, too,
because the sample space already is the set of numbers only.
In other words, the sample space is the set of arbitrary elements, while the random variable is the set of numbers. | What is the difference between sample space and random variable? | For flipping a coin:
The sample space is {head, tail},
the random variable is {0, 1}.
For rolling a die:
The sample space is {1, 2, 3, 4, 5, 6},
the random variable is {1, 2, 3, 4, 5, 6}, too,
beca | What is the difference between sample space and random variable?
For flipping a coin:
The sample space is {head, tail},
the random variable is {0, 1}.
For rolling a die:
The sample space is {1, 2, 3, 4, 5, 6},
the random variable is {1, 2, 3, 4, 5, 6}, too,
because the sample space already is the set of numbers only.
In other words, the sample space is the set of arbitrary elements, while the random variable is the set of numbers. | What is the difference between sample space and random variable?
For flipping a coin:
The sample space is {head, tail},
the random variable is {0, 1}.
For rolling a die:
The sample space is {1, 2, 3, 4, 5, 6},
the random variable is {1, 2, 3, 4, 5, 6}, too,
beca |
52,864 | Why are the ROC curves not smooth? | I know the question is two years old and the technical answer was given in the comments, but a more elaborate answer might help others still struggling with the concepts.
OP's ROC curve is wrong because he used the predicted values of his model instead of the predicted probabilities.
What does this mean?
When a model is trained it learns the relationships between the input variables and the output variable. For each observation the model is shown, the model learns how probable it is that a given observation belongs to a certain class. When the model is presented with the test data it will guess for each unseen observation how probable it is to belong to a given class.
How does the model know if an observation belongs to a class?
During testing the model receives an observation for which it estimates a probability of 51% of belonging to Class X. How does it decide whether to label it as belonging to Class X or not? The researcher will set a threshold telling the model that all observations with a probability under 50% must be classified as Y and all those above must be classified as X. Sometimes the researcher wants to set a stricter rule because they're more interested in correctly predicting a given class like X rather than trying to predict all of them as well.
So your trained model has estimated a probability for each of your observations, but the threshold will ultimately decide in which class your observation will be categorized.
Why does this matter?
The curve created by the ROC plots a point for each of the True positive rate and false positive rate of your model at different threshold levels. This helps the researcher to see the trade-off between the FPR and TPR for all threshold levels.
So when you pass the predicted values instead of the predicted probabilities to your ROC you will only have one point, because these values were calculated using one specific threshold: that point is the TPR and FPR of your model at that single threshold level.
What you need to do is use the probabilities instead and let the threshold vary.
Run your model as such:
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier()
knn_model = knn.fit(X_train,y_train)
#Use the values for your confusion matrix
knn_y_model = knn_model.predict(X=X_test)
# Use the probabilities for your ROC and Precision-recall curves
knn_y_proba = knn_model.predict_proba(X=X_test)
When creating your confusion matrix you will use the values of your model
from mlxtend.plotting import plot_confusion_matrix
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt
fig, ax = plot_confusion_matrix(conf_mat=confusion_matrix(y_test,knn_y_model),
show_absolute=True,show_normed=True,colorbar=True)
plt.title("Confusion matrix - KNN")
plt.ylabel('True label')
plt.xlabel('Predicted label')
When creating your ROC curve you will use the probabilities
import scikitplot as skplt
plot = skplt.metrics.plot_roc(y_test, knn_y_proba)
plt.title("ROC Curves - K-Nearest Neighbors") | Why are the ROC curves not smooth? | I know the question is two years old and the technical answer was given in the comments, but a more elaborate answer might help others still struggling with the concepts.
OP's ROC curve wrong because | Why are the ROC curves not smooth?
I know the question is two years old and the technical answer was given in the comments, but a more elaborate answer might help others still struggling with the concepts.
OP's ROC curve is wrong because he used the predicted values of his model instead of the predicted probabilities.
What does this mean?
When a model is trained it learns the relationships between the input variables and the output variable. For each observation the model is shown, the model learns how probable it is that a given observation belongs to a certain class. When the model is presented with the test data it will guess for each unseen observation how probable it is to belong to a given class.
How does the model know if an observation belongs to a class?
During testing the model receives an observation for which it estimates a probability of 51% of belonging to Class X. How does take the decision to label as belonging to Class X or not? The researcher will set a threshold telling the model that all observations with a probability under 50% must be classified as Y and all those above must be classified as X. Sometimes the researcher wants to set a stricter rule because they're more interested in correctly predicting a given class like X rather than trying to predict all of them as well.
So your trained model has estimated a probability for each of your observations, but the threshold will ultimately decide in which class your observation will be categorized.
Why does this matter?
The curve created by the ROC plots a point for each of the True positive rate and false positive rate of your model at different threshold levels. This helps the researcher to see the trade-off between the FPR and TPR for all threshold levels.
So when you pass the predicted values instead of the predicted probabilities to your ROC you will only have one point because these values were calculated using one specific threshold. Because that point is the TPR and FPR of your model for one specific threshold level.
What you need to do is use the probabilities instead and let the threshold vary.
Run your model as such:
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier()
knn_model = knn.fit(X_train,y_train)
#Use the values for your confusion matrix
knn_y_model = knn_model.predict(X=X_test)
# Use the probabilities for your ROC and Precision-recall curves
knn_y_proba = knn_model.predict_proba(X=X_test)
When creating your confusion matrix you will use the values of your model
from mlxtend.plotting import plot_confusion_matrix
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt
fig, ax = plot_confusion_matrix(conf_mat=confusion_matrix(y_test,knn_y_model),
show_absolute=True,show_normed=True,colorbar=True)
plt.title("Confusion matrix - KNN")
plt.ylabel('True label')
plt.xlabel('Predicted label')
When creating your ROC curve you will use the probabilities
import scikitplot as skplt
plot = skplt.metrics.plot_roc(y_test, knn_y_proba)
plt.title("ROC Curves - K-Nearest Neighbors") | Why are the ROC curves not smooth?
I know the question is two years old and the technical answer was given in the comments, but a more elaborate answer might help others still struggling with the concepts.
OP's ROC curve wrong because |
52,865 | Why is $R^2$ the proportion of total variance of the data explained by the model? | There is an error in your equations, $RSS = \sum(Y_i - \hat{Y}_i)^2$
Maybe it would help not looking at so many equations to understand.
RSS is the sum of the residual variance, basically the sum of all the variance that the model can't explain.
Therefore
$\frac{RSS}{\sum{(Y_i - \bar{Y})^2}}$ is $\frac{unexplained \ variance}{Sum \ of \ all \ variance}$
so
$1- \frac{unexplained \ variance}{Sum \ of \ all \ variance} = \frac{Sum \ of \ all \ variance - unexplained \ variance}{Sum \ of \ all \ variance} = \frac{explained \ variance}{Sum \ of \ all \ variance} $
Does this help? | Why is $R^2$ the proportion of total variance of the data explained by the model? | There is an error in your equations, $RSS = \sum(Y_i - \hat{Y}_i)^2$
Maybe it would help not looking at so many equations to understand.
RSS is the sum of the residual variance, basically the sum of | Why is $R^2$ the proportion of total variance of the data explained by the model?
There is an error in your equations, $RSS = \sum(Y_i - \hat{Y}_i)^2$
Maybe it would help not looking at so many equations to understand.
RSS is the sum of the residual variance, basically the sum of all the variance that the model can't explain.
Therefore
$\frac{RSS}{\sum{(Y_i - \bar{Y})^2}}$ is $\frac{unexplained \ variance}{Sum \ of \ all \ variance}$
so
$1- \frac{unexplained \ variance}{Sum \ of \ all \ variance} = \frac{Sum \ of \ all \ variance - unexplained \ variance}{Sum \ of \ all \ variance} = \frac{explained \ variance}{Sum \ of \ all \ variance} $
Does this help? | Why is $R^2$ the proportion of total variance of the data explained by the model?
There is an error in your equations, $RSS = \sum(Y_i - \hat{Y}_i)^2$
Maybe it would help not looking at so many equations to understand.
RSS is the sum of the residual variance, basically the sum of |
52,866 | Why is $R^2$ the proportion of total variance of the data explained by the model? | We have $TSS = \sum_i (Y_i - \bar{Y})^2,\ RSS = \sum_i(Y_i - \hat{Y}_i)^2,\ ESS = \sum_i(\hat{Y}_i - \bar{Y})^2$
$TSS$ - total variance, $RSS$ - residual variance, $ESS$ - regression variance
From ANOVA identity we know that
$$TSS = RSS + ESS$$
So we have $R^2 = 1 - \frac{RSS}{TSS} = \frac{ESS}{TSS}$. From last equation you can clearly see that $R^2$ states how much "variance" is explained by the regression | Why is $R^2$ the proportion of total variance of the data explained by the model? | We have $TSS = \sum_i (Y_i - \bar{Y})^2,\ RSS = \sum_i(Y_i - \hat{Y}_i)^2,\ ESS = \sum_i(\hat{Y}_i - \bar{Y})^2$
$TSS$ - total variance, $RSS$ - residual variance, $ESS$ - regression variance
From A | Why is $R^2$ the proportion of total variance of the data explained by the model?
We have $TSS = \sum_i (Y_i - \bar{Y})^2,\ RSS = \sum_i(Y_i - \hat{Y}_i)^2,\ ESS = \sum_i(\hat{Y}_i - \bar{Y})^2$
$TSS$ - total variance, $RSS$ - residual variance, $ESS$ - regression variance
From ANOVA identity we know that
$$TSS = RSS + ESS$$
So we have $R^2 = 1 - \frac{RSS}{TSS} = \frac{ESS}{TSS}$. From last equation you can clearly see that $R^2$ states how much "variance" is explained by the regression | Why is $R^2$ the proportion of total variance of the data explained by the model?
We have $TSS = \sum_i (Y_i - \bar{Y})^2,\ RSS = \sum_i(Y_i - \hat{Y}_i)^2,\ ESS = \sum_i(\hat{Y}_i - \bar{Y})^2$
$TSS$ - total variance, $RSS$ - residual variance, $ESS$ - regression variance
From A |
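A quick numerical illustration of the decomposition above for a simple least-squares line (the data are made up):
import numpy as np
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.7])
b1, b0 = np.polyfit(x, y, 1)       # slope and intercept
y_hat = b0 + b1 * x
tss = ((y - y.mean()) ** 2).sum()
rss = ((y - y_hat) ** 2).sum()
ess = ((y_hat - y.mean()) ** 2).sum()
print(tss, rss + ess)              # equal (up to floating point) for OLS with an intercept
print(1 - rss / tss, ess / tss)    # the two equivalent forms of R^2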
52,867 | $X,Y,Z$ are IID ${\rm Poisson}(\lambda)$. Find the correlation between $X$ and $X+Y+Z$ | If $X,Y$ and $Z$ are independent, then their covariance is 0. So is their correlation.
For the sums look at
$$
cov(X,X+Y+Z) = Cov(X,X) + Cov(X,Y) + Cov(X,Z)
$$
the latter 2 are zero (due to independence) and the first one is $Cov(X,X) = Var(X) = \lambda$.
For the variance we get
$$
VAR(X+Y+Z) = VAR(X) + VAR(Y) + VAR(Z) = 3 \lambda
$$
due to independence.
Thus the correlation is
$$
cor(X,X+Y+Z) = \frac{cov(X,X+Y+Z)}{\sqrt{VAR(X+Y+Z)} \sqrt{VAR(X)}} = \frac{\lambda}{\lambda \sqrt{3}} = 1/\sqrt{3}.
$$ | $X,Y,Z$ are IID ${\rm Poisson}(\lambda)$. Find the correlation between $X$ and $X+Y+Z$ | If $X,Y$ and $Z$ are independent, then their covariance is 0. So is their correlation.
For the sums look at
$$
cov(X,X+Y+Z) = Cov(X,X) + Cov(X,Y) + Cov(X,Z)
$$
the latter 2 are zero (due to independe | $X,Y,Z$ are IID ${\rm Poisson}(\lambda)$. Find the correlation between $X$ and $X+Y+Z$
If $X,Y$ and $Z$ are independent, then their covariance is 0. So is their correlation.
For the sums look at
$$
cov(X,X+Y+Z) = Cov(X,X) + Cov(X,Y) + Cov(X,Z)
$$
the latter 2 are zero (due to independence) and the first one is $Cov(X,X) = Var(X) = \lambda$.
For the variance we get
$$
VAR(X+Y+Z) = VAR(X) + VAR(Y) + VAR(Z) = 3 \lambda
$$
due to independence.
Thus the correlation is
$$
cor(X,X+Y+Z) = \frac{cov(X,X+Y+Z)}{\sqrt{VAR(X+Y+Z)} \sqrt{VAR(X)}} = \frac{\lambda}{\lambda \sqrt{3}} = 1/\sqrt{3}.
$$ | $X,Y,Z$ are IID ${\rm Poisson}(\lambda)$. Find the correlation between $X$ and $X+Y+Z$
If $X,Y$ and $Z$ are independent, then their covariance is 0. So is their correlation.
For the sums look at
$$
cov(X,X+Y+Z) = Cov(X,X) + Cov(X,Y) + Cov(X,Z)
$$
the latter 2 are zero (due to independe |
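A quick Monte Carlo check of the $1/\sqrt{3} \approx 0.577$ result above ($\lambda$ is an arbitrary choice):
import numpy as np
rng = np.random.default_rng(0)
lam, n = 4.0, 200_000
x, y, z = rng.poisson(lam, size=(3, n))
print(np.corrcoef(x, x + y + z)[0, 1])    # close to 0.577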
52,868 | Distribution given sum | It can be instructional and satisfying to work this out using basic statistical knowledge, rather than just doing the integrals. It turns out that no calculations are needed!
Here's the circle of ideas:
The $X_i$ can be thought of as waiting times between random events.
When the waiting times have independent identical exponential distributions, the random events are a Poisson process.
When normalized by the last time (given by $Y=X_1+X_2+\cdots + X_n$), these events therefore look like $n-1$ independent uniform values in $[0,1]$.
The values $0 \le X_1/Y \le X_1/Y+X_2/Y \le \cdots \le (X_1/Y+\cdots+X_{n-1}/Y) \le 1$ therefore are the order statistics for $n-1$ iid uniform variables.
The $k^\text{th}$ order statistic has a Beta$(k, n-k)$ distribution.
The PDF of a Beta$(k,n-k)$ distribution is proportional to $x^{k-1}(1-x)^{n-k-1}$ for $0\le x \le 1$, with constant of proportionality equal to (of course!) the reciprocal of the Beta function value $B(k,n-k)$.
Since $Y$ is invariant under any permutation of the $X_i$ and the $X_i$ are exchangeable, all the conditional distributions $f(X_i|Y)$ are the same.
Thus, the distribution of any of the $X_i$ conditional on $Y$ must be $Y$ times a Beta$(1,n-1)$ distribution. Scaling the Beta PDF by $y$ gives the conditional probability element
$$f_{X_1|Y=y}(x)\, \mathrm{d}x = \frac{1}{B(1,n-1)}\left(1-\frac{x}{y}\right)^{n-2}\frac{\mathrm{d}x}{y}$$
for $0 \le X_i \le y$.
This reasoning further implies the $n$-variate distribution of the $X_i$ conditional on $Y$ is $Y$ times a symmetric Dirichlet distribution.
Reference
Balakrishnan, N. and A. Clifford Cohen, Order Statistics and Inference. Academic Press, 1991. | Distribution given sum | It can be instructional and satisfying to work this out using basic statistical knowledge, rather than just doing the integrals. It turns out that no calculations are needed!
Here's the circle of ide | Distribution given sum
It can be instructional and satisfying to work this out using basic statistical knowledge, rather than just doing the integrals. It turns out that no calculations are needed!
Here's the circle of ideas:
The $X_i$ can be thought of as waiting times between random events.
When the waiting times have independent identical exponential distributions, the random events are a Poisson process.
When normalized by the last time (given by $Y=X_1+X_2+\cdots + X_n$), these events therefore look like $n-1$ independent uniform values in $[0,1]$.
The values $0 \le X_1/Y \le X_1/Y+X_2/Y \le \cdots \le (X_1/Y+\cdots+X_{n-1}/Y) \le 1$ therefore are the order statistics for $n-1$ iid uniform variables.
The $k^\text{th}$ order statistic has a Beta$(k, n-k)$ distribution.
The PDF of a Beta$(k,n-k)$ distribution is proportional to $x^{k-1}(1-x)^{n-k-1}$ for $0\le x \le 1$, with constant of proportionality equal to (of course!) the reciprocal of the Beta function value $B(k,n-k)$.
Since $Y$ is invariant under any permutation of the $X_i$ and the $X_i$ are exchangeable, all the conditional distributions $f(X_i|Y)$ are the same.
Thus, the distribution of any of the $X_i$ conditional on $Y$ must be $Y$ times a Beta$(1,n-1)$ distribution. Scaling the Beta PDF by $y$ gives the conditional probability element
$$f_{X_1|Y=y}(x)\, \mathrm{d}x = \frac{1}{B(1,n-1)}\left(1-\frac{x}{y}\right)^{n-2}\frac{\mathrm{d}x}{y}$$
for $0 \le X_i \le y$.
This reasoning further implies the $n$-variate distribution of the $X_i$ conditional on $Y$ is $Y$ times a symmetric Dirichlet distribution.
Reference
Balakrishnan, N. and A. Clifford Cohen, Order Statistics and Inference. Academic Press, 1991. | Distribution given sum
It can be instructional and satisfying to work this out using basic statistical knowledge, rather than just doing the integrals. It turns out that no calculations are needed!
Here's the circle of ide |
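A simulation sketch consistent with the conclusion above: for iid exponential $X_i$, the ratio $X_1/Y$ follows a Beta$(1, n-1)$ distribution ($\lambda$ and $n$ are arbitrary here):
import numpy as np
from scipy import stats
rng = np.random.default_rng(0)
n, lam, reps = 5, 2.0, 100_000
x = rng.exponential(scale=1 / lam, size=(reps, n))
ratio = x[:, 0] / x.sum(axis=1)                        # X_1 / Y
print(ratio.mean())                                    # close to 1/n = 0.2
print(stats.kstest(ratio, "beta", args=(1, n - 1)))    # KS statistic should be tiny: consistent with Beta(1, n-1)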
52,869 | Distribution given sum | This might be the most "text book" answer on $f_{X_1|Y}(x_1|y)$.
Let $Z = X_2 + ... + X_n$. Then $Y = Z + X_1$.
First the joint distribution of $(Z, X_1)$ which is $f_{Z,X_1}(z,x_1) = \dfrac{\lambda^{n-1}}{\Gamma(n-1)}\,z^{n-2}\,e^{-z\lambda}\,\lambda \times e^{-x_1 \lambda}$ for $z \ge 0, x_1 \ge 0$.
Next we get the joint distribution of $(Y, X_1)$, which is $f_{Y,X_1}(y,x_1) = f_{Z,X_1}(z,x_1)$ since the absolute determinant of the Jacobian matrix is 1 (details below).
Therefore, $f_{Y,X_1}(y,x_1) = \dfrac{\lambda^n}{\Gamma(n-1)}\,(y-x_1)^{n-2}\,e^{-y\lambda}$ for $x_1 \ge 0, y \ge x_1$, and $0$ otherwise.
Finally, $f_{X_1|Y}(x_1|y) = \dfrac{f_{Y,X_1}(y,x_1)}{f_Y(y)}$. The rest follows easily from here.
Edit: I didn't see OP's "edit" part at first. So OP has solved it in the same "text book" way. I will leave my post here, in case it should be useful to others.
Edit 2: Details on the Jacobian.
We are doing variable transformations, from $Z, X_1$ to $Y, X_1$, defined as $Y=Z+X_1, X_1 = X_1$. So Jacobian matrix is 2-by-2, with the elements being (going across rows) $\dfrac{\partial Y}{\partial Z}, \dfrac{\partial Y}{\partial X_1}, \dfrac{\partial X_1}{\partial Z}, \dfrac{\partial X_1}{\partial X_1}$, i.e. $1,1,0,1.$ The absolute determinant of this matrix is $1.$ | Distribution given sum | This might be the most "text book" answer on $f_{X_1|Y}(x_1|y)$.
Let $Z = X_2 + ... + X_n$. Then $Y = Z + X_1$.
First the joint distribution of $(Z, X_1)$ which is $f_{Z,X_1}(z,x_1) = \dfrac{\lambda^ | Distribution given sum
This might be the most "text book" answer on $f_{X_1|Y}(x_1|y)$.
Let $Z = X_2 + ... + X_n$. Then $Y = Z + X_1$.
First the joint distribution of $(Z, X_1)$ which is $f_{Z,X_1}(z,x_1) = \dfrac{\lambda^{n-1}}{\Gamma(n-1)}\,z^{n-2}\,e^{-z\lambda}\,\lambda \times e^{-x_1 \lambda}$ for $z \ge 0, x_1 \ge 0$.
Next we get the joint distribution of $(Y, X_1)$, which is $f_{Y,X_1}(y,x_1) = f_{Z,X_1}(z,x_1)$ since the absolute determinant of the Jacobian matrix is 1 (details below).
Therefore, $f_{Y,X_1}(y,x_1) = \dfrac{\lambda^n}{\Gamma(n-1)}\,(y-x_1)^{n-2}\,e^{-y\lambda}$ for $x_1 \ge 0, y \ge x_1$, and $0$ otherwise.
Finally, $f_{X_1|Y}(x_1|y) = \dfrac{f_{Y,X_1}(y,x_1)}{f_Y(y)}$. The rest follows easily from here.
Edit: I didn't see OP's "edit" part at first. So OP has solved it in the same "text book" way. I will leave my post here, in case it should be useful to others.
Edit 2: Details on the Jacobian.
We are doing variable transformations, from $Z, X_1$ to $Y, X_1$, defined as $Y=Z+X_1, X_1 = X_1$. So Jacobian matrix is 2-by-2, with the elements being (going across rows) $\dfrac{\partial Y}{\partial Z}, \dfrac{\partial Y}{\partial X_1}, \dfrac{\partial X_1}{\partial Z}, \dfrac{\partial X_1}{\partial X_1}$, i.e. $1,1,0,1.$ The absolute determinant of this matrix is $1.$ | Distribution given sum
This might be the most "text book" answer on $f_{X_1|Y}(x_1|y)$.
Let $Z = X_2 + ... + X_n$. Then $Y = Z + X_1$.
First the joint distribution of $(Z, X_1)$ which is $f_{Z,X_1}(z,x_1) = \dfrac{\lambda^ |
52,870 | What's the difference between prior and marginal probabilities? | $P(S=s)$ and $P(R=r)$ both are marginal probabilities from the following table
$$
\begin{array}{c|cc|c}
& R=0 & R=1 \\
\hline
S=0 & 0.20 & 0.08 & 0.28 \\
S=1 & 0.70 & 0.02 & 0.72 \\
\hline
& 0.90 & 0.10 &
\end{array}
$$
Given such table, you can calculate conditional probabilities $P(S \mid R)$, or $P(R \mid S)$ by applying Bayes theorem, e.g.
$$ P(S \mid R) = \frac{P(R \mid S) \, P(S)}{P(R)} = \frac{P(R \cap S)}{P(R)} $$
the same way you could calculate $P(R \mid S)$. Notice that to apply it you need to know either conditional, or joint probabilities. This is a basic application of Bayes theorem and it has many nice applications (see e.g. here).
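As a small worked example from the table above:
$$ P(S=1 \mid R=1) = \frac{P(R=1 \cap S=1)}{P(R=1)} = \frac{0.02}{0.10} = 0.2, $$
so conditioning on $R=1$ moves the probability of $S=1$ from its marginal value $0.72$ down to $0.2$.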
Now, an important thing to notice: applying Bayes' theorem is not the same as using Bayesian statistics. $P(S)$ in your example is no more a prior than $P(R)$ is. Moreover, to calculate the "posterior" probability you need to know the joint or conditional probabilities. If you are thinking of some simple example like "there is 0.7 probability that Jack has stolen an orange from the shop", you cannot apply Bayes' theorem to such a problem by assuming that in your opinion the probability is, for example, 0.3, unless you also know the joint probabilities (the probability that he is guilty when you assume he is, etc.) or the conditional probabilities (the probability that you assume he is guilty given the fact that he is guilty). This is not the way we use priors in statistics.
When applying Bayes' theorem in statistics we have some data $X$ that can be described using a probability density function $f_\theta$, but we do not know the value of its parameter $\theta$. To estimate $\theta$ we can use many different statistical approaches, for example maximum likelihood estimation, which maximizes the likelihood function
$$ \DeclareMathOperator*{\argmax}{arg\,max} \argmax_{\theta} f_\theta( X ) $$
Another approach to the problem is to include some prior information in the process of estimating the parameter and use a Bayesian approach. This is done using Bayes' theorem, but in a different way. First, we assume some probability distribution for $\theta$, let's call it $g$, and assume a priori that the unknown parameter follows this distribution. We then use Bayes' theorem to combine the two sources of information: our a priori assumptions about $\theta$, that is, our prior $g$, and the information contained in the data, that is, the likelihood function $f_\theta(X)$, so as to obtain the posterior $g(\theta | X)$:
$$ g(\theta | X) \propto f_\theta(X) \, g(\theta) $$
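As a minimal numerical illustration of this formula (my own sketch; the Beta(2, 2) prior and the binomial data are made up and are not from the original answer), a grid approximation in R:
theta <- seq(0, 1, length.out = 1001)                # grid of parameter values
prior <- dbeta(theta, 2, 2)                          # assumed prior g(theta)
lik   <- dbinom(7, size = 10, prob = theta)          # likelihood f_theta(X): 7 successes in 10 trials
post  <- prior * lik                                 # unnormalised posterior
post  <- post / (sum(post) * (theta[2] - theta[1]))  # normalise so it integrates to about 1
theta[which.max(post)]                               # posterior mode, about 0.67 here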
If this still sounds complicated, you can start by going through the many other questions tagged bayesian for lots of examples. There are also many good introductory books to start with, e.g. Bayesian Data Analysis by Andrew Gelman et al., or Doing Bayesian Data Analysis, Second Edition:
A Tutorial with R, JAGS, and Stan by John K. Kruschke. | What's the difference between prior and marginal probabilities? | $P(S=s)$ and $P(R=r)$ both are marginal probabilities from the following table
$$
\begin{array}{c|cc|c}
& R=0 & R=1 \\
\hline
S=0 & 0.20 & 0.08 & 0.28 \\
S=1 & 0.70 & 0.02 & 0.7 | What's the difference between prior and marginal probabilities?
$P(S=s)$ and $P(R=r)$ both are marginal probabilities from the following table
$$
\begin{array}{c|cc|c}
& R=0 & R=1 \\
\hline
S=0 & 0.20 & 0.08 & 0.28 \\
S=1 & 0.70 & 0.02 & 0.72 \\
\hline
& 0.90 & 0.10 &
\end{array}
$$
Given such table, you can calculate conditional probabilities $P(S \mid R)$, or $P(R \mid S)$ by applying Bayes theorem, e.g.
$$ P(S \mid R) = \frac{P(R \mid S) \, P(S)}{P(R)} = \frac{P(R \cap S)}{P(R)} $$
the same way you could calculate $P(R \mid S)$. Notice that to apply it you need to know either conditional, or joint probabilities. This is a basic application of Bayes theorem and it has many nice applications (see e.g. here).
Now important thing to notice: applying Bayes theorem is not the same as using Bayesian statistics. $P(S)$ in your example is not more prior, then $P(R)$. Moreover, to calculate the "posterior" probability you need to know the joint or conditional probabilities. If you are thinking of some simple example like "there is 0.7 probability that Jack has stolen an orange from the shop", you cannot apply Bayes theorem to such problem by assuming that in your opinion the probability is, for example, 0.3, unless you also know the joint probabilities (probability that he is guilty when you assume he is etc.), or conditional probabilities (probability that you assume that he is guilty given the fact that he is guilty). This is not the way how we use priors in statistics.
When applying Bayes theorem in statistics we have some data $X$ that can be described using probability density function $f_\theta$, but we do not know the value of it's parameter $\theta$. To estimate $\theta$ we can use many different statistical approaches, for example, maximum likelihood estimation by maximizing the likelihood function
$$ \DeclareMathOperator*{\argmax}{arg\,max} \argmax_{\theta} f_\theta( X ) $$
Other approach to the problem is to include some prior information into the process of estimating the parameter and use a Bayesian approach. This is done by using Bayes theorem, but in a different way. First, we assume some probability distribution for $\theta$, let's call it $g$, and then assume a priori that the unknown parameter follows this distribution. We use Bayes theorem to combine the two sources of information: our a priori assumptions about $\theta$, that is our prior $g$; and the information that is contained in the data, that is likelihood function $f_\theta(X)$, so to obtain posterior estimate $g(\theta | X)$:
$$ g(\theta | X) \propto f_\theta(X) \, g(\theta) $$
If this still sounds complicated, you can start by going through multiple other questions tagged bayesian for lots of examples. There is also many good introductory books to start with, e.g. Bayesian Data Analysis by Andrew Gelman et al, or Doing Bayesian Data Analysis, Second Edition:
A Tutorial with R, JAGS, and Stan by John K. Kruschke. | What's the difference between prior and marginal probabilities?
$P(S=s)$ and $P(R=r)$ both are marginal probabilities from the following table
$$
\begin{array}{c|cc|c}
& R=0 & R=1 \\
\hline
S=0 & 0.20 & 0.08 & 0.28 \\
S=1 & 0.70 & 0.02 & 0.7 |
52,871 | What's the difference between prior and marginal probabilities? | If you think of a table with columns as possible parameter values and with rows as possible data values (see image below), then the lower marginal distribution is the prior distribution on the parameter. The observed data indicate which row of the table is the row we actually live in, so to speak, hence we conditionalize on that row to find p(theta|D), which is the posterior distribution. In other words, Bayes rule gets us from the marginal (i.e., the prior), to the row-conditional (i.e., the posterior).
I realize that's a pretty terse description, but I hope it helps some. For full details, please see Chapter 5 of DBDA2E. | What's the difference between prior and marginal probabilities? | If you think of a table with columns as possible parameter values and with rows as possible data values (see image below), then the lower marginal distribution is the prior distribution on the paramet | What's the difference between prior and marginal probabilities?
If you think of a table with columns as possible parameter values and with rows as possible data values (see image below), then the lower marginal distribution is the prior distribution on the parameter. The observed data indicate which row of the table is the row we actually live in, so to speak, hence we conditionalize on that row to find p(theta|D), which is the posterior distribution. In other words, Bayes rule gets us from the marginal (i.e., the prior), to the row-conditional (i.e., the posterior).
I realize that's a pretty terse description, but I hope it helps some. For full details, please see Chapter 5 of DBDA2E. | What's the difference between prior and marginal probabilities?
If you think of a table with columns as possible parameter values and with rows as possible data values (see image below), then the lower marginal distribution is the prior distribution on the paramet |
52,872 | Repeated measures ANOVA in R: Error(Subject) vs. Error(Subject/Day) | Depending on the contrasts you are using, the R command
aov(Temperature~Day+Error(Subject))
fits a model like
$$y_{ij} = \mu + \beta_j + b_i + \epsilon_{ij},$$
where $y_{ij}$ is the response value for the $i$th individual at the $j$th period (day), $\mu$ is the global mean, $\beta_j$ is the effect of the $j$th day, $b_i\sim N(0,\sigma_b^2)$ is the Gaussian random effect (random intercept) for the $i$th individual, and $\epsilon_{ij}\sim N(0,\sigma^2)$ is the Gaussian residual term. The unknown parameters are $(\mu, \beta_j,\sigma_b^2, \sigma^2)$.
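For comparison, the same random-intercept model can be written in the explicit mixed-model style of lme4 (a sketch, my addition; dat stands for a data frame with columns Temperature, Day and Subject, which is not given in the original answer):
library(lme4)
# random intercept b_i per Subject; Day enters as the fixed effect beta_j
m1 <- lmer(Temperature ~ Day + (1 | Subject), data = dat)
summary(m1)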
On the other hand, the command
aov(Temperature~Day+Error(Subject/Day))
fits the model
$$y_{ijk} = \mu + \beta_j + b_i + b_{ij} + \epsilon_{ijk},$$
where $b_{ij}\sim N(0, \sigma_1^2)$ is a Gaussian random individual-period interaction term. As you can see from the expression, to estimate also $\sigma_1^2$ you need to have replications for each $i$ and $j$, that's the reason for the third index $k$. | Repeated measures ANOVA in R: Error(Subject) vs. Error(Subject/Day) | Depending on the contrasts you are using, the R command
aov(Temperature~Day+Error(Subject))
fits a model like
$$y_{ij} = \mu + \beta_j + b_i + \epsilon_{ij},$$
where $y_{ij}$ is the response value f | Repeated measures ANOVA in R: Error(Subject) vs. Error(Subject/Day)
Depending on the contrasts you are using, the R command
aov(Temperature~Day+Error(Subject))
fits a model like
$$y_{ij} = \mu + \beta_j + b_i + \epsilon_{ij},$$
where $y_{ij}$ is the response value for the $i$th individual at the $j$th period (day), $\mu$ is global mean, $\beta_j$ is the effect of $j$th day, $b_i\sim N(0,\sigma_b^2)$ is the Gaussian random effect or random intercept for the $i$th individual and $\epsilon_{ij}\sim N(0,\sigma^2)$ is the Gaussian residual term. The unknown parameters are $(\mu, \beta_j,\sigma_b^2, \sigma^2)$.
On the other hand, the command
aov(Temperature~Day+Error(Subject/Day))
fits the model
$$y_{ijk} = \mu + \beta_j + b_i + b_{ij} + \epsilon_{ijk},$$
where $b_{ij}\sim N(0, \sigma_1^2)$ is a Gaussian random individual-period interaction term. As you can see from the expression, to estimate also $\sigma_1^2$ you need to have replications for each $i$ and $j$, that's the reason for the third index $k$. | Repeated measures ANOVA in R: Error(Subject) vs. Error(Subject/Day)
Depending on the contrasts you are using, the R command
aov(Temperature~Day+Error(Subject))
fits a model like
$$y_{ij} = \mu + \beta_j + b_i + \epsilon_{ij},$$
where $y_{ij}$ is the response value f |
52,873 | What is this math symbol used in a backpropagation tutorial: $\circ$ | The symbol $\circ$ is often used to denote element-wise multiplication (a.k.a. Hadamard product, Schur product, entrywise product, component-wise multiplication); $\odot$ and $*$ are common alternatives. | What is this math symbol used in a backpropagation tutorial: $\circ$ | The symbol $\circ$ is often used to denote element-wise multiplication (a.k.a. Hadamard product, Schur product, entrywise product, component-wise multiplication); $\odot$ and $*$ are common alternativ | What is this math symbol used in a backpropagation tutorial: $\circ$
The symbol $\circ$ is often used to denote element-wise multiplication (a.k.a. Hadamard product, Schur product, entrywise product, component-wise multiplication); $\odot$ and $*$ are common alternatives. | What is this math symbol used in a backpropagation tutorial: $\circ$
The symbol $\circ$ is often used to denote element-wise multiplication (a.k.a. Hadamard product, Schur product, entrywise product, component-wise multiplication); $\odot$ and $*$ are common alternativ |
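A quick illustration of the operation itself (my addition, in R since that is what most of this document uses): the element-wise product multiplies matching entries, unlike the ordinary matrix product.
A <- matrix(1:4, nrow = 2)
B <- matrix(5:8, nrow = 2)
A * B     # element-wise (Hadamard) product
A %*% B   # ordinary matrix product, shown for contrast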
52,874 | Calculating standard deviation after log transformation | Several approaches:
(i) you can estimate mean and standard deviation on both the original and the log scale as needed, in the usual fashion. However, they may not necessarily be the most efficient way on the untransformed data (nor will the two sets of estimates necessarily be very consistent with each other)
(ii) via parametric assumption -- you say on the log scale $X=\log Y$ that the distribution is approximately normal. If you assume normality of the logs, you have consistent estimates of the $\mu$ and $\sigma^2$ parameters on the log scale (indeed the usual estimates are maximum likelihood), which parameters are also the $\mu$ and $\sigma^2$ parameters of the lognormal you started with (but these are not the mean and variance of the lognormal).
You can derive the mean and variance of the lognormal in terms of those parameters easily enough, but I'll just give them:
$E(Y) = e^{\mu+\frac12 \sigma^2}$
$\text{Var}(Y) = E(Y)^2\, (e^{\sigma^2}-1)$
You obtain the standard deviation by taking the square root.
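A small R sketch of approach (ii) (my addition, with simulated data standing in for the real measurements):
set.seed(42)
y <- rlnorm(200, meanlog = 1, sdlog = 0.5)      # simulated positive data
mu_hat     <- mean(log(y))                      # estimate of mu on the log scale
sigma2_hat <- mean((log(y) - mu_hat)^2)         # ML estimate of sigma^2
EY_hat  <- exp(mu_hat + sigma2_hat / 2)         # estimated E(Y)
SDY_hat <- EY_hat * sqrt(exp(sigma2_hat) - 1)   # estimated SD(Y)
c(EY_hat, SDY_hat, mean(y), sd(y))              # compare with the plain sample moments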
As is usually the case with MLE, these estimates are not unbiased (though they're still consistent of course). If you're especially concerned about unbiasedness, you may want some correction for small samples -- though I work with lognormal models a lot and generally just stick with MLE for the main quantities of interest.
So if you use those equations to obtain the estimates of the mean and variance of the original variable you're getting ML estimates of the parameters.
(iii) you can avoid the assumption and use Taylor expansion to get approximate moments in one direction or the other (though if you have near normality in the logs, it makes more sense to do the estimation on the log scale and transform back. | Calculating standard deviation after log transformation | Several approaches:
(i) you can estimate mean and standard deviation on both the original and the log scale as needed, in the usual fashion. However, they may not necessarily be the most efficient way | Calculating standard deviation after log transformation
Several approaches:
(i) you can estimate mean and standard deviation on both the original and the log scale as needed, in the usual fashion. However, they may not necessarily be the most efficient way on the untransformed data (nor will the two sets of estimates necessarily be very consistent with each other)
(ii) via parametric assumption -- you say on the log scale $X=\log Y$ that the distribution is approximately normal. If you assume normality of the logs, you have consistent estimates of the $\mu$ and $\sigma^2$ parameters on the log scale (indeed the usual estimates are maximum likelihood), which parameters are also the $\mu$ and $\sigma^2$ parameters of the lognormal you started with (but these are not the mean and variance of the lognormal).
You can derive the mean and variance of the lognormal in terms of those parameters easily enough, but I'll just give them:
$E(Y) = e^{\mu+\frac12 \sigma^2}$
$\text{Var}(Y) = E(Y)^2\, (e^{\sigma^2}-1)$
You obtain the standard deviation by taking the square root.
As is usually the case with MLE, these estimates are not unbiased (though they're still consistent of course). If you're especially concerned about unbiasedness, you may want some correction for small samples -- though I work with lognormal models a lot and generally just stick with MLE for the main quantities of interest.
So if you use those equations to obtain the estimates of the mean and variance of the original variable you're getting ML estimates of the parameters.
(iii) you can avoid the assumption and use Taylor expansion to get approximate moments in one direction or the other (though if you have near normality in the logs, it makes more sense to do the estimation on the log scale and transform back. | Calculating standard deviation after log transformation
Several approaches:
(i) you can estimate mean and standard deviation on both the original and the log scale as needed, in the usual fashion. However, they may not necessarily be the most efficient way |
52,875 | Interaction between a predictor and its quadratic form? | The interaction term between x and $x^2$ is $x^3$ . So you are just creating a cubic polynomial regression rather than a quadratic polynomial regression. In general one can create degree n polynomial regressions. Will adding the cubic term, or any other degree term, increase the predictive power of the model ? That is a question that CV or train/test will answer. | Interaction between a predictor and its quadratic form? | The interaction term between x and $x^2$ is $x^3$ . So you are just creating a cubic polynomial regression rather than a quadratic polynomial regression. In general one can create degree n polynomial | Interaction between a predictor and its quadratic form?
The interaction term between x and $x^2$ is $x^3$ . So you are just creating a cubic polynomial regression rather than a quadratic polynomial regression. In general one can create degree n polynomial regressions. Will adding the cubic term, or any other degree term, increase the predictive power of the model ? That is a question that CV or train/test will answer. | Interaction between a predictor and its quadratic form?
The interaction term between x and $x^2$ is $x^3$ . So you are just creating a cubic polynomial regression rather than a quadratic polynomial regression. In general one can create degree n polynomial |
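This is easy to verify in R (my own sketch): the model-matrix column that lm would build for the x-by-x^2 interaction is identical to x^3.
set.seed(1)
x  <- rnorm(50)
mm <- model.matrix(~ x * I(x^2))
all.equal(unname(mm[, "x:I(x^2)"]), x^3)   # TRUE: the 'interaction' column is just the cubic term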
52,876 | What is the machine learning technique used by IBM Watson? | Here is a great article by Robert L. Blum: http://www.bobblum.com/ESSAYS/COMPSCI/Watson.html
Among other things it contains useful pointers:
The best high level article on Watson was written by Ferrucci's IBM team (mirror: http://www.aaai.org/ojs/index.php/aimagazine/article/view/2303), and appeared in AI Magazine in 2010. They describe the evolution of their DeepQA architecture from earlier QA systems including IBM's Piquant and NIST's Aquaint. I also recommend KurzweilAI's Amara Angelica's interview with IBM's Eric Brown.
The best tv show on Watson was this PBS NOVA, Smartest Machine on Earth. These brief videos about Watson on IBM's website are also excellent. And, if you missed the Jeopardy shows, you'll want to see this clip of the champs in a preliminary bout.
Also, from https://ai.stackexchange.com/questions/17/what-things-went-into-ibms-watson-supercomputer
Author: iliasfl
Licensed under cc by-sa 3.0 with attribution required
There was a special issue (http://ieeexplore.ieee.org/xpl/tocresult.jsp?isnumber=6177717) that discussed a lot of the internals of Watson. I will try to give a brief overview.
In general the overall architecture was a ensemble of independent "experts" that all were trying to answer the same question in parallel. Each expert was trained on different datasets and/or was based on different algorithms. The overall decision was made by a combination of the responses of the independent experts, like regression where inputs are the experts' responses. This allowed a structured and incremental development of the machinery: researchers proposed new experts, these were added to the system, if overall accuracy was increased they were kept otherwise they were discarded. Some NLP preprocessing was taking place (correct me if I'm wrong) to make questions digestible by the experts. | What is the machine learning technique used by IBM Watson? | Here is a great article by Robert L. Blum: http://www.bobblum.com/ESSAYS/COMPSCI/Watson.html
Among other things it contains useful pointers:
The best high level article on Watson was written by Ferru | What is the machine learning technique used by IBM Watson?
Here is a great article by Robert L. Blum: http://www.bobblum.com/ESSAYS/COMPSCI/Watson.html
Among other things it contains useful pointers:
The best high level article on Watson was written by Ferrucci's IBM team (mirror: http://www.aaai.org/ojs/index.php/aimagazine/article/view/2303), and appeared in AI Magazine in 2010. They describe the evolution of their DeepQA architecture from earlier QA systems including IBM's Piquant and NIST's Aquaint. I also recommend KurzweilAI's Amara Angelica's interview with IBM's Eric Brown.
The best tv show on Watson was this PBS NOVA, Smartest Machine on Earth. These brief videos about Watson on IBM's website are also excellent. And, if you missed the Jeopardy shows, you'll want to see this clip of the champs in a preliminary bout.
Also, from https://ai.stackexchange.com/questions/17/what-things-went-into-ibms-watson-supercomputer
Author: iliasfl
Licensed under cc by-sa 3.0 with attribution required
There was a special issue (http://ieeexplore.ieee.org/xpl/tocresult.jsp?isnumber=6177717) that discussed lot of internals of Watson. I will try to give a brief overview.
In general the overall architecture was a ensemble of independent "experts" that all were trying to answer the same question in parallel. Each expert was trained on different datasets and/or was based on different algorithms. The overall decision was made by a combination of the responses of the independent experts, like regression where inputs are the experts' responses. This allowed a structured and incremental development of the machinery: researchers proposed new experts, these were added to the system, if overall accuracy was increased they were kept otherwise they were discarded. Some NLP preprocessing was taking place (correct me if I'm wrong) to make questions digestible by the experts. | What is the machine learning technique used by IBM Watson?
Here is a great article by Robert L. Blum: http://www.bobblum.com/ESSAYS/COMPSCI/Watson.html
Among other things it contains useful pointers:
The best high level article on Watson was written by Ferru |
52,877 | What is the machine learning technique used by IBM Watson? | In this Quora question, an IBM researcher answers that the statistical modeling done by Watson is mostly logistic regression, although this is built into a very complex hierarchy. They also state that the statistical modeling is not necessarily the most challenging part of Watson.
I have also independently heard the same thing reported by another Watson researcher. | What is the machine learning technique used by IBM Watson? | In this Quora question, an IBM researcher answers that the statistical modeling done by Watson is mostly logistic regression, although this is built into a very complex hierarchy. They also state that | What is the machine learning technique used by IBM Watson?
In this Quora question, an IBM researcher answers that the statistical modeling done by Watson is mostly logistic regression, although this is built into a very complex hierarchy. They also state that the statistical modeling is not necessarily the most challenging part of Watson.
I have also independently heard the same thing reported by another Watson researcher. | What is the machine learning technique used by IBM Watson?
In this Quora question, an IBM researcher answers that the statistical modeling done by Watson is mostly logistic regression, although this is built into a very complex hierarchy. They also state that |
52,878 | Chi-squared test: Investigating fruit flies attraction to different colours | You shouldn't be using a 'one-way' or 'goodness-of-fit' chi-squared test here six times over. You should be using a chi-squared test of independence on a two-way contingency table. In addition, as @DJohnson notes below, you need to use the actual counts observed, not average counts (I'm not sure I understand how you say you got $6.67$ flies in the bottom layer, for example.) That is, you need to set up a contingency table like this:
            Layer
Color    bottom  middle  top  sum
red           7       3   10   20
green         #       #    #   20
blue          #       #    #   20
orange        #       #    #   20
purple        #       #    #   20
yellow        #       #    #   20
Then run your chi-squared test. The degrees of freedom for the chi-squared test are $(r-1)(c-1)$ (i.e., the number of rows minus 1 times the number of columns minus 1). In your case that would be $5\times 2 = 10$.
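In R the test itself is then a one-liner (a sketch; only the red row comes from the question, the other counts below are made-up placeholders that also sum to 20 per colour):
counts <- matrix(c(7, 3, 10,    # red (from the question)
                   5, 6,  9,    # hypothetical counts for the remaining colours
                   4, 8,  8,
                   6, 5,  9,
                   3, 9,  8,
                   5, 7,  8),
                 nrow = 6, byrow = TRUE,
                 dimnames = list(color = c("red", "green", "blue", "orange", "purple", "yellow"),
                                 layer = c("bottom", "middle", "top")))
chisq.test(counts)   # test of independence on the 6 x 3 table, df = (6-1)*(3-1) = 10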
Update: If you have three repeated versions of this experiment, you have (in some sense) three two-way contingency tables, or (more correctly) a three-way contingency table. You want to test if there is a difference amongst the rows with the iterations taken into account. The general way to analyze multi-way contingency tables is to use the log-linear model (which is actually a dressed-up Poisson GLiM). I describe this in more detail here: $\chi^2$ of multidimensional data. Below, I create two fake datasets using R, one I call ".n" (for 'null', because there isn't a relationship between the color and the layer), and the other I call ".a" (for 'alternative', because the relationship you are interested in does exist).
dft = expand.grid(layer=c("bottom","middle","top"),
color=c("blue", "green", "orange", "red", "white", "yellow"),
Repeat=1:3)
dft = dft[,3:1]
dft.n = data.frame(dft, count=c(rep(c( 3,6,11), times=6),
rep(c( 6,7, 7), times=6),
rep(c(11,6, 3), times=6)))
dft.a = data.frame(dft,
count=c(c(3,6,11), c(11,6, 3), c(11,6, 3), c(3,6,11), c(3,6,11), c(11,6, 3),
c(3,6,11), c(11,6, 3), c(11,6, 3), c(3,6,11), c(3,6,11), c(11,6, 3),
c(3,6,11), c(11,6, 3), c(11,6, 3), c(3,6,11), c(3,6,11), c(11,6, 3) ))
tab.n = xtabs(count~color+layer+Repeat, dft.n)
# , , Repeat = 1
# layer
# color bottom middle top
# blue 3 6 11
# green 3 6 11
# orange 3 6 11
# red 3 6 11
# white 3 6 11
# yellow 3 6 11
#
# , , Repeat = 2
# layer
# color bottom middle top
# blue 6 7 7
# green 6 7 7
# orange 6 7 7
# red 6 7 7
# white 6 7 7
# yellow 6 7 7
#
# , , Repeat = 3
# layer
# color bottom middle top
# blue 11 6 3
# green 11 6 3
# orange 11 6 3
# red 11 6 3
# white 11 6 3
# yellow 11 6 3
tab.a = xtabs(count~color+layer+Repeat, dft.a)
# , , Repeat = 1
# layer
# color bottom middle top
# blue 3 6 11
# green 11 6 3
# orange 11 6 3
# red 3 6 11
# white 3 6 11
# yellow 11 6 3
#
# , , Repeat = 2
# layer
# color bottom middle top
# blue 3 6 11
# green 11 6 3
# orange 11 6 3
# red 3 6 11
# white 3 6 11
# yellow 11 6 3
#
# , , Repeat = 3
# layer
# color bottom middle top
# blue 3 6 11
# green 11 6 3
# orange 11 6 3
# red 3 6 11
# white 3 6 11
# yellow 11 6 3
I run a quickie log-linear analysis on both. The models are listed from 0, which is the 'saturated' model, through 2, which has dropped terms. Note that in R it is typical to list models in order from smallest to largest, but the result of the anova() call refers to the nested model as "Model 1", which makes the names not correspond well; try not to be thrown off by this. For the null dataset, Model 2 differs from Model 1 (i.e., m.1.n differs from m.2.n), meaning that the layers are not independent of the Repeats. On the other hand, Model 3 does not differ from Model 2 (i.e., m.0.n differs from m.1.n), meaning that the layer*Repeat pattern does not differ by color. In addition, Model 3 does not differ from the Saturated model (because it is the saturated model).
library(MASS)
m.0.n = loglm(~color*layer*Repeat, tab.n)
m.1.n = loglm(~color+layer*Repeat, tab.n)
m.2.n = loglm(~color+layer+Repeat, tab.n)
anova(m.2.n, m.1.n, m.0.n)
# LR tests for hierarchical log-linear models
#
# Model 1:
# ~color + layer + Repeat
# Model 2:
# ~color + layer * Repeat
# Model 3:
# ~color * layer * Repeat
#
# Deviance df Delta(Dev) Delta(df) P(> Delta(Dev)
# Model 1 59.55075 44
# Model 2 0.00000 40 59.55075 4 0
# Model 3 0.00000 0 0.00000 40 1
# Saturated 0.00000 0 0.00000 0 1
m.0.a = loglm(~color*layer*Repeat, tab.a)
m.1.a = loglm(~color+layer*Repeat, tab.a)
m.2.a = loglm(~color+layer+Repeat, tab.a)
anova(m.2.a, m.1.a, m.0.a)
# LR tests for hierarchical log-linear models
#
# Model 1:
# ~color + layer + Repeat
# Model 2:
# ~color + layer * Repeat
# Model 3:
# ~color * layer * Repeat
#
# Deviance df Delta(Dev) Delta(df) P(> Delta(Dev)
# Model 1 87.47794 44
# Model 2 87.47794 40 0.00000 4 1e+00
# Model 3 0.00000 0 87.47794 40 2e-05
# Saturated 0.00000 0 0.00000 0 1e+00
For the alternative dataset, Model 2 does not differ from Model 1 (i.e., m.1.a differs from m.2.a), meaning that the layers are independent of the Repeats. On the other hand, Model 3 does differ from Model 2 (i.e., m.0.a differs from m.1.a), meaning that the layer*Repeat pattern does differ by color. (And again, Model 3 is the Saturated model.) | Chi-squared test: Investigating fruit flies attraction to different colours | You shouldn't be using a 'one-way' or 'goodness-of-fit' chi-squared test here six times over. You should be using a chi-squared test of independence on a two-way contingency table. In addition, as @ | Chi-squared test: Investigating fruit flies attraction to different colours
You shouldn't be using a 'one-way' or 'goodness-of-fit' chi-squared test here six times over. You should be using a chi-squared test of independence on a two-way contingency table. In addition, as @DJohnson notes below, you need to use the actual counts observed, not average counts (I'm not sure I understand how you say you got $6.67$ flies in the bottom layer, for example.) That is, you need to set up a contingency table like this:
Layer
Color bottom middle top sum
red 7 3 10 20
green # # # 20
blue # # # 20
orange # # # 20
purple # # # 20
yellow # # # 20
Then run your chi-squared test. The degrees of freedom for chi-squared test is $(r-1)(c-1)$ (i.e., the number of rows minus 1 times the number of columns minus 1). In your case that would be: $5\times 2 = 10$.
Update: If you have three repeated versions of this experiment, you have (in some sense) three two-way contingency tables, or (more correctly) a three-way contingency table. You want to test if there is a difference amongst the rows with the iterations taken into account. The general way to analyze mult-way contingency tables is to use the log linear model (which is actually a dressed-up Poisson GLiM). I describe this in more detail here: $\chi^2$ of multidimensional data. Below, I create two fake datasets using R, one I call ".n" (for 'null', because there isn't a relationship between the color and the layer), and the other I call ".a" (for 'alternative', because the relationship you are interested in does exist).
dft = expand.grid(layer=c("bottom","middle","top"),
color=c("blue", "green", "orange", "red", "white", "yellow"),
Repeat=1:3)
dft = dft[,3:1]
dft.n = data.frame(dft, count=c(rep(c( 3,6,11), times=6),
rep(c( 6,7, 7), times=6),
rep(c(11,6, 3), times=6)))
dft.a = data.frame(dft,
count=c(c(3,6,11), c(11,6, 3), c(11,6, 3), c(3,6,11), c(3,6,11), c(11,6, 3),
c(3,6,11), c(11,6, 3), c(11,6, 3), c(3,6,11), c(3,6,11), c(11,6, 3),
c(3,6,11), c(11,6, 3), c(11,6, 3), c(3,6,11), c(3,6,11), c(11,6, 3) ))
tab.n = xtabs(count~color+layer+Repeat, dft.n)
# , , Repeat = 1
# layer
# color bottom middle top
# blue 3 6 11
# green 3 6 11
# orange 3 6 11
# red 3 6 11
# white 3 6 11
# yellow 3 6 11
#
# , , Repeat = 2
# layer
# color bottom middle top
# blue 6 7 7
# green 6 7 7
# orange 6 7 7
# red 6 7 7
# white 6 7 7
# yellow 6 7 7
#
# , , Repeat = 3
# layer
# color bottom middle top
# blue 11 6 3
# green 11 6 3
# orange 11 6 3
# red 11 6 3
# white 11 6 3
# yellow 11 6 3
tab.a = xtabs(count~color+layer+Repeat, dft.a)
# , , Repeat = 1
# layer
# color bottom middle top
# blue 3 6 11
# green 11 6 3
# orange 11 6 3
# red 3 6 11
# white 3 6 11
# yellow 11 6 3
#
# , , Repeat = 2
# layer
# color bottom middle top
# blue 3 6 11
# green 11 6 3
# orange 11 6 3
# red 3 6 11
# white 3 6 11
# yellow 11 6 3
#
# , , Repeat = 3
# layer
# color bottom middle top
# blue 3 6 11
# green 11 6 3
# orange 11 6 3
# red 3 6 11
# white 3 6 11
# yellow 11 6 3
I run a quickie log-linear analysis on both. The models are listed from 0, which is the 'saturated' model, through 2, which has dropped terms. Note that in R it is typical to list models in order from smallest to largest, but the result of the anova() call refers to the nested model as "Model 1", which makes the names not correspond well; try not to be thrown off by this. For the null dataset, Model 2 differs from Model 1 (i.e., m.1.n differs from m.2.n), meaning that the layers are not independent of the Repeats. On the other hand, Model 3 does not differ from Model 2 (i.e., m.0.n differs from m.1.n), meaning that the layer*Repeat pattern does not differ by color. In addition, Model 3 does not differ from the Saturated model (because it is the saturated model).
library(MASS)
m.0.n = loglm(~color*layer*Repeat, tab.n)
m.1.n = loglm(~color+layer*Repeat, tab.n)
m.2.n = loglm(~color+layer+Repeat, tab.n)
anova(m.2.n, m.1.n, m.0.n)
# LR tests for hierarchical log-linear models
#
# Model 1:
# ~color + layer + Repeat
# Model 2:
# ~color + layer * Repeat
# Model 3:
# ~color * layer * Repeat
#
# Deviance df Delta(Dev) Delta(df) P(> Delta(Dev)
# Model 1 59.55075 44
# Model 2 0.00000 40 59.55075 4 0
# Model 3 0.00000 0 0.00000 40 1
# Saturated 0.00000 0 0.00000 0 1
m.0.a = loglm(~color*layer*Repeat, tab.a)
m.1.a = loglm(~color+layer*Repeat, tab.a)
m.2.a = loglm(~color+layer+Repeat, tab.a)
anova(m.2.a, m.1.a, m.0.a)
# LR tests for hierarchical log-linear models
#
# Model 1:
# ~color + layer + Repeat
# Model 2:
# ~color + layer * Repeat
# Model 3:
# ~color * layer * Repeat
#
# Deviance df Delta(Dev) Delta(df) P(> Delta(Dev)
# Model 1 87.47794 44
# Model 2 87.47794 40 0.00000 4 1e+00
# Model 3 0.00000 0 87.47794 40 2e-05
# Saturated 0.00000 0 0.00000 0 1e+00
For the alternative dataset, Model 2 does not differ from Model 1 (i.e., m.1.a differs from m.2.a), meaning that the layers are independent of the Repeats. On the other hand, Model 3 does differ from Model 2 (i.e., m.0.a differs from m.1.a), meaning that the layer*Repeat pattern does differ by color. (And again, Model 3 is the Saturated model.) | Chi-squared test: Investigating fruit flies attraction to different colours
You shouldn't be using a 'one-way' or 'goodness-of-fit' chi-squared test here six times over. You should be using a chi-squared test of independence on a two-way contingency table. In addition, as @ |
52,879 | Chi-squared test: Investigating fruit flies attraction to different colours | Gung's test is a test of independence in the two-way classification table. Additional tests are possible, as described in Wicken's book Multiway Contingency Tables Analysis for the Social Sciences, one of the last, great treatments of this topic before the advent of tensor models. As Wickens notes:
There are three different experimental procedures that generate
two-way tables of frequencies. These lead to different models for the
population of scores, although the actual tests are the same. The
three null hypotheses are referred to as hypotheses of homogeneity, of
independence and of unrelated classification...one way to distinguish
among the three descriptions is to look at the roles of the marginal
frequency distributions (p.22)
In the OP's case, a test of homogeneity seems most appropriate. It is one in which the characteristics of the population are embodied in the row conditional probabilities. Using Gung's table setup, the expected counts for each color by layer cell are conditioned on the "Sum" column:
The null hypothesis tested here is that the distributions of the
responses across the populations are the same...Thus, one speaks of
this as a test of the homogeneity of the populations. More
abstractly, the probabilistic structure underlying these data is a
pair of binomial distributions (p. 23)
This test differs from the test of independence in that a single, fixed marginal is being used -- as opposed to the combined row and column marginals in the test of independence.
The test for unrelatedness involves fixed row and column marginals and isn't appropriate for this data (i.e., all marginals would sum to 20). | Chi-squared test: Investigating fruit flies attraction to different colours | Gung's test is a test of independence in the two-way classification table. Additional tests are possible, as described in Wicken's book Multiway Contingency Tables Analysis for the Social Sciences, on | Chi-squared test: Investigating fruit flies attraction to different colours
Gung's test is a test of independence in the two-way classification table. Additional tests are possible, as described in Wicken's book Multiway Contingency Tables Analysis for the Social Sciences, one of the last, great treatments of this topic before the advent of tensor models. As Wickens notes:
There are three different experimental procedures that generate
two-way tables of frequencies. These lead to different models for the
population of scores, although the actual tests are the same. The
three null hypotheses are referred to as hypotheses of homogeneity, of
independence and of unrelated classification...one way to distinguish
among the three descriptions is to look at the roles of the marginal
frequency distributions (p.22)
In the OP's case, a test of homogeneity seems most appropriate. It is one in which the characteristics of the population are embodied in the row conditional probabilities. Using Gung's table setup, the expected counts for each color by layer cell are conditioned on the "Sum" column:
The null hypothesis tested here is that the distributions of the
responses across the populations are the same...Thus, one speaks of
this as a test of the homogeneity of the populations. More
abstractly, the probabilistic structure underlying these data is a
pair of binomial distributions (p. 23)
This test differs from the test of independence in that a single, fixed marginal is being used -- as opposed to the combined row and column marginals in the test of independence.
The test for unrelatedness involves fixed row and column marginals and isn't appropriate for this data (i.e., all marginals would sum to 20). | Chi-squared test: Investigating fruit flies attraction to different colours
Gung's test is a test of independence in the two-way classification table. Additional tests are possible, as described in Wicken's book Multiway Contingency Tables Analysis for the Social Sciences, on |
52,880 | ReLUs and Gradient Descent for Deep Neural Nets | In practice, it's unlikely that one hidden unit has an input of precisely 0, so it doesn't matter much whether you take 0 or 1 for gradient in that situation. E.g. Theano considers that the gradient at 0 is 0. Tensorflow's playground does the same:
public static RELU: ActivationFunction = {
output: x => Math.max(0, x),
der: x => x <= 0 ? 0 : 1
};
(1) did notice the theoretical issue of non-differentiability:
This paper shows that rectifying neurons are an even better model of biological neurons and yield equal or better performance than hyperbolic tangent networks in spite of the hard non-linearity and non-differentiability at zero, creating sparse representations with true zeros, which seem remarkably suitable for naturally sparse data.
but it works anyway.
As a side note, if you use ReLU, you should watch for dead units in the network (= units that never activate). If you see too many dead units as you train your network, you might want to consider switching to leaky ReLU.
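For reference, leaky ReLU just replaces the flat zero part with a small slope so that units can keep learning; a sketch in R (my addition; alpha = 0.01 is a common but arbitrary choice):
leaky_relu   <- function(x, alpha = 0.01) ifelse(x > 0, x, alpha * x)   # small slope instead of 0
d_leaky_relu <- function(x, alpha = 0.01) ifelse(x > 0, 1, alpha)       # gradient is never exactly 0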
(1) Glorot, Xavier, Antoine Bordes, and Yoshua Bengio. "Deep Sparse Rectifier Neural Networks." In Aistats, vol. 15, no. 106, p. 275. 2011. | ReLUs and Gradient Descent for Deep Neural Nets | In practice, it's unlikely that one hidden unit has an input of precisely 0, so it doesn't matter much whether you take 0 or 1 for gradient in that situation. E.g. Theano considers that the gradient a | ReLUs and Gradient Descent for Deep Neural Nets
In practice, it's unlikely that one hidden unit has an input of precisely 0, so it doesn't matter much whether you take 0 or 1 for gradient in that situation. E.g. Theano considers that the gradient at 0 is 0. Tensorflow's playground does the same:
public static RELU: ActivationFunction = {
output: x => Math.max(0, x),
der: x => x <= 0 ? 0 : 1
};
(1) did notice the theoretical issue of non-differentiability:
This paper
shows that rectifying neurons are an
even better model of biological neurons and
yield equal or better performance than hyperbolic
tangent networks in spite of the
hard non-linearity and non-differentiability
at zero, creating sparse representations with
true zeros, which seem remarkably suitable
for naturally sparse data.
but it works anyway.
As a side note, if you use ReLU, you should watch for dead units in the network (= units that never activate). If you see to many dead units as you train your network, you might want to consider switching to leaky ReLU.
(1) Glorot, Xavier, Antoine Bordes, and Yoshua Bengio. "Deep Sparse Rectifier Neural Networks." In Aistats, vol. 15, no. 106, p. 275. 2011. | ReLUs and Gradient Descent for Deep Neural Nets
In practice, it's unlikely that one hidden unit has an input of precisely 0, so it doesn't matter much whether you take 0 or 1 for gradient in that situation. E.g. Theano considers that the gradient a |
52,881 | How do I interpret these results from a paired t-test? | No, we don't "accept $H_o$", instead we fail to reject the null hypothesis. So we are not "rejecting the null", implying that we don't have enough evidence to assume that the difference in means is different from zero.
The confidence interval includes the value zero, and gives us the ranges of values within which the mean difference in the population would lie with a confidence level of $95\%$. In your results, this is a very broad range, but the inclusion of the value zero prevents you from excluding the null. | How do I interpret these results from a paired t-test? | No, we don't "accept $H_o$", instead we fail to reject the null hypothesis. So we are not "rejecting the null", implying that we don't have enough evidence to assume that the difference in means is di | How do I interpret these results from a paired t-test?
No, we don't "accept $H_o$", instead we fail to reject the null hypothesis. So we are not "rejecting the null", implying that we don't have enough evidence to assume that the difference in means is different from zero.
The confidence interval includes the value zero, and gives us the ranges of values within which the mean difference in the population would lie with a confidence level of $95\%$. In your results, this is a very broad range, but the inclusion of the value zero prevents you from excluding the null. | How do I interpret these results from a paired t-test?
No, we don't "accept $H_o$", instead we fail to reject the null hypothesis. So we are not "rejecting the null", implying that we don't have enough evidence to assume that the difference in means is di |
52,882 | How do I interpret these results from a paired t-test? | The alternative hypothesis that you proposed is one-sided but you have used a two-sided t-test. Had you set alt="greater" in the t-test to get the one-sided result you would have had p-value = 0.082.
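For reference, the one-sided test referred to above is obtained in R through the alternative argument (a sketch; before and after are placeholders for the paired measurements, which are not shown here):
t.test(after, before, paired = TRUE, alternative = "greater")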
Do not dichotomise the result into significant and not significant on the basis of comparison of your observed p-value and the unthinkingly arbitrary threshold of 0.05. To do so is to use the "bright-line" thinking that is warned against in the recent American Statistical Association's official statement on p-values. http://amstat.tandfonline.com/doi/abs/10.1080/00031305.2016.1154108#.VyUnhmR95cw
Instead of dichotomising, look at the result and weigh it in light of what you know and what you want the data to tell you. Your data appear to contain relatively weak evidence against your null hypothesis.
There are many references in the ASA statement that will help you with the distinction between the dichotomous result of a hypothesis test and an evidence-respecting interpretation of a significance test, but I like my own: http://www.ncbi.nlm.nih.gov/pubmed/22394284 | How do I interpret these results from a paired t-test? | The alternative hypothesis that you proposed is one-sided but you have used a two-sided t-test. Had you set alt="greater" in the t-test to get the one-sided result you would have had p-value = 0.082.
| How do I interpret these results from a paired t-test?
The alternative hypothesis that you proposed is one-sided but you have used a two-sided t-test. Had you set alt="greater" in the t-test to get the one-sided result you would have had p-value = 0.082.
Do not dichotomise the result into significant and not significant on the basis of comparison of your observed p-value and the unthinkingly arbitrary threshold of 0.05. To do so is to use the "bright-line" thinking that is warned against in the recent American Statistical Association's official statement on p-values. http://amstat.tandfonline.com/doi/abs/10.1080/00031305.2016.1154108#.VyUnhmR95cw
Instead of dichotomising, look at the result and weigh it in light of what you know and what you want the data to tell you. Your data appear to contain relatively weak evidence against your null hypothesis.
There are many references in the ASA statement that will help you with the distinction between the dichotomous result of a hypothesis test and an evidence-respecting interpretation of a significance test, but I like my own: http://www.ncbi.nlm.nih.gov/pubmed/22394284 | How do I interpret these results from a paired t-test?
The alternative hypothesis that you proposed is one-sided but you have used a two-sided t-test. Had you set alt="greater" in the t-test to get the one-sided result you would have had p-value = 0.082.
|
52,883 | Zero conditional mean assumption (how can in not hold?) | In more technical parlance, I believe you're asking whether the strict exogeneity assumption is ever violated, where the strict exogeneity assumption is...
$$E(\epsilon|X)=0$$
In practice this happens all the time. As a matter of fact the majority of the field of econometrics is focused on the failure of this assumption. When does this happen...
Let's assume that $\epsilon \sim N(0,1)$, so $E(\epsilon) = 0$. We know that if $\epsilon$ and $X$ are independent then $E(\epsilon|X) = E(\epsilon) = 0$. However, what if $X$ and $\epsilon$ are correlated, so that $Cov(X,\epsilon) = E(X'\epsilon) - E(X)E(\epsilon) = E(X'\epsilon) \neq 0$? This implies that $E(\epsilon|X) \neq 0$, since $E(\epsilon|X) = 0$ would force $E(X'\epsilon) = 0$.
Clearly the strict exogeneity assumption fails if $X$ and $\epsilon$ are correlated. The question is, does this ever happen? The answer is yes. As a matter of fact, outside of experimental settings, it happens more often than not. The most common example is omitted variable bias. Matthew Gunn's post discusses this. Another pedagogical example is as follows: imagine you run a regression of ice cream sales over time on the number of people wearing shorts over time. You will likely get a very large and significant parameter estimate. However, you're not going to go running to Haagen Daz executives telling them they should start running advertisements for summer wear. It is obvious that there is a missing variable, temperature. This is a violation of the strict exogeneity assumption because the number of people wearing shorts ($X$) is correlated with our omitted variable, temperature, which is contained in the error term ($\epsilon$).
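A quick simulation makes the point (my own sketch with made-up numbers, not part of the original answer):
set.seed(123)
n      <- 1000
temp   <- rnorm(n, 25, 5)            # the omitted variable
shorts <- 2 * temp + rnorm(n)        # correlated with temperature
sales  <- 10 + 3 * temp + rnorm(n)   # sales depend on temperature, not on shorts
coef(lm(sales ~ shorts))             # shorts picks up a large spurious effect (about 1.5)
coef(lm(sales ~ shorts + temp))      # with temperature included, the shorts coefficient is near 0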
Notice that the parameter estimate in our simple ice-cream-sales-on-number-of-shorts model is biased. Once we include temperature in the model, the number-of-shorts parameter will change. More formally:
$$\hat \beta = (X'X)^{-1}X'Y = \beta + (X'X)^{-1}X'\epsilon$$
If $X$ and $\epsilon$ are correlated then ...
$$E(\hat \beta) = \beta + (X'X)^{-1}E(X' \epsilon)$$
So the bias is $(X'X)^{-1}E(X' \epsilon)$ which vanishes if the $E(X' \epsilon)=0$ | Zero conditional mean assumption (how can in not hold?) | In a more technical parlance, I believe your asking, is the strict exogeneity assumption ever violated. Where the strict exogeneity assumption is...
$$E(\epsilon|X)=0$$
In practice this happens all t | Zero conditional mean assumption (how can in not hold?)
In a more technical parlance, I believe your asking, is the strict exogeneity assumption ever violated. Where the strict exogeneity assumption is...
$$E(\epsilon|X)=0$$
In practice this happens all the time. As a matter of fact the majority of the field of econometrics is focused on the failure of this assumption. When does this happen...
Let's assume that $\epsilon \sim N(0,1)$, so $E(\epsilon) = 0$. We know that if $\epsilon$ and $X$ are independent then $E(\epsilon|X) = E(\epsilon) = 0 $. However, what if $X$ and $\epsilon$ are correlated such that $Cov(X,\epsilon) = E(X'\epsilon) - E(X)E(\epsilon) = E(X'\epsilon) \neq E(\epsilon) = 0$. This implies that $E(\epsilon|X) \neq 0 $
Clearly the strict exogeneity assumption fails if $X$ and $\epsilon$ are correlated. The question is, does this ever happen? The answer is yes. As a matter of fact, outside of experimental settings, it happens more often then not. The most common example is omitted variable bias. Matthew Gunn's post discusses this. Another pedagogical example is as follows, imagine you run a regression of ice cream sales over time on the number of people wearing shorts over time. You will likely get a very large and significant parameter estimate. However you're not going to go running to Haagen Daz executives telling them they should start running advertisements for summer wear. It is obvious that there is a missing variable, temperature. This is a violation of the strict exogeneity assumption because number of people wearing shorts ($X$) is correlated with our omitted variable temperature which is contained in the error term ($\epsilon$)
Notice that the parameter estimate in our simple ice cream sales on number of shorts model is biased. Once we include the temperature in the model the, the number of shorts parameter will change. More formally:
$$\hat \beta = (X'X)^{-1}X'Y = \beta + (X'X)^{-1}X'\epsilon$$
If $X$ and $\epsilon$ are correlated then ...
$$E(\hat \beta) = \beta + (X'X)^{-1}E(X' \epsilon)$$
So the bias is $(X'X)^{-1}E(X' \epsilon)$ which vanishes if the $E(X' \epsilon)=0$ | Zero conditional mean assumption (how can in not hold?)
In a more technical parlance, I believe your asking, is the strict exogeneity assumption ever violated. Where the strict exogeneity assumption is...
$$E(\epsilon|X)=0$$
In practice this happens all t |
52,884 | Zero conditional mean assumption (how can in not hold?) | In American football, the total score is given by:
Total football score = 6 * (Touchdowns) + 1 * (ExtraPoints) + 2 * (TwoPointConversions) + 2 * (safeties) + 3 * field goals.
But if you ran the regression:
TotalFootBallScore = b1 * touchdowns + b2 * fieldgoals + e
You wouldn't estimate a value of 6 for b1. Regress the total football score on number of touchdowns and field goals, and you would almost certainly estimate that touchdowns are worth more 7 or more points rather than 6. Your error term e in this case contains the points scored from extra points and two points conversions, and those are almost certainly not zero conditional on knowing the number of touchdowns. | Zero conditional mean assumption (how can in not hold?) | In American football, the total score is given by:
Total football score = 6 * (Touchdowns) + 1 * (ExtraPoints) + 2 * (TwoPointConversions) + 2 * (safeties) + 3 * field goals.
But if you ran the regres | Zero conditional mean assumption (how can in not hold?)
In American football, the total score is given by:
Total football score = 6 * (Touchdowns) + 1 * (ExtraPoints) + 2 * (TwoPointConversions) + 2 * (safeties) + 3 * field goals.
But if you ran the regression:
TotalFootBallScore = b1 * touchdowns + b2 * fieldgoals + e
You wouldn't estimate a value of 6 for b1. Regress the total football score on number of touchdowns and field goals, and you would almost certainly estimate that touchdowns are worth more 7 or more points rather than 6. Your error term e in this case contains the points scored from extra points and two points conversions, and those are almost certainly not zero conditional on knowing the number of touchdowns. | Zero conditional mean assumption (how can in not hold?)
In American football, the total score is given by:
Total football score = 6 * (Touchdowns) + 1 * (ExtraPoints) + 2 * (TwoPointConversions) + 2 * (safeties) + 3 * field goals.
But if you ran the regres |
52,885 | Alternatives to multilevel model with log transformed outcome | This is a great use case for the inverse hyperbolic sine transformation [1], [2]:
$$
\log\left( y + \sqrt{y^2 + 1} \right)
$$
Except for very small values of $y$, the inverse hyperbolic sine is approximately equal to $\log(2y_i)$ or $\log(2)+\log(y_i)$, and so it can be interpreted in exactly the same way as a standard logarithmic dependent variable. ... But unlike a log variable, the inverse hyperbolic sine is defined at zero.
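In R the transformation is available directly as asinh() (my addition):
y <- c(0, 1, 10, 100, 1000)
cbind(asinh(y), log(y + sqrt(y^2 + 1)), log(2 * y))
# asinh() matches the formula above; it is close to log(2y) except near zero,
# and unlike log() it is finite at y = 0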
http://chrisblattman.com/2011/11/15/if-you-know-what-ln1income-is-and-why-its-a-headache-you-should-read-this-post
http://worthwhile.typepad.com/worthwhile_canadian_initi/2011/07/a-rant-on-inverse-hyperbolic-sine-transformations.html | Alternatives to multilevel model with log transformed outcome | This is a great use case for the inverse hyperbolic sine transformation [1], [2]:
$$
\log\left( y + \sqrt{y^2 + 1} \right)
$$
Except for very small values of y, the inverse sine is approximately equa | Alternatives to multilevel model with log transformed outcome
This is a great use case for the inverse hyperbolic sine transformation [1], [2]:
$$
\log\left( y + \sqrt{y^2 + 1} \right)
$$
Except for very small values of y, the inverse sine is approximately equal to log(2yi) or log(2)+log(yi), and so it can be interpreted in exactly the same way as a standard logarithmic dependent variable. ... But unlike a log variable, the inverse hyperbolic sine is defined at zero.
http://chrisblattman.com/2011/11/15/if-you-know-what-ln1income-is-and-why-its-a-headache-you-should-read-this-post
http://worthwhile.typepad.com/worthwhile_canadian_initi/2011/07/a-rant-on-inverse-hyperbolic-sine-transformations.html | Alternatives to multilevel model with log transformed outcome
This is a great use case for the inverse hyperbolic sine transformation [1], [2]:
$$
\log\left( y + \sqrt{y^2 + 1} \right)
$$
Except for very small values of y, the inverse sine is approximately equa |
52,886 | Alternatives to multilevel model with log transformed outcome | It is not uncommon to offset a variable by a constant prior to taking its log when it cannot be log-transformed directly (for example because some observations are 0 and would map to $-\infty$).
For example, you could offset all of the observations of the variable by 1 and therefore have a log-transformed variable that is a non-negative real number. All the observations that are 0 in your original dataset will become $\log(0 + 1) = 0$.
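In R this is built in as log1p() (my addition):
y <- c(0, 0.5, 3, 10)
log1p(y)   # computes log(y + 1); log1p(0) is exactly 0, and it is accurate for small y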
You can find a thorough description of this issue in this related question. | Alternatives to multilevel model with log transformed outcome | It is not uncommon to offset a variable by a constant prior to taking its log when it cannot be log-transformed directly (for example because some observations are 0 and would map to $-\infty$).
For | Alternatives to multilevel model with log transformed outcome
It is not uncommon to offset a variable by a constant prior to taking its log when it cannot be log-transformed directly (for example because some observations are 0 and would map to $-\infty$).
For example, you could offset all of the observations for the variable by 1 and therefore have a log-transformed variable that is a positive reel. All the observation that are 0 in your original dataset will become $log(0 + 1) = 0$
You can find a thorough description of this issue in this related question. | Alternatives to multilevel model with log transformed outcome
It is not uncommon to offset a variable by a constant prior to taking its log when it cannot be log-transformed directly (for example because some observations are 0 and would map to $-\infty$).
For |
52,887 | Unit root tests ambiguous - is time series stationary? | I am not surprised by these results. I got them very often. The KPSS test for some reason is very sensitive, if not overly so, as it rejects stationarity for the vast majority of variables. In other words, it diagnoses almost everything as non-stationary. Because of that, I have stopped using the KPSS test for stationarity diagnostics, and I rely on the other tests, which seem fairer and more accurate on this issue. The two other tests you use (PP and ADF) generate far more reliable results on this count. Also, visually you can tell that your variable appears pretty stationary.
I am revising my answer from 2017. The KPSS test, after all, does not reject stationarity as often as I thought it did. If you run the test in R, it almost always gives you a p-value of 0.1, which can make it seem difficult to comfortably accept the null hypothesis that the variable is stationary. But underneath the calculation you see a warning, written in red, that states "... p-value is greater than the printed value shown." It means that whether the true p-value is 0.11 or 0.99, the test result will show it as 0.1. In other words, the KPSS test does not reject that a variable is stationary nearly as often as I thought it did. And now I pretty much use it again all the time in such circumstances. It is nice to run a stationarity test that runs in the opposite direction of the others, as its null is that the variable is stationary instead of non-stationary as in all the other tests.
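The capped p-value described above is easy to reproduce (a sketch, my addition, using the tseries package on a simulated stationary series):
library(tseries)
set.seed(1)
kpss.test(rnorm(200))   # for a clearly stationary series the printout caps the p-value at 0.1
                        # and warns that the actual p-value is greater than the printed one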
This does not change the fact that different tests will often give you contradictory results. I typically run three tests, and if the variable passes 2 out of 3 tests, I deem that adequate evidence that the variable is stationary.
Last but not least, when I look at your time series graph your variable appears stationary enough so that it should not render any model that you build using it "spurious." | Unit root tests ambiguous - is time series stationary? | I am not surprised by these results. I got them very often. The KPSS test for some reason is very sensitive, if not overly so, as it rejects the vast majority of variables as stationary. In other w | Unit root tests ambiguous - is time series stationary?
52,888 | Unit root tests ambiguous - is time series stationary? | ADF and PP tests address a specific form of nonstationarity, i.e. unit-root nonstationarity. Apparently, you reject that form. However, unit-root nonstationarity is not the only possible form of nonstationarity. Hence, you do not conclude that the data is stationary (you only conclude that there is not enough evidence for unit-root nonstationarity). Combine this with an indication of nonstationarity by the KPSS test, and you are even more convinced the data is nonstationary.
Your data might have some level shift (the first half of the sample does not have quite the same level as the second half) and perhaps some heteroskedasticity (the oscillations in the first half of the sample seem a little larger than those in the second half).
So no, I would not feel comfortable "going for stationarity".
52,889 | Unit root tests ambiguous - is time series stationary? | There are six different unit root tests available in EViews:
The Augmented Dickey-Fuller (ADF) Test
Dickey-Fuller Test with GLS Detrending (DFGLS)
The Phillips-Perron (PP) Test
The Kwiatkowski, Phillips, Schmidt, and Shin (KPSS) Test
Elliott, Rothenberg, and Stock Point Optimal (ERS) Test
Ng and Perron (NP) Tests
Each test has some assumptions. For example, the ADF test should be used if the series has autocorrelation but is homoskedastic,
and the PP test should be used if the series has both issues: autocorrelation and the presence of heteroskedasticity.
It is better to select a suitable test for stationarity instead of applying all of them.
You can read more about these tests at
http://www.eviews.com/help/helpintro.html#page/content/advtimeser-Unit_Root_Testing.html
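Outside EViews, the same families of tests are also available elsewhere; for instance, a sketch in R with the urca package (the series x is only a placeholder, and defaults are used for each test):
library(urca)
x <- arima.sim(list(ar = 0.5), n = 200)   # placeholder series
summary(ur.df(x, type = "drift"))   # ADF
summary(ur.ers(x))                  # ERS / DF-GLS
summary(ur.pp(x))                   # Phillips-Perron
summary(ur.kpss(x))                 # KPSS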
All the best!
52,890 | What is the difference between Machine Learning and Deep Learning? | Starting with the first page of Google Scholar, one finds some promising abstracts.
I. Arel, D. C. Rose, T. P. Karnowski, Deep Machine Learning - A New Frontier in Artificial Intelligence Research
This article provides an overview of the mainstream deep learning approaches and research directions proposed over the past decade. It is important to emphasize that each approach has strengths and weaknesses, depending on the application and context in which it is being used. Thus, this article presents a summary on the current state of the deep machine learning field and some perspective into how it may evolve. Convolutional Neural Networks (CNNs) and Deep Belief Networks (DBNs) (and their respective variations) are focused on primarily because they are well established in the deep learning field and show great promise for future work.
Yann LeCun, Yoshua Bengio & Geoffrey Hinton, Deep Learning, Nature
Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
@frankov suggested adding this diagram which summarizes one interpretation of the different flavors of machine-learning.
52,891 | What is the difference between Machine Learning and Deep Learning? | What is Machine Learning?
Machine Learning is a way in which scientists build algorithms that can learn.
Choose any Task T, such as driving a car.
Then your algorithm can display a certain level of performance P.
If this improves with experience E, then the algorithm is said to have...
learned!
So what is Deep Learning?
Well, it's a bit more complex, but basically let's say you're trying to approximate a function. You have inputs and you have outputs. But you don't know what goes on in the middle. For instance, we know humans can drive cars well, they don't crash (often), but we don't really know how the mind works to achieve this...
So let's think about a neural network model that attempts to drive a car. It is a bit like the brain. There are many layers to the model. In fact, it's a function within a function within a function, etc. Because there are many functions nested within one another... we use the term deep to describe it!
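As a rough sketch (not part of the original answer), the "function within a function" idea in R:
set.seed(1)
z  <- rnorm(4)                                  # a toy input
W1 <- matrix(rnorm(12), 3, 4); b1 <- rnorm(3)   # weights of layer 1
W2 <- matrix(rnorm(9), 3, 3);  b2 <- rnorm(3)   # weights of layer 2
layer <- function(x, W, b) tanh(W %*% x + b)    # one layer = linear map + nonlinearity
deep_net <- function(x) layer(layer(x, W1, b1), W2, b2)   # a function within a function
deep_net(z)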
In one line:
Deep learning is a kind of neural network, and a neural network is a kind of machine learner. That's it :-)
I recommend this textbook.
52,892 | What is the difference between Machine Learning and Deep Learning? | Google "machine learning" and you will find a lot of definitions. What deep learning is might be just slightly harder to put a finger on. Everyone will agree that neural nets are a method that is part of machine learning. However, 'traditional' neural nets tended to work very poorly ('overtrain') on nets with a large number of hidden layers. Nets with a large number of hidden layers have been (up to now anyway) necessary for solving complicated problems such as image recognition. Deep learning is the name given to a number of techniques for finding the optimal constants in the hidden layers of a neural net so that overtraining doesn't occur in nets with a large number of hidden layers.
52,893 | What is the difference between Machine Learning and Deep Learning? | Artificial Intelligence is the greater area, a field of knowledge. Machine Learning is a sub-field that consists of several techniques, for example supervised and unsupervised methods. In supervised learning, one famous approach is called artificial neural networks. An artificial neural network with a certain set of characteristics is called a deep neural network. It's a bit strange that deep learning became such a buzzword, like machine learning, which indeed makes it a bit confusing. One possible reason for the emergence of such a buzzword is that the feasibility of training deep neural networks allowed the field of machine learning, and even AI in general, to solve many problems that before seemed very hard. Some people today refer to them as low-hanging fruits, but most people agree they were not trivial at all in the past. So deep learning in a way became this area of knowledge around these types of neural networks, which gave rise to a whole ecosystem with a lot of active researchers.
52,894 | predict seasonality and trend combined, better approach? | There are several methods and models for this kind of analysis, for example: exponential smoothing, ARIMA time series models, or structural time series models. The topic is too broad to be covered here. Below, I give some examples in R just for illustration. For further details, you may start, for example, by looking at this online textbook (Forecasting: principles and practice by R. Hyndman and G. Athanasopoulos).
As regards your second question, my recommendation is in line with Winks's answer. I would stick to fitting a model using the sample data and then applying one-step-ahead forecasts. If anything, you can remove the last year from the sample and get forecasts for that year under different methods or models. Then, you can choose the model with the best performance according to some accuracy measure such as the Mean Absolute Error (minimum MAE).
The monthly totals of airline passengers (1949 to 1960) is a common
example used to illustrate time series models and methods (this series has also some resemblance to the patterns that you show).
In R, you can obtain one-year-ahead forecasts (blue line) by means of the Holt and Winters filter as follows:
x <- log(AirPassengers)
res1 <- HoltWinters(x)
pred1 <- predict(res1, 12)
plot(cbind(x, pred1), plot.type = "single", ylab = "", col = c("black", "blue"))
Obtaining confidence intervals for the forecasts is easier by means of parametric methods. An ARIMA model can be used to obtain forecasts and 95% confidence intervals (red dotted lines) as follows:
require("forecast")
res2 <- auto.arima(x)
pred2 <- predict(res2, 12)
plot(cbind(x, pred2$pred, pred2$pred + 1.96 * pred2$se, pred2$pred - 1.96 * pred2$se),
plot.type = "single", ylab = "", col = c("black", "blue", "red", "red"), lty = c(1,1,2,2))
The basic structural time series model is suited for the kind of components that you describe. In addition to forecasts and confidence intervals, an estimate of the trend and seasonal components is also obtained (this post gives an example that includes forecasts for the components):
require("stsm")
mod <- stsm.model(model = "BSM", y = x, transPars = "StructTS")
res3 <- maxlik.td.optim(mod, KF.version = "KFKSDS",
KF.args = list(P0cov = TRUE), method = "L-BFGS-B", gr = "numerical")
pred3 <- predict(char2numeric(res3$model), x, n.ahead = 12)
plot(cbind(x, pred3$pred, pred3$pred + 1.96 * pred3$se, pred3$pred - 1.96 * pred3$se),
plot.type = "single", ylab = "", col = c("black", "blue", "red", "red"), lty = c(1,1,2,2))
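As an illustration of the holdout comparison mentioned above (a sketch, reusing x = log(AirPassengers) defined earlier):
library(forecast)
train <- window(x, end = c(1959, 12))     # hold out the last year
test  <- window(x, start = c(1960, 1))
fc_hw    <- predict(HoltWinters(train), 12)
fc_arima <- predict(auto.arima(train), 12)$pred
mae <- function(e) mean(abs(e))
c(HW = mae(test - fc_hw), ARIMA = mae(test - fc_arima))   # choose the model with the smaller MAE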
52,895 | predict seasonality and trend combined, better approach? | If your data are simple, and you are just toying around, the decomposition into seasonal and overall trend should already be pretty good. But if you want to dig deeper, there is a more formal approach: Using Kernels to define the relations between your points.
The best example I have seen of this kind of task is on the Mauna Loa CO2 concentration dataset (Wikimedia Commons illustration).
A possible model for this dataset is shown in Gaussian Processes for Machine Learning, by Rasmussen & Williams (Second edition available online), in section 5.4.3.
Warning: Gaussian Processes (Wikipedia) are not easy to understand, especially if you do not have a background in Kernels. In a very simplified way, it is like doing Nearest Neighbour regression (a point has the value of close points), but using Kernels (Wikipedia) to encode prior knowledge about the data and the form of the function. What kernels allow you to do is specify how "related" two points are.
In this case, you can build a kernel that defines "relatedness" by how close in month we are, and how close in years, such that 12/2015 is related to 01/2016 and 12/2014, but not so much related to 06/2015 and not to 06/2004.
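As a sketch of that idea (not from the original answer; the hyperparameters are arbitrary), a product of a periodic and a smooth kernel over time t measured in months:
k <- function(t1, t2, p = 12, l_per = 1, l_se = 36) {
  exp(-2 * sin(pi * (t1 - t2) / p)^2 / l_per^2) *  # periodic part: same month of the year
  exp(-(t1 - t2)^2 / (2 * l_se^2))                 # smooth decay as the years drift apart
}
k(0, 12)   # twelve months apart (same month): high "relatedness"
k(0, 6)    # six months apart (opposite season): low "relatedness"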
Since you mention Support Vector Regression, you could probably do something similar using such a Kernel there.
Considering your second question, it would be best to leave predicted data outside of your training data. Doing this does not add information, and if your model is biased, the predicted data will only reinforce this bias and lead to poorer prediction. This is true for all methods.
52,896 | Variation or variance | Variation is a general term which expresses dispersion. Variation is not precisely defined; it talks only about the quality of some process which produces imprecise results and expresses that imprecision. Variance is a measure of variation, which is defined as the second central moment. There are many measures of variation, like the range, the standard deviation and so on. However, in the context of R-squared, variation can only mean variance.
[Later edit to address the comment]
You mean that actually we measure variation by using variance like we measure weight by using kilogram or ounce or pound. If I am wrong please correct me.
Well, it's not quite right. Kilograms, ounces or pounds are measurement units; all of them have a precise meaning, which is the weight. Staying closer to your example, I would say that variance is to variation like the weight of your object is to how big an object is. In this case an object can be big because it has many kilograms, a big volume, a large surface area, and so on. Notice that in "how big an object is?" the notion of big is not precise, and can be understood slightly differently, depending on what you are interested in.
Going back to variation, the point is that you can interpret that variation in different ways. Consider a sample which comes from a distribution. The variation of the values in the sample describes how the values are spread. Depending on your context, the characteristics of that spread which interest you can be different. You might use variance because you perhaps suppose a normal distribution and you want to test whether two samples have significantly different means. You can also use the range (maximum value minus minimum value) because perhaps you want to display the values on a screen. Also, you might use variance to compute R-squared for linear models to have an idea about how well the model fits the data. Obviously variance and standard deviation are prevalent due to their statistical meaning, which helps in conducting inferences.
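A one-line illustration in R of several measures of variation for the same (arbitrary) sample:
x <- c(2, 4, 4, 4, 5, 5, 7, 9)
c(variance = var(x), sd = sd(x), range = diff(range(x)))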
52,897 | Comparing two or more treatments with inverse probability of treatment weighting | You can use twang with more than 2 treatment levels -- I use it all the time to obtain propensity scores for multiple (i.e. >2) treatments and it's one of my all-time favorite R packages because there is no need to guess the functional relationships between your treatments and covariates. Since twang uses gradient boosted regression, it can fit nonlinear relationships and interactions automatically. All you need to do is assess the balance statistics after obtaining the propensity scores using the get.weights function. Using the mnps function in twang instead of the ps function, you can obtain propensity score weights for multiple treatments. The good folks at RAND have even prepared a nice tutorial for multiple treatment propensity score weighting.
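For orientation only, a sketch of that workflow (the data set, variable names and tuning values are placeholders, and argument names should be checked against the twang documentation):
library(twang)
library(survey)
mnps_fit <- mnps(treatment ~ age + sex + comorbidity,   # hypothetical 3-level treatment and covariates
                 data = mydata, estimand = "ATE",
                 stop.method = "es.mean", n.trees = 3000, verbose = FALSE)
mydata$w <- get.weights(mnps_fit, stop.method = "es.mean")
des <- svydesign(ids = ~1, weights = ~w, data = mydata)  # weighted outcome analysis
svyglm(outcome ~ treatment, design = des)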
One word of caution is in order, however, with twang. It isn't the most efficiently written program and it cannot be easily parallelized, so if you have a lot of data (e.g. > 100,000 observations), it can be quite slow. Also, sometimes weights can be assigned which are too large and therefore influence your results too much. I perform a simple check on the weights to make sure that no weight is overly large (e.g. no weight accounts for more than say 5% of the total of all weights). Note too that the twang propensity score weights are not standardized, so they do not add to one. They can be standardized through some manipulation, but I rarely find that I need to do this.
Lastly, work by Elizabeth Stuart has shown that propensity scores built with gradient boosting methods outperform other methods. Therefore, I'd strongly advocate the use of twang.
52,898 | Comparing two or more treatments with inverse probability of treatment weighting | Imbens (2000) and Imbens and Wooldridge (2009) outline a method of adjusting for treatment choice based on pre-treatment characteristics in situations where treatment takes on more than two values. This approach is termed generalized propensity scores. The first step is to predict treatment category using multinomial logistic regression, and then you predict the model-based probabilities of being in each of the treatment groups. IPTW is based on the inverse of the probability value for the treatment condition each individual actually was in. As Dimitry mentions, there are baked-in Stata routines for doing this in the teffects set of programs.
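A sketch of those steps in R (the variable names are hypothetical):
library(nnet)
fit <- multinom(treatment ~ age + sex + severity, data = mydata)  # multinomial logit; treatment has 3+ levels
p   <- predict(fit, type = "probs")                               # one column of probabilities per treatment level
# IPTW weight = 1 / probability of the treatment actually received
mydata$iptw <- 1 / p[cbind(seq_len(nrow(mydata)), as.integer(mydata$treatment))]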
Imbens, G. (2000). The role of propensity score in estimating dose-response in observational studies for causal effect. Biometrika, 87(3), 706–710. doi: 10.1093/biomet/87.3.706
Imbens, G. W., & Wooldridge, J. M. (2009). Recent developments in the econometrics of program evaluation. Journal of Economic Literature, 47(1), 5–86. doi:10.1257/jel.47.1.5
52,899 | Expected Value of Gamma Distribution | I would go about it the lazy way: by starting with a definition and looking hard at what ensues, in order to see whether somebody has already shown me the answer. In what follows no calculations are needed at all, and only the very simplest rules (of exponents and integrals) are required to follow the algebra.
Let's begin with the Gamma distribution. Choose a unit of measurement of $X$ in which $\beta = 1$, so that we may fairly say $X$ has a $\Gamma(\alpha)$ distribution. This means the density is positive only for positive values, where the probability density element is given by
$$f_\alpha(x)dx = \frac{1}{\Gamma(\alpha)} x^{\alpha} e^{-x} \frac{dx}{x}.$$
(If you're curious, the expression $dx/x$ is explained at https://stats.stackexchange.com/a/185709. If you don't like it, replace $x^\alpha dx/x$ by $x^{\alpha-1} dx$.)
Recall that the normalizing constant is there to make the integral of $f_\alpha(x) dx$ unity, whence we can deduce that
$$\begin{aligned}
\Gamma(\alpha) &= \Gamma(\alpha)(1) = \Gamma(\alpha)\int_0^\infty f_\alpha(x) dx = \frac{\Gamma(\alpha)}{\Gamma(\alpha)}\int_0^\infty x^{\alpha} e^{-x} \frac{dx}{x} \\
&= \int_0^\infty x^{\alpha} e^{-x} \frac{dx}{x}.
\end{aligned}\tag{1}$$
It doesn't matter what number $\Gamma(\alpha)$ actually is. It suffices to see that it is well-defined and finite provided $\alpha\gt 0$ and otherwise diverges.
Now let's turn to the rules for expectation. The "law of the unconscious statistician" says the expectation of any function of $X$, such as $X^p$ for some power $p$ (which is usually positive but can be negative and even complex), is obtained by integrating that function of $x$ against the density:
$$E[X^p] = \int_0^\infty x^p \frac{1}{\Gamma(\alpha)} x^{\alpha} e^{-x} \frac{dx}{x}.$$
It's time to stare. Ignoring the integral, the integrand is a simple enough expression. Let's rewrite it using the rules of algebra and, in the process, move that constant value of $1/\Gamma(\alpha)$ out of the integral:
$$E[X^p] = \frac{1}{\Gamma(\alpha)} \int_0^\infty x^{p+\alpha} e^{-x} \frac{dx}{x}.\tag{2}$$
That should look awfully familiar: it's just like another Gamma distribution density function, but with the power $p+\alpha$ instead of $\alpha$. Equation $(1)$ tells us immediately, with no further thinking or calculation, that
$$\int_0^\infty x^{p+\alpha} e^{-x} \frac{dx}{x} = \Gamma(p+\alpha).$$
Plugging this into the right hand side of $(2)$ yields
$$E[X^p] = \frac{\Gamma(p+\alpha)}{\Gamma(\alpha)}.$$
It looks like we had better have (the real part of) $p+\alpha \gt 0$ in order for this to converge, as noted previously.
As a double-check, we may use our formula to compute the first few moments and compare them to, say, what Wikipedia says. For the mean we obtain
$$E\left(X^1\right) = \frac{\Gamma(1+\alpha)}{\Gamma(\alpha)} = \alpha$$
and for the second (raw) moment,
$$E\left(X^2\right) = \frac{\Gamma(2+\alpha)}{\Gamma(\alpha)} = \alpha(\alpha+1).$$
Consequently the variance is $$E\left(X^2\right) - E(X)^2 = \alpha(\alpha+1) - \alpha^2 = \alpha.$$
These results agree perfectly with the authority. There are no convergence problems because, since $\alpha\gt 0$, both $\alpha+1 \gt 0$ and $\alpha+2 \gt 0$.
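A quick numerical check of these two moments (not part of the original argument; shape $\alpha = 3$ and rate $1$ are chosen arbitrarily):
set.seed(42)
x <- rgamma(1e6, shape = 3, rate = 1)
c(mean(x), mean(x^2))      # should be close to 3 and 3 * 4 = 12
gamma(3 + 2) / gamma(3)    # the formula with p = 2 also gives 12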
You may now safely plug in $p=-2$ and draw your conclusions about the original question. Remember to check the conditions under which the answer exists. And don't forget to change the units of $X$ back to the original ones: that will multiply your answer by $\beta^p$ (or $\beta^{-p}$, depending on whether you think $\beta$ is a scale or a rate).
52,900 | Expected Value of Gamma Distribution | Assuming you're concerned with a random variable following a Gamma distribution with shape $\alpha > 0$ and rate $\beta > 0$ parameters, that is $X \sim Gamma(\alpha,\beta)$, you can find $\mathbb{E}[\frac{1}{X^2}]$ in the following manner:
For any random variable X with a continuous distribution (like the Gamma) whose probability density function is denoted by $f$ (in your example
$f(x) = \frac{\beta^{\alpha}}{\Gamma(\alpha)} x^{\alpha - 1}e^{- \beta x}$) and for any function $g$ of this variable (in your case $g(x) = \frac{1}{x^2} = x^{-2}$), it holds:
$$ \mathbb{E}[g(x)] = \int\limits_{-\infty}^{+ \infty}g(x)f(x)dx$$
In your example, it simplifies very much (pay attention to the $-3$):
$$g(x)f(x) = \frac{\beta^{\alpha}}{\Gamma(\alpha)} x^{\alpha - 3}e^{- \beta x}$$
The fraction doesn't depend on $x$, so it can be pulled outside the integral.
By the way, for a discrete distribution it's very similar:
$$ \mathbb{E}[g(x)] = \sum\limits_{x \in \mathcal{X}} g(x)f(x),
~~\text{where}~\mathcal{X}~\text{denotes support for X (set of values it can take)}$$
I won't keep you in suspense any longer. First of all, recall that $\Gamma(\alpha+1) = \alpha \cdot \Gamma(\alpha)$.
Let $f_{\alpha}(x) = \frac{\beta^{\alpha}}{\Gamma(\alpha)} x^{\alpha - 1}e^{- \beta x}$. Combining these two results in a straightforward observation:
$$ x \cdot f_{\alpha}(x) = \frac{\alpha}{\beta} \cdot f_{\alpha+1}(x)$$
Consequently:
$$ \frac{f_{\alpha+1}(x)}{x} = \frac{\beta}{\alpha} \cdot f_{\alpha}(x)$$
Using this twice, you will get the result:
$$
\frac{f_{\alpha}(x)}{x^2} = \frac{\beta}{\alpha-1} \cdot
\frac{f_{\alpha-1}(x)}{x} = \frac{\beta}{\alpha-1} \cdot \frac{\beta}{\alpha-2} \cdot f_{\alpha-2}(x)
$$
Ultimately (since $f_{\alpha-2}(x)$ is also a PDF whose integral equals $1$, provided $\alpha > 2$):
$$
\mathbb{E}(\frac{1}{X^2}) = \int\limits_{-\infty}^{+\infty}
\frac{f_{\alpha}(x)}{x^2} dx = \frac{\beta}{\alpha-1} \cdot \frac{\beta}{\alpha-2} \cdot \int\limits_{-\infty}^{+\infty} f_{\alpha-2}(x) dx =
\frac{\beta}{\alpha-1} \cdot \frac{\beta}{\alpha-2}
$$
The solution above is for this particular case but, as whuber pointed out, more generally, for any real $p$ with $\alpha + p > 0$ (and with $\beta$ a rate parameter), it holds:
$$ \mathbb{E}(X^p) = \beta^{-p} \cdot \frac{\Gamma(\alpha + p)}{\Gamma(\alpha)}$$
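A quick Monte Carlo check of the final result (a sketch; $\alpha = 5$ and $\beta = 2$ are arbitrary):
set.seed(1)
alpha <- 5; beta <- 2
x <- rgamma(1e6, shape = alpha, rate = beta)
mean(1 / x^2)                           # simulated E[1/X^2]
beta^2 / ((alpha - 1) * (alpha - 2))    # closed form: 4/12 = 1/3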