53,201
Do propensity scores reflect the probability of treatment or outcome?
The propensity is for treatment assignment, not the outcome. While there are natural situations where the propensity strongly mimics randomization, there are more scenarios where treatment is determined in the most non-random ways possible. Given a sufficiently large sample, a search for predictors of treatment assignment will be successful. If treatment assignment can be perfectly determined from the data, those variables should be scrutinized as likely representing treatment bias (guilty until proven otherwise). If the propensity is really measuring the latent variable of disease severity, the estimates obtained from propensity matching or regression are likely biased.
53,202
Do propensity scores reflect the probability of treatment or outcome?
The propensity score was developed for the most part by Donald Rubin. Here's the abstract of his 1983 paper with Rosenbaum from Biometrika: The propensity score is the conditional probability of assignment to a particular treatment given a vector of observed covariates. Both large and small sample theory show that adjustment for the scalar propensity score is sufficient to remove bias due to all observed covariates. Applications include: (i) matched sampling on the univariate propensity score, which is a generalization of discriminant matching, (ii) multivariate adjustment by subclassification on the propensity score where the same subclasses are used to estimate treatment effects for all outcome variables and in all subpopulations, and (iii) visual representation of multivariate covariance adjustment by a two-dimensional plot. PAUL R. ROSENBAUM, DONALD B. RUBIN; The central role of the propensity score in observational studies for causal effects, Biometrika, Volume 70, Issue 1, 1 April 1983, Pages 41–55, https://doi.org/10.1093/biomet/70.1.41 There is a strong connection between propensity scores and confounding adjustment. Confounders predict both the outcome and receipt of treatment$^1$, so candidate confounders are a subset of the candidate propensity factors. Thus, when you select covariates for developing a propensity score, it is often the case that they also predict the outcome. That's not surprising. Consider comparing cancer treatments with respect to survival: people with advanced cancers may opt for more aggressive treatment, so when you compare survival, cancer stage at diagnosis is a very important confounder. $^1$ the definition is a bit more subtle than that; see Pearl, Causality, 2nd edition.
53,203
Do propensity scores reflect the probability of treatment or outcome?
As both others have said, propensity scores represent the probability of receiving treatment. From the Stata manual for its native propensity score matching command (emphasis mine): Propensity-score matching uses an average of the outcomes of similar subjects who get the other treatment level to impute the missing potential outcome for each subject. The ATE is computed by taking the average of the difference between the observed and potential outcomes for each subject. teffects psmatch determines how near subjects are to each other by using estimated treatment probabilities, known as propensity scores. This type of matching is known as propensity-score matching (PSM). So, propensity score matching is used to calculate the average treatment effect or the average treatment effect among the treated, but it does so by matching individual observations on the propensity score. Which, as you see above, is the probability of receiving treatment. Now, do note that you can use propensity scoring with a continuous or binary outcome of interest (or count, or whatever else you can imagine). Maybe the outcome in your case is binary and this is the source of the misunderstanding? Either way, the propensity score itself is, as has been said ad nauseam, the probability of receiving treatment, and if the senior statistician seriously thinks that it is the probability of receiving the outcome, then this person is not qualified to be a senior statistician. I'm betting on a misunderstanding.
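To make the treatment/outcome distinction concrete, here is a minimal R sketch of estimating propensity scores with logistic regression; the data-generating process and the covariate names (age, severity) are invented for illustration, not taken from any of the answers above.

    # Invented example: treatment assignment depends on covariates, never on the outcome
    set.seed(42)
    n        <- 500
    age      <- rnorm(n, 50, 10)
    severity <- rnorm(n)
    treated  <- rbinom(n, 1, plogis(-2 + 0.02 * age + 0.8 * severity))

    # The propensity score is P(treated = 1 | covariates), here via logistic regression
    ps_fit <- glm(treated ~ age + severity, family = binomial())
    pscore <- fitted(ps_fit)   # estimated probabilities of *treatment*, not of any outcome
    summary(pscore)

The fitted values are exactly what gets matched on in PSM: estimated probabilities of receiving treatment.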
53,204
Gradient in Gradient Boosting
In short, the gradient here refers to the gradient of the loss function, and it is the target value for each new tree to predict. Suppose you have a true value $y$ and a predicted value $\hat{y}$. The predicted value is constructed from some existing trees. You are now trying to construct the next tree, which gives a prediction $z$, so that your final prediction will be $\hat{y}+z$. The correct choice is $z = y - \hat{y}$; therefore, you are now constructing trees to predict $y - \hat{y}$. It turns out this is a special case of gradient boosting with loss function $L = \frac{1}{2} (y - \hat{y})^2$, for which the prediction target of the new tree is the negative gradient of the loss: $y - \hat{y} = - \frac{\partial L}{\partial \hat{y}}$. More formally, if you already have a prediction $\hat{y}$ and you are trying to add a new prediction $z$ from a new tree to it, then the loss function can be expanded by Taylor's expansion near $\hat{y}$ as $$L = L_0 + \frac{\partial L}{\partial \hat{y}} z.$$ In the spirit of gradient descent, we want $z$ to point along the negative gradient direction, hence $z \sim - \frac{\partial L}{\partial \hat{y}}$. In that way, you set the target response to be predicted by the new tree. All that is left is to construct a tree whose output on each input data point is the negative gradient.
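Here is a minimal R sketch of this recipe for squared-error loss, where each new tree is fit to the current residuals $y - \hat{y}$ (the negative gradient). The simulated data, the shrinkage rate, and the shallow rpart trees are illustrative choices, not part of the answer above.

    library(rpart)  # regression trees

    set.seed(1)
    n <- 200
    x <- runif(n, 0, 10)
    y <- sin(x) + rnorm(n, sd = 0.3)

    y_hat <- rep(mean(y), n)      # F_0: the best constant under squared loss
    nu    <- 0.1                  # shrinkage (learning rate)

    for (m in 1:100) {
      r     <- y - y_hat          # residuals = negative gradient of (1/2)(y - y_hat)^2
      tree  <- rpart(r ~ x, data = data.frame(x = x, r = r), maxdepth = 2)
      y_hat <- y_hat + nu * predict(tree, newdata = data.frame(x = x))
    }

    mean((y - y_hat)^2)           # training MSE shrinks as trees are added

Swapping in a different loss only changes the residual line: the tree always chases $-\partial L / \partial \hat{y}$.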
53,205
Lagrangian dual of SVM: derivation
First, let's calculate the norm $||w||^2$. $$||w||^2 = \sum_i \alpha_iy_i\big(\sum_j\alpha_jy_j\langle x_i,x_j\rangle\big)$$ which can be rearranged to $\sum_i\sum_j\alpha_i\alpha_jy_iy_j\langle x_i,x_j\rangle$. The $\langle x_i, x_j\rangle$ construct is present because the norm is assumed to be defined in terms of the inner product - every inner product induces a norm by the formula $||z||^2 = \langle z,z \rangle$ - so when we calculate $||w||^2$ (making the desired substitution from above) we use $\langle x_i,x_j \rangle$. The reason we don't get something like $$\sum_i \sum_j \langle \alpha_i y_i x_i, \alpha_j y_j x_j \rangle$$ is that the inner product is defined on $x$, and everything else is just a scalar multiplier, which, by basic properties of inner products, can be moved outside the $\langle \cdot,\cdot \rangle$. Now, substituting into $\sum_i\alpha_i[y_i(\langle w, x_i\rangle+b)-1]$ can be done in parts: $$\sum_i\alpha_i[y_i(\langle w, x_i\rangle+b)-1] = \sum_i\alpha_iy_i\langle w, x_i\rangle + b\sum_i\alpha_iy_i - \sum_i\alpha_i$$ The last term on the r.h.s. is simply $-\sum_i\alpha_i$, and the middle term equals $0$, as the second constraint is $\sum_i\alpha_iy_i = 0$. Substituting in for the first term gives: $$\sum_i\alpha_iy_i\langle w, x_i\rangle =\sum_i\alpha_iy_i\langle \sum_j\alpha_jy_jx_j, x_i\rangle = \sum_i\sum_j\alpha_iy_i\alpha_jy_j\langle x_i, x_j \rangle$$ where the last step is by basic properties of inner products. Having gotten this far, we need to (remember to) a) multiply $||w||^2$ by $1/2$, b) multiply the long second term by $-1$, and c) combine them: $${1\over 2}\sum_i \sum_j \alpha_iy_i\alpha_jy_j\langle x_i,x_j\rangle - \sum_i \sum_j \alpha_iy_i\alpha_jy_j\langle x_i,x_j\rangle - 0 + \sum_i\alpha_i $$ which reduces to the desired result $$-{1\over 2}\sum_i \sum_j \alpha_iy_i\alpha_jy_j\langle x_i,x_j\rangle + \sum_i\alpha_i $$
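As a quick numerical sanity check on the first identity, one can verify in R that $||w||^2$ computed from $w = \sum_i \alpha_i y_i x_i$ matches the double sum over the Gram matrix; the random values below are purely illustrative.

    set.seed(7)
    n <- 6; d <- 3
    X     <- matrix(rnorm(n * d), n, d)   # rows are the x_i
    y     <- sample(c(-1, 1), n, replace = TRUE)
    alpha <- runif(n)

    w <- colSums(alpha * y * X)           # w = sum_i alpha_i y_i x_i
    K <- X %*% t(X)                       # K[i, j] = <x_i, x_j>

    all.equal(sum(w^2), sum(outer(alpha * y, alpha * y) * K))  # TRUE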
53,206
Is the sample mean always an unbiased estimator of the expected value?
Answered in comments: "The first question is answered immediately using the linearity of expectation. The second conclusion is true only when the underlying distribution has finite variance, in which case it follows with a simple computation of the variance." – whuber. In fact the second conclusion even follows without assuming finite variance, since the mean $\mu$ is assumed to exist: the strong law of large numbers then gives the result, and it can be proved without assuming finite variance.
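To see the second point in action, here is a small illustrative R simulation using a Pareto-type distribution with tail index 1.5, so the mean exists but the variance is infinite; the sample mean is still unbiased and still converges, just slowly.

    set.seed(123)
    a  <- 1.5                        # tail index: mean exists (a > 1), variance infinite (a < 2)
    mu <- a / (a - 1)                # true mean of a Pareto(x_m = 1, a) variable: 3

    draw_mean <- function(n) mean(runif(n)^(-1 / a))   # inverse-CDF sampling

    mean(replicate(10000, draw_mean(50)))   # averages out near mu = 3 (noisy: heavy tails)
    draw_mean(1e6)                          # a single large-sample mean is also near 3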
53,207
Is the sample mean always an unbiased estimator of the expected value?
One case in which $\hat \mu$ may be a biased estimator of $\mu$: the samples $x_1,..., x_n$ are not uniformly randomly sampled from the population of interest. This is really two problems: (1) Some values in the population are more likely to be sampled than others. A classic example is a voluntary opinion poll: it seems very probable that people with strong opinions are more likely to complete the survey than people who are more indifferent (a quick simulation of this appears below). (2) We have samples, but they are not from the population of interest. This somewhat seems like a cop-out, but in practice it is extremely common. For example, when we look at polls before an election, the population of interest is the actual votes cast on election night. Clearly, it's not possible to get any of those samples before the election, so we look at a population whose distribution we assume/hope is close to the real distribution we care about and which can be sampled: polls taken ahead of the election. Then we either hope that we have a uniform random sample from that population, or we use methods to rebalance the estimator for over/under-representation of various groups. We may also try to adjust for potential changes in opinion over time, to account for the fact that we don't have samples from the population of interest but may be able to model some of the relation between it and the population we can sample. I suspect this may not be the type of answer you were looking for: the OP may have been curious about the case when the $x_i$ are uniformly sampled from the correct distribution, but the estimator could be inconsistent (as noted, this can happen if the variance is undefined). I presented this answer because I think this is a much more common issue in practice and often should be considered more closely during applied analyses.
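A tiny R sketch of point (1), with a made-up response model in which the probability of answering the survey increases with the respondent's opinion:

    set.seed(8)
    opinion <- rnorm(1e5)                        # population opinions, true mean 0
    respond <- rbinom(1e5, 1, plogis(opinion))   # higher opinion => more likely to respond
    mean(opinion[respond == 1])                  # clearly above the true mean of 0

Every respondent answers truthfully; the bias comes entirely from who ends up in the sample.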
53,208
Proving OLS unbiasedness without conditional zero error expectation?
You can't, because the statement is not true under the weaker assumption. Consider for example the autoregressive model \begin{equation*} y_{t}=\beta y_{t-1}+\epsilon _{t}, \end{equation*} in which the strict exogeneity condition $E(\epsilon|X)=0$ is violated even under the assumption $E(\epsilon_{t}y_{t-1})=0$: we have \begin{equation*} E(\epsilon_ty_{t})=E(\epsilon_t(\beta y_{t-1}+\epsilon _{t}))=E(\epsilon_{t}^{2})\neq 0. \end{equation*} But, as $y_{t+1}=\beta y_{t}+\epsilon_{t+1}$, $y_t$ is also a regressor (for $y_{t+1}$), and hence it is impossible in this model for the error term to be uncorrelated with future regressors. Now, it is also well known that OLS is biased for the coefficient of an AR(1) model; see Why is OLS estimator of AR(1) coefficient biased?
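A quick R simulation makes the bias visible; the sample size, coefficient, and replication count are arbitrary.

    set.seed(2024)
    beta <- 0.9; n <- 30

    ols_ar1 <- function() {
      y <- numeric(n)
      for (t in 2:n) y[t] <- beta * y[t - 1] + rnorm(1)
      coef(lm(y[-1] ~ y[-n] - 1))     # OLS of y_t on y_{t-1}, no intercept
    }

    mean(replicate(5000, ols_ar1()))  # noticeably below the true 0.9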
53,209
Proving OLS unbiasedness without conditional zero error expectation?
For this question we can make use of a simple decomposition of the OLS estimator: $$\begin{equation} \begin{aligned} \hat{\boldsymbol{\beta}} = (\mathbf{X}^\text{T} \mathbf{X})^{-1} \mathbf{X}^\text{T} \mathbf{Y} &= (\mathbf{X}^\text{T} \mathbf{X})^{-1} \mathbf{X}^\text{T} (\mathbf{X} \boldsymbol{\beta} + \mathbf{\epsilon}) \\[6pt] &= \boldsymbol{\beta} + (\mathbf{X}^\text{T} \mathbf{X})^{-1} \mathbf{X}^\text{T} \mathbf{\epsilon}. \\[6pt] \end{aligned} \end{equation}$$ This useful decomposition follows directly from the form of the OLS estimator and the underlying regression equation, so it is not dependent on any assumptions about the behaviour of the error terms. From this decomposition, the conditional bias (taking the regressors as fixed) is: $$\text{Bias}(\hat{\boldsymbol{\beta}}|\mathbf{x}) = \mathbb{E}(\hat{\boldsymbol{\beta}} | \mathbf{x}) - \boldsymbol{\beta} = (\mathbf{x}^\text{T} \mathbf{x})^{-1} \mathbf{x}^\text{T} \mathbb{E}(\mathbf{\epsilon}| \mathbf{x}).$$ The unconditional (marginal) bias (taking the regressors as random variables) is: $$\text{Bias}(\hat{\boldsymbol{\beta}}) = \mathbb{E}(\hat{\boldsymbol{\beta}}) - \boldsymbol{\beta} = \mathbb{E}((\mathbf{X}^\text{T} \mathbf{X})^{-1} \mathbf{X}^\text{T} \mathbf{\epsilon}).$$ In both cases, the condition $\mathbb{E}(\mathbf{\epsilon}| \mathbf{x}) = \mathbf{0}$ is sufficient for unbiasedness, but in the latter case, the weaker condition $\mathbb{E}((\mathbf{X}^\text{T} \mathbf{X})^{-1} \mathbf{X}^\text{T} \mathbf{\epsilon}) = \mathbf{0}$ is sufficient. The condition $\mathbb{E}( \mathbf{X}^\text{T} \mathbf{\epsilon}) = \mathbf{0}$ is not sufficient for unbiasedness in either case.
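The decomposition itself is easy to confirm numerically; in the R sketch below the design, coefficients, and errors are arbitrary, and the two sides agree to floating-point precision.

    set.seed(5)
    n    <- 100
    X    <- cbind(1, rnorm(n))   # design matrix with intercept
    beta <- c(2, -1)
    eps  <- rnorm(n)
    Y    <- X %*% beta + eps

    beta_hat <- solve(t(X) %*% X, t(X) %*% Y)
    all.equal(c(beta_hat), c(beta + solve(t(X) %*% X, t(X) %*% eps)))  # TRUE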
53,210
Why decision boundary is of (D-1) dimensions?
The line is a 1-D boundary in 2-D space. If you think of yourself as a point on the decision boundary, the number of (non-parallel, non-anti-parallel) directions you could travel along the boundary gives its dimension. On a line you can go only forward or backward (backward being anti-parallel to forward), so there is one dimension. A 2-D plane lets you go forward-backward or left-right, which makes it 2-D. In the images (not reproduced here), the decision boundaries for 2D and 3D input spaces are shown in blue, and orange lines illustrate that these decision boundaries have $D-1$ dimensions themselves. Think about rotating the image: you can rotate it so that the blue line becomes horizontal (i.e., very clearly 1D), but no matter how you rotate it, the plane created by the black axes will still be 2D. This is because the blue line is a 1D slice of 2D space.
53,211
Why decision boundary is of (D-1) dimensions?
I think the answer of @Dan is sufficient, but for someone who has basic linear algebra, here is another explanation. The discriminant function $f$ is a linear transformation, and the vectors on the decision boundary form its null space. By the rank-nullity theorem, $\dim(V) = \dim(\text{null space}) + \dim(f(V))$, where $V$ is the input space. Since $\dim(f(V)) = 1$, $\dim(\text{null space}) = D - 1$.
53,212
What is the probability that a best of seven series goes to the seventh game with negative binomial
Summary: the negative-binomial approach in the question ignores that either team can win Game 7. After correcting for this, the results agree. Assumption: not explicitly stated in the question, but it seems we are assuming the games are iid with probability 0.5 for either team to win (a sequence of fair coin flips). The probability of Game 7 happening, using the negative binomial distribution: the series goes to Game $7$ if and only if either team obtains its $4$th win in the $7$th game. The events "A obtains its $4$th win in Game $7$" and "B obtains its $4$th win in Game $7$" are mutually exclusive, so \begin{equation} \mathbb{P}(\textrm{Game $7$ is played}) = \\ \mathbb{P}(\textrm{A obtains its $4$th win in the $7$th game}) + \mathbb{P}(\textrm{B obtains its $4$th win in the $7$th game}). \end{equation} Each term on the right-hand side is the probability of the given team obtaining its $4$th win after $3$ losses, or, equivalently, of the losing team winning exactly $3$ games before the $4$th loss. As reasoned in the question, this is the probability of $3$ in a negative-binomial distribution with parameters $p=0.5,~r=4$ (where $r$ is the number of failures). Thus, the total probability is \begin{equation} 2 {4+3-1 \choose 3 }\,0.5^4\,0.5^3 \approx 0.31, \end{equation} exactly the same answer as derived in the question using the binomial distribution. Additional remark: for a general series played to $r$ wins going to the $(2r-1)$th game, the approaches can be seen to give the same result: the binomial coefficient is the same, and the factor $2$ in the negative-binomial approach cancels one factor of $0.5$. This cancellation is closely related to the fact that the binomial approach "naturally" ignores the result of the final game, while in the negative-binomial approach both cases need to be taken into account.
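Both routes are one-liners in R; dnbinom counts failures before the size-th success, matching the parameterization above.

    2 * dnbinom(3, size = 4, prob = 0.5)   # negative-binomial route: 0.3125
    dbinom(3, size = 6, prob = 0.5)        # binomial route (3-3 after six games): 0.3125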
53,213
What is the probability that a best of seven series goes to the seventh game with negative binomial
The negative binomial would be appropriate if you wanted to know how many games it would take before team A won 4 games. However, that might be, say, 104 games, in which case team B would have won 100 games. Obviously that's not the way an actual seven game series works! Your calculation - $P(X=7 | r=4)$ - using the negative binomial distribution calculates the probability that team B would have won exactly 3 games before team A wins 4, given that team B might have won any number of games before team A wins 4. It ignores the possibility that team B is the one that wins 4 games first and the impossibility of either team winning 5 or more games.
53,214
Why is a GARCH model useful?
GARCH can be used for what you call predictions. The question is: predictions of what? Predictions of volatility. The reason GARCH is useful is that it may better explain the volatility of certain series, particularly in finance. For instance, look at the graph below, which shows daily log differences of the S&P 500 series. Clearly, the volatility is lower lately than it was a few years ago, and moreover it appears clustered; notice the bursts of high volatility. This kind of series is not well explained by a standard random-walk model in which the variance is constant. Therefore, we have more complicated models such as GARCH, which explain the volatility better and in which volatility is not constant. Estimation of volatility is very important in risk management and option pricing. We may not have a good predictor of where prices will go, but that doesn't prevent us from assessing the risks of holding these assets and valuing them.
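A minimal base-R simulation of a GARCH(1,1) process reproduces exactly this clustering; the parameter values below are arbitrary but satisfy the stationarity condition $\alpha_1 + \beta_1 < 1$.

    set.seed(99)
    n <- 2000
    omega <- 0.05; a1 <- 0.10; b1 <- 0.85   # a1 + b1 < 1 => stationary

    eps  <- numeric(n)                      # simulated returns
    sig2 <- numeric(n)                      # conditional variances
    sig2[1] <- omega / (1 - a1 - b1)        # start at the unconditional variance
    eps[1]  <- sqrt(sig2[1]) * rnorm(1)
    for (t in 2:n) {
      sig2[t] <- omega + a1 * eps[t - 1]^2 + b1 * sig2[t - 1]
      eps[t]  <- sqrt(sig2[t]) * rnorm(1)
    }

    plot(eps, type = "l", ylab = "return")  # calm stretches punctuated by bursts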
53,215
If X and Y are perfectly correlated, what is the correlation of X+Y and X-Y?
Hint: In general, \begin{align} \rho_{A,B} &= \frac{\operatorname{cov}(A,B)}{\sqrt{\operatorname{var}(A)\operatorname{var}(B)}},\\ \operatorname{var}(X\pm Y)&= \operatorname{var}(X)+\operatorname{var}(Y) \pm 2\operatorname{cov}(X,Y),\\ \text{and}\qquad\operatorname{cov}(X+Y,X-Y)&=\operatorname{var}(X)-\operatorname{var}(Y)\end{align} So, work out what $\rho_{X+Y,X-Y}$ is in general, and in the special case when $Y = aX+b$. You might be surprised at the result.
53,216
If X and Y are perfectly correlated, what is the correlation of X+Y and X-Y?
I'll treat this as self-study, and I'd encourage you to read its wiki and add the tag. Your argument is already very good. Here are a few pointers. Feel free to write a comment so we can discuss and work towards a good answer. I assume you are looking at Pearson's correlation, right? (Does your argument work for other measures of correlation?) What does a perfect correlation mean graphically? What will $X+Y$ and $X-Y$ look like graphically if $X$ and $Y$ are perfectly Pearson-correlated?
53,217
If X and Y are perfectly correlated, what is the correlation of X+Y and X-Y?
I see this already has an accepted answer, but I've always liked simulations more than equations, and this seemed like a fun question. I generated a variable $x$ from a $N(0, 1)$ distribution, with a sample size $n$ drawn from $U(100, 10000)$. To make $y$, I simply added a constant, drawn from $U(1, 100)$, to $x$. I then calculated the correlation between $X + Y$ and $X - Y$. I did this 10,000 times:

    set.seed(1839)
    cors <- sapply(1:10000, function(placeholder) {
      n  <- runif(1, 100, 10000)
      b0 <- runif(1, 1, 100)
      x  <- rnorm(n)
      y  <- b0 + x
      cor(x + y, x - y)
    })

You'll get a ton of warnings. Running warnings()[1:5] shows the first five:

    Warning messages:
    1: In cor(x + y, x - y) : the standard deviation is zero
    2: In cor(x + y, x - y) : the standard deviation is zero
    3: In cor(x + y, x - y) : the standard deviation is zero
    4: In cor(x + y, x - y) : the standard deviation is zero
    5: In cor(x + y, x - y) : the standard deviation is zero

We can still look at the histogram of the correlations that were defined, calling hist(cors[!is.na(cors)]). We can also count how many of the simulations had $Var(X - Y) = 0$:

    set.seed(1839)
    var_is_zero <- sapply(1:10000, function(placeholder) {
      n  <- runif(1, 100, 10000)
      b0 <- runif(1, 1, 100)
      x  <- rnorm(n)
      y  <- b0 + x
      var(x - y) == 0   # is the variance numerically exactly zero?
    })

Then prop.table(table(var_is_zero)) shows the proportion of simulations that generated $Var(X - Y) = 0$:

    var_is_zero
     FALSE   TRUE 
    0.1698 0.8302 

(In exact arithmetic $x - y$ is always the constant $-b_0$; whether R sees its variance as exactly zero depends on floating-point rounding.) But why were some defined and some undefined? Was it related to the sample size, or to the constant?

    set.seed(1839)
    dat <- as.data.frame(matrix(nrow = 10000, ncol = 3))
    colnames(dat) <- c("var_is_zero", "n", "b0")
    for (i in 1:10000) {
      n  <- runif(1, 100, 10000)
      b0 <- runif(1, 1, 100)
      x  <- rnorm(n)
      y  <- b0 + x
      dat$var_is_zero[i] <- var(x - y) == 0
      dat$n[i]  <- n
      dat$b0[i] <- b0
    }

We can now predict whether or not the variance was zero from the sample size, the constant, and their interaction, looking at the result with summary(glm(var_is_zero ~ n * b0, data = dat, family = binomial())):

    Call:
    glm(formula = var_is_zero ~ n * b0, family = binomial(), data = dat)

    Deviance Residuals: 
        Min       1Q   Median       3Q      Max  
    -2.5154   0.1628   0.3039   0.5997   1.2894  

    Coefficients:
                  Estimate Std. Error z value Pr(>|z|)    
    (Intercept) -2.613e-01  1.038e-01  -2.518   0.0118 *  
    n           -2.114e-05  1.785e-05  -1.184   0.2362    
    b0           5.323e-02  2.973e-03  17.907   <2e-16 ***
    n:b0        -4.011e-07  4.968e-07  -0.807   0.4195    
    ---
    Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

    (Dispersion parameter for binomial family taken to be 1)

        Null deviance: 9111.4  on 9999  degrees of freedom
    Residual deviance: 7148.0  on 9996  degrees of freedom
    AIC: 7156

    Number of Fisher Scoring iterations: 6

It looks like the larger the constant $b_0$, the more likely the correlation is to be undefined.
53,218
If X and Y are perfectly correlated, what is the correlation of X+Y and X-Y?
If $X$ is a linear function of $Y$ (definition of perfect correlation), then both $X-Y$ and $X+Y$ will be linear functions of $Y$, and therefore are linear functions of each other. So, $X-Y$ and $X+Y$ are perfectly correlated.
53,219
If X and Y are perfectly correlated, what is the correlation of X+Y and X-Y?
$X,Y$ perfectly correlated $\implies Cov(X,Y)=\sqrt{Var(X)Var(Y)}$ (taking $\rho_{X,Y}=1$). Then $$Cov(X+Y,X-Y)=Cov(X,X)-Cov(X,Y)+Cov(Y,X)-Cov(Y,Y)=Var(X)-Var(Y)$$ and $$Var(X+Y)Var(X-Y)=[Var(X)+Var(Y)+2Cov(X,Y)][Var(X)+Var(Y)-2Cov(X,Y)]=[Var(X)+Var(Y)]^2-4Cov(X,Y)^2=[Var(X)+Var(Y)]^2-4Var(X)Var(Y)=[Var(X)-Var(Y)]^2.$$ Hence, when $Var(X)\neq Var(Y)$, $$\rho_{X+Y,X-Y}=\frac{Cov(X+Y,X-Y)}{\sqrt{Var(X+Y)Var(X-Y)}}=\pm1.$$ As @whuber pointed out, when $Var(X)=Var(Y)$, $\rho_{X+Y,X-Y}$ is undefined. @Dilip Sarwate has provided the answer in the comment to his post; I added some details. Indeed, if $Var(X)=Var(Y)$ and the variables are perfectly correlated, then $Y=\pm X+b$, hence either $X+Y$ or $X-Y$ is constant, so one of the variances in the denominator is zero and $\rho$ is undefined.
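A two-line R check of both cases (the slopes and intercepts are arbitrary):

    x <- rnorm(1000)
    cor(x + (2 * x + 1), x - (2 * x + 1))  # -1, since Var(X) < Var(Y)
    cor(x + (x + 1), x - (x + 1))          # X - Y is constant: typically NA with a
                                           # "standard deviation is zero" warning, though
                                           # floating-point rounding can also yield junk,
                                           # as the simulation answer above explores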
53,220
Does the likelihood function for the Poisson distribution integrate to 1
Besides my comment, the claim is true if you replace the sum with an integral (which makes more sense). Indeed, one can show that for all $k \in \mathbb{N}$: $$\int_0^\infty P(X=k\mid\lambda)\,d\lambda = \int_0^\infty \frac{\lambda^k \exp(-\lambda)}{k!}\,d\lambda = \frac{\Gamma(k+1)}{k!} = \frac{k!}{k!} = 1.$$ In fact, the gamma function is defined by $\Gamma(x) = \int_0^\infty \lambda^{x-1}\exp(-\lambda)\, d\lambda$, and it is well known that $\Gamma(k+1) = k!$ for $k \in \mathbb{N}$.
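A quick numerical confirmation in R, integrating the Poisson likelihood over $\lambda$ for several values of $k$:

    sapply(0:5, function(k) integrate(function(l) dpois(k, l), 0, Inf)$value)
    # each value is 1 (up to numerical tolerance)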
53,221
Explain non-uniform p-values for small sample t-tests in R
This isn't a bug in R. Welch-Satterthwaite type t-tests (the default two sample t-test in R) don't actually have a t-distribution. The t-with-fractional-d.f. you get is an approximation to the null distribution. The Welch-Satterthwaite tests work well in a variety of situations, but even when all the assumptions hold the null distribution of p-values will be somewhat non-uniform (this will impact significance levels; you won't have quite the significance level you were aiming for). There are effectively 3 parameters that control the null distribution -- the ratio of population variances, and the two sample sizes. The test uses an approximation to make it just a function of a single parameter (the Welch-Satterthwaite d.f.). For some choices of variance ratio and sample-size ratio the distribution of p-values will tend to be somewhat skewed to lower values and for other choices it will tend to be skewed a bit to higher values. This will tend to be more noticeable at small sample sizes, but occurs quite generally. It's possible to use simulation at your specific n's and variance ratio rather than the t-approximation to get better control of significance levels and so more accurate p-values, if that's necessary. However, if your sample sizes are equal (as it looks like they are in your simulation), an equal-variance t-test has little problem with control of significance level even when the variances are unequal, so that may actually be a reasonable default choice when you have equal sample sizes.
Explain non-uniform p-values for small sample t-tests in R
This isn't a bug in R. Welch-Satterthwaite type t-tests (the default two sample t-test in R) don't actually have a t-distribution. The t-with-fractional-d.f. you get is an approximation to the null
Explain non-uniform p-values for small sample t-tests in R This isn't a bug in R. Welch-Satterthwaite type t-tests (the default two sample t-test in R) don't actually have a t-distribution. The t-with-fractional-d.f. you get is an approximation to the null distribution. The Welch-Satterthwaite tests work well in a variety of situations, but even when all the assumptions hold the null distribution of p-values will be somewhat non-uniform (this will impact significance levels; you won't have quite the significance level you were aiming for). There are effectively 3 parameters that control the null distribution -- the ratio of population variances, and the two sample sizes. The test uses an approximation to make it just a function of a single parameter (the Welch-Satterthwaite d.f.). For some choices of variance ratio and sample-size ratio the distribution of p-values will tend to be somewhat skewed to lower values and for other choices it will tend to be skewed a bit to higher values. This will tend to be more noticeable at small sample sizes, but occurs quite generally. It's possible to use simulation at your specific n's and variance ratio rather than the t-approximation to get better control of significance levels and so more accurate p-values, if that's necessary. However, if your sample sizes are equal (as it looks like they are in your simulation), an equal-variance t-test has little problem with control of significance level even when the variances are unequal, so that may actually be a reasonable default choice when you have equal sample sizes.
Explain non-uniform p-values for small sample t-tests in R This isn't a bug in R. Welch-Satterthwaite type t-tests (the default two sample t-test in R) don't actually have a t-distribution. The t-with-fractional-d.f. you get is an approximation to the null
53,222
Unbiased Estimator of the Variance of the Sample Variance
The question is to find an unbiased estimator of: $$\text{Var}(S^2)=\frac{\mu_4}{n}-\frac{(n-3)}{n(n-1)} {\mu_2^2}$$ ... where $\mu_r$ denotes the $r^\text{th}$ central moment of the population. This requires finding unbiased estimators of $\mu_4$ and of $\mu_2^2$. An unbiased estimator of $\mu_4$ By defn, an unbiased estimator of the $r^\text{th}$ central moment is the $r^\text{th}$ h-statistic: $$\mathbb{E}[h_r] = \mu_r$$ The $4^\text{th}$ h-statistic is given by: where: i) I am using the HStatistic function from the mathStatica package for Mathematica ii) $s_r$ denotes the $r^\text{th}$ power sum $$s_r=\sum _{i=1}^n X_i^r$$ Alternative: The OP asked about finding an unbiased solution in terms of sample central moments $m_r=\frac{1}{n} \sum _{i=1}^n \left(X_i-\bar{X}\right)^r$. An unbiased estimator of $\mu_4$ in terms of $m_i$ is: An unbiased estimator of $\mu_2^2$ An unbiased estimator of a product of central moments (here, $\mu_2 \times \mu_2$)is known as a polyache (play on poly-h). An unbiased estimator of $\mu_2^2$ is given by: where: i) I am using the PolyH function from the mathStatica package for Mathematica ii) For more detail on polyaches, see section 7.2B of Chapter 7 of Rose and Smith, Mathematical Statistics with Mathematica (am one of the authors), a free download of which is available here. While mathStatica does not have an automated converter to express PolyH in terms of sample central moments $m_i$ (nice idea), doing that conversion yields: Putting it all together: An unbiased estimator of $\frac{\mu_4}{n}-\frac{(n-3)}{n(n-1)} {\mu_2^2}$ is thus: or, more compactly, in terms of sample central moments $m_i$: ........... And as a check, we can run the expectations operator over the above (the $1^\text{st}$ RawMoment of sol), expressing the solution in terms of Central moments of the population: ... and all is good.
Unbiased Estimator of the Variance of the Sample Variance
The question is to find an unbiased estimator of: $$\text{Var}(S^2)=\frac{\mu_4}{n}-\frac{(n-3)}{n(n-1)} {\mu_2^2}$$ ... where $\mu_r$ denotes the $r^\text{th}$ central moment of the population. Thi
Unbiased Estimator of the Variance of the Sample Variance The question is to find an unbiased estimator of: $$\text{Var}(S^2)=\frac{\mu_4}{n}-\frac{(n-3)}{n(n-1)} {\mu_2^2}$$ ... where $\mu_r$ denotes the $r^\text{th}$ central moment of the population. This requires finding unbiased estimators of $\mu_4$ and of $\mu_2^2$. An unbiased estimator of $\mu_4$ By defn, an unbiased estimator of the $r^\text{th}$ central moment is the $r^\text{th}$ h-statistic: $$\mathbb{E}[h_r] = \mu_r$$ The $4^\text{th}$ h-statistic is given by: where: i) I am using the HStatistic function from the mathStatica package for Mathematica ii) $s_r$ denotes the $r^\text{th}$ power sum $$s_r=\sum _{i=1}^n X_i^r$$ Alternative: The OP asked about finding an unbiased solution in terms of sample central moments $m_r=\frac{1}{n} \sum _{i=1}^n \left(X_i-\bar{X}\right)^r$. An unbiased estimator of $\mu_4$ in terms of $m_i$ is: An unbiased estimator of $\mu_2^2$ An unbiased estimator of a product of central moments (here, $\mu_2 \times \mu_2$)is known as a polyache (play on poly-h). An unbiased estimator of $\mu_2^2$ is given by: where: i) I am using the PolyH function from the mathStatica package for Mathematica ii) For more detail on polyaches, see section 7.2B of Chapter 7 of Rose and Smith, Mathematical Statistics with Mathematica (am one of the authors), a free download of which is available here. While mathStatica does not have an automated converter to express PolyH in terms of sample central moments $m_i$ (nice idea), doing that conversion yields: Putting it all together: An unbiased estimator of $\frac{\mu_4}{n}-\frac{(n-3)}{n(n-1)} {\mu_2^2}$ is thus: or, more compactly, in terms of sample central moments $m_i$: ........... And as a check, we can run the expectations operator over the above (the $1^\text{st}$ RawMoment of sol), expressing the solution in terms of Central moments of the population: ... and all is good.
Unbiased Estimator of the Variance of the Sample Variance The question is to find an unbiased estimator of: $$\text{Var}(S^2)=\frac{\mu_4}{n}-\frac{(n-3)}{n(n-1)} {\mu_2^2}$$ ... where $\mu_r$ denotes the $r^\text{th}$ central moment of the population. Thi
53,223
What is the difference between Universe and Population?
I just took stats last year. Population is, as you described, a complete set of elements (persons or objects) that possess some common characteristic defined by the sampling criteria established by the researcher. In statistics, Universe is a synonym of Population. Source: population. (n.d.) Collins English Dictionary – Complete and Unabridged, 12th Edition 2014. (1991, 1994, 1998, 2000, 2003, 2006, 2007, 2009, 2011, 2014). Retrieved October 20 2017 from https://www.thefreedictionary.com/population Confirming the use of Universe and Population, as synonyms in modern data science: https://stats.oecd.org/glossary/detail.asp?ID=2087
What is the difference between Universe and Population?
I just took stats last year. Population is, as you described, a complete set of elements (persons or objects) that possess some common characteristic defined by the sampling criteria established by t
What is the difference between Universe and Population? I just took stats last year. Population is, as you described, a complete set of elements (persons or objects) that possess some common characteristic defined by the sampling criteria established by the researcher. In statistics, Universe is a synonym of Population. Source: population. (n.d.) Collins English Dictionary – Complete and Unabridged, 12th Edition 2014. (1991, 1994, 1998, 2000, 2003, 2006, 2007, 2009, 2011, 2014). Retrieved October 20 2017 from https://www.thefreedictionary.com/population Confirming the use of Universe and Population, as synonyms in modern data science: https://stats.oecd.org/glossary/detail.asp?ID=2087
What is the difference between Universe and Population? I just took stats last year. Population is, as you described, a complete set of elements (persons or objects) that possess some common characteristic defined by the sampling criteria established by t
53,224
What is the difference between Universe and Population?
The term 'universe', while it has a well-established meaning in set theory and other related mathematical fields, in my experience is rarely used in statistics as a synonym of the term 'population'. Indeed, all classical statistics textbooks, exclusively use the term 'population', usually defined as entire group of individuals (not necessarily people) about which we want information. On the other hand, the 'sample' can be defined to be A sample is the part of the population from which we actually collect information. We use a sample to draw conclusions about the entire population. Conceptually the two terms mean essentially the same thing, i.e. the set of all possible statistical units, but past statisticians decided to call this set 'population' so we adhere to this convention.
What is the difference between Universe and Population?
The term 'universe', while it has a well-established meaning in set theory and other related mathematical fields, in my experience is rarely used in statistics as a synonym of the term 'population'. I
What is the difference between Universe and Population? The term 'universe', while it has a well-established meaning in set theory and other related mathematical fields, in my experience is rarely used in statistics as a synonym of the term 'population'. Indeed, all classical statistics textbooks, exclusively use the term 'population', usually defined as entire group of individuals (not necessarily people) about which we want information. On the other hand, the 'sample' can be defined to be A sample is the part of the population from which we actually collect information. We use a sample to draw conclusions about the entire population. Conceptually the two terms mean essentially the same thing, i.e. the set of all possible statistical units, but past statisticians decided to call this set 'population' so we adhere to this convention.
What is the difference between Universe and Population? The term 'universe', while it has a well-established meaning in set theory and other related mathematical fields, in my experience is rarely used in statistics as a synonym of the term 'population'. I
53,225
What is the difference between Universe and Population?
Want to improve this post? Add citations from reputable sources by editing the post. Posts with unsourced content may be edited or deleted. The collection of all elements possessing common characteristics that comprise universe is known as the population. A subgroup of the members of population chosen for participation in the study is called sample. The population consists of each and every element of the entire group.
What is the difference between Universe and Population?
Want to improve this post? Add citations from reputable sources by editing the post. Posts with unsourced content may be edited or deleted.
What is the difference between Universe and Population? Want to improve this post? Add citations from reputable sources by editing the post. Posts with unsourced content may be edited or deleted. The collection of all elements possessing common characteristics that comprise universe is known as the population. A subgroup of the members of population chosen for participation in the study is called sample. The population consists of each and every element of the entire group.
What is the difference between Universe and Population? Want to improve this post? Add citations from reputable sources by editing the post. Posts with unsourced content may be edited or deleted.
53,226
What is the difference between Universe and Population?
Want to improve this post? Add citations from reputable sources by editing the post. Posts with unsourced content may be edited or deleted. Universe is the set all experimental units, from which a sample is to be drawn. Population is the set of all values of the variables to be studied from those experimental units. Thus, a U-sample contains experimental units, whereas a P-sample contains data.
What is the difference between Universe and Population?
Want to improve this post? Add citations from reputable sources by editing the post. Posts with unsourced content may be edited or deleted.
What is the difference between Universe and Population? Want to improve this post? Add citations from reputable sources by editing the post. Posts with unsourced content may be edited or deleted. Universe is the set all experimental units, from which a sample is to be drawn. Population is the set of all values of the variables to be studied from those experimental units. Thus, a U-sample contains experimental units, whereas a P-sample contains data.
What is the difference between Universe and Population? Want to improve this post? Add citations from reputable sources by editing the post. Posts with unsourced content may be edited or deleted.
53,227
What is the difference between Universe and Population?
Want to improve this post? Add citations from reputable sources by editing the post. Posts with unsourced content may be edited or deleted. The collection of all elements possessing common characteristic that comprise (Univers) is known as the population. And the subgroup of the member of population chosen for participating in the studybis called (Sample).
What is the difference between Universe and Population?
Want to improve this post? Add citations from reputable sources by editing the post. Posts with unsourced content may be edited or deleted.
What is the difference between Universe and Population? Want to improve this post? Add citations from reputable sources by editing the post. Posts with unsourced content may be edited or deleted. The collection of all elements possessing common characteristic that comprise (Univers) is known as the population. And the subgroup of the member of population chosen for participating in the studybis called (Sample).
What is the difference between Universe and Population? Want to improve this post? Add citations from reputable sources by editing the post. Posts with unsourced content may be edited or deleted.
53,228
What is the difference between Universe and Population?
Universe, population and sample must be understood together. Universe and population can refer to same thing and can be considered as synonym if only the population you use while choosing your samples includes all the members of universe. If you have data for all the members of universe then your population is universe and you are actually sampling from the universe. However, if you have data for only some members of universe then your population is those members of universe only and you are sampling from those members of universe whose data you have access. For instance, let's say you are doing a survey research on 10 million workers in country X. Your universe is all the workers. If you have access to social security number of all the workers where you can draw your sample of 10 thousand workers, then your universe and population are the same. If you have access to social security numbers of only 1 million workers then your universe is 10 million workers, your population is 1 million workers, and your sample is 10 thousand workers.
What is the difference between Universe and Population?
Universe, population and sample must be understood together. Universe and population can refer to same thing and can be considered as synonym if only the population you use while choosing your samples
What is the difference between Universe and Population? Universe, population and sample must be understood together. Universe and population can refer to same thing and can be considered as synonym if only the population you use while choosing your samples includes all the members of universe. If you have data for all the members of universe then your population is universe and you are actually sampling from the universe. However, if you have data for only some members of universe then your population is those members of universe only and you are sampling from those members of universe whose data you have access. For instance, let's say you are doing a survey research on 10 million workers in country X. Your universe is all the workers. If you have access to social security number of all the workers where you can draw your sample of 10 thousand workers, then your universe and population are the same. If you have access to social security numbers of only 1 million workers then your universe is 10 million workers, your population is 1 million workers, and your sample is 10 thousand workers.
What is the difference between Universe and Population? Universe, population and sample must be understood together. Universe and population can refer to same thing and can be considered as synonym if only the population you use while choosing your samples
53,229
What is the difference between Universe and Population?
Want to improve this post? Add citations from reputable sources by editing the post. Posts with unsourced content may be edited or deleted. The universe is broad in its nature. The universe in research is the area of your study while the population is the specific characteristics of the universe and the samples are selected units of the population.
What is the difference between Universe and Population?
Want to improve this post? Add citations from reputable sources by editing the post. Posts with unsourced content may be edited or deleted.
What is the difference between Universe and Population? Want to improve this post? Add citations from reputable sources by editing the post. Posts with unsourced content may be edited or deleted. The universe is broad in its nature. The universe in research is the area of your study while the population is the specific characteristics of the universe and the samples are selected units of the population.
What is the difference between Universe and Population? Want to improve this post? Add citations from reputable sources by editing the post. Posts with unsourced content may be edited or deleted.
53,230
Distribution like hypergeometric distribution, but with false replacements
Suppose an urn of $n$ balls begins with $s$ successes. What are the chances it will end up with $t$ successes ($0 \le t \le s$) after $d$ draws? Ignoring the trivial case $s=0$, this is a Markov chain on the numbers of successes $s$. The chance of a transition from $s$ to $s-1$ is $p(s,s-1)=s/n$; otherwise, the state stays the same: $p(s,s)=1-s/n$. The $n+1\times n+1$ transition matrix $\mathbb{P} = p(s,t)$ (indexing from $0$ through $n$) readily decomposes as $$\mathbb P = \mathbb{V}\operatorname{Diagonal}\left(\frac{n}{n}, \frac{n-1}{n}, \ldots, \frac{1}{n}, \frac{0}{n}\right) \mathbb{V}^{-1}$$ where $$\mathbb{V} = (v_{ij}, i=0,\ldots,n; j=0,\ldots, n) = \left(\binom{i}{j}\right)$$ is Pascal's Triangle and $$\mathbb{V}^{-1} = (v^{-1}_{ij}) = \left((-1)^{i+j}\binom{i}{j}\right)$$ contains the same values but with alternating signs. The chance of a transition from $s$ to $t$ after $d$ steps, $p_n(s,t)$, is found from the $s,t$ entry in $\mathbb{P}^n$ (again indexing from $0$ through $n$). But we easily compute $$\mathbb{P}^n= \mathbb{V}\operatorname{Diagonal}\left(\frac{n^d}{n^d}, \frac{(n-1)^d}{n^d}, \ldots, \frac{1^d}{n^d}, \frac{0^d}{n^d}\right)\mathbb{V}^{-1}.$$ Consequently, omitting all terms obviously zero, $$p_d(s,t) = n^{-d}\sum_{j=t}^s \binom{s}{j} (n-j)^d (-1)^{j+t}\binom{j}{t}.$$ These figures plot the chance of reaching an urn with $t$ successes starting with $s=100$ successes in $n=100$ total. The numbers of draws are 10,50,150,and 250. As they increase, the distribution moves to the left more and more slowly, first spreading and then contracting to zero. In terms of $t$, these are generalized hypergeometric distributions.
Distribution like hypergeometric distribution, but with false replacements
Suppose an urn of $n$ balls begins with $s$ successes. What are the chances it will end up with $t$ successes ($0 \le t \le s$) after $d$ draws? Ignoring the trivial case $s=0$, this is a Markov ch
Distribution like hypergeometric distribution, but with false replacements Suppose an urn of $n$ balls begins with $s$ successes. What are the chances it will end up with $t$ successes ($0 \le t \le s$) after $d$ draws? Ignoring the trivial case $s=0$, this is a Markov chain on the numbers of successes $s$. The chance of a transition from $s$ to $s-1$ is $p(s,s-1)=s/n$; otherwise, the state stays the same: $p(s,s)=1-s/n$. The $n+1\times n+1$ transition matrix $\mathbb{P} = p(s,t)$ (indexing from $0$ through $n$) readily decomposes as $$\mathbb P = \mathbb{V}\operatorname{Diagonal}\left(\frac{n}{n}, \frac{n-1}{n}, \ldots, \frac{1}{n}, \frac{0}{n}\right) \mathbb{V}^{-1}$$ where $$\mathbb{V} = (v_{ij}, i=0,\ldots,n; j=0,\ldots, n) = \left(\binom{i}{j}\right)$$ is Pascal's Triangle and $$\mathbb{V}^{-1} = (v^{-1}_{ij}) = \left((-1)^{i+j}\binom{i}{j}\right)$$ contains the same values but with alternating signs. The chance of a transition from $s$ to $t$ after $d$ steps, $p_n(s,t)$, is found from the $s,t$ entry in $\mathbb{P}^n$ (again indexing from $0$ through $n$). But we easily compute $$\mathbb{P}^n= \mathbb{V}\operatorname{Diagonal}\left(\frac{n^d}{n^d}, \frac{(n-1)^d}{n^d}, \ldots, \frac{1^d}{n^d}, \frac{0^d}{n^d}\right)\mathbb{V}^{-1}.$$ Consequently, omitting all terms obviously zero, $$p_d(s,t) = n^{-d}\sum_{j=t}^s \binom{s}{j} (n-j)^d (-1)^{j+t}\binom{j}{t}.$$ These figures plot the chance of reaching an urn with $t$ successes starting with $s=100$ successes in $n=100$ total. The numbers of draws are 10,50,150,and 250. As they increase, the distribution moves to the left more and more slowly, first spreading and then contracting to zero. In terms of $t$, these are generalized hypergeometric distributions.
Distribution like hypergeometric distribution, but with false replacements Suppose an urn of $n$ balls begins with $s$ successes. What are the chances it will end up with $t$ successes ($0 \le t \le s$) after $d$ draws? Ignoring the trivial case $s=0$, this is a Markov ch
53,231
Distribution like hypergeometric distribution, but with false replacements
There will be a recursion. If $S_{n,g,b}$ is the number of green balls successfully drawn with $n$ attempts starting with $g$ green balls and $b−g$ white balls then $$\mathbb P (S_{n,g,b} = s)= \frac{g}{b} \mathbb P (S_{n-1,g-1,b} = s-1) + \frac{b-g}{b} \mathbb P (S_{n-1,g,b} = s)$$ starting with $\mathbb P (S_{0,g,b} = 0)= 1$, and with $\mathbb P (S_{n,g,b} = s)= 0$ when $s \gt n$ or $s \gt g$ or $s \lt 0$ For example, with $b=15$ balls in total and $n=5$ attempts, the probability of $s$ successes when starting with $g$ green balls is about: Probability table with n=5 attempts, starting with g green and b=15 total balls (rows sum to 1) s (number of successes) 0 1 2 3 4 5 6 g start 0 1 0 0 0 0 0 0 1 0.708246 0.291754 0 0 0 0 0 2 0.488946 0.438600 0.072454 0 0 0 0 3 0.327680 0.483797 0.174104 0.014420 0 0 0 4 0.212084 0.462386 0.274015 0.049462 0.002054 0 0 5 0.131687 0.401982 0.352000 0.104691 0.009481 0.000158 0 6 0.077760 0.323563 0.397037 0.174617 0.026074 0.000948 0 7 0.043151 0.242261 0.405689 0.250272 0.055309 0.003319 0 8 0.022133 0.168149 0.380523 0.320790 0.099556 0.008849 0 9 0.010240 0.107034 0.328533 0.374993 0.159289 0.019911 0 10 0.004115 0.061248 0.259556 0.402963 0.232296 0.039822 0 11 0.001348 0.030434 0.184691 0.397630 0.312889 0.073007 0 12 0.000320 0.012342 0.114726 0.356346 0.391111 0.125156 0 13 0.000042 0.003612 0.058548 0.282469 0.451951 0.203378 0 14 0.000001 0.000572 0.021570 0.186943 0.474548 0.316365 0 15 0 0.000020 0.004148 0.089877 0.431407 0.474548 0 I am not aware of a common named distribution like this
Distribution like hypergeometric distribution, but with false replacements
There will be a recursion. If $S_{n,g,b}$ is the number of green balls successfully drawn with $n$ attempts starting with $g$ green balls and $b−g$ white balls then $$\mathbb P (S_{n,g,b} = s)= \frac
Distribution like hypergeometric distribution, but with false replacements There will be a recursion. If $S_{n,g,b}$ is the number of green balls successfully drawn with $n$ attempts starting with $g$ green balls and $b−g$ white balls then $$\mathbb P (S_{n,g,b} = s)= \frac{g}{b} \mathbb P (S_{n-1,g-1,b} = s-1) + \frac{b-g}{b} \mathbb P (S_{n-1,g,b} = s)$$ starting with $\mathbb P (S_{0,g,b} = 0)= 1$, and with $\mathbb P (S_{n,g,b} = s)= 0$ when $s \gt n$ or $s \gt g$ or $s \lt 0$ For example, with $b=15$ balls in total and $n=5$ attempts, the probability of $s$ successes when starting with $g$ green balls is about: Probability table with n=5 attempts, starting with g green and b=15 total balls (rows sum to 1) s (number of successes) 0 1 2 3 4 5 6 g start 0 1 0 0 0 0 0 0 1 0.708246 0.291754 0 0 0 0 0 2 0.488946 0.438600 0.072454 0 0 0 0 3 0.327680 0.483797 0.174104 0.014420 0 0 0 4 0.212084 0.462386 0.274015 0.049462 0.002054 0 0 5 0.131687 0.401982 0.352000 0.104691 0.009481 0.000158 0 6 0.077760 0.323563 0.397037 0.174617 0.026074 0.000948 0 7 0.043151 0.242261 0.405689 0.250272 0.055309 0.003319 0 8 0.022133 0.168149 0.380523 0.320790 0.099556 0.008849 0 9 0.010240 0.107034 0.328533 0.374993 0.159289 0.019911 0 10 0.004115 0.061248 0.259556 0.402963 0.232296 0.039822 0 11 0.001348 0.030434 0.184691 0.397630 0.312889 0.073007 0 12 0.000320 0.012342 0.114726 0.356346 0.391111 0.125156 0 13 0.000042 0.003612 0.058548 0.282469 0.451951 0.203378 0 14 0.000001 0.000572 0.021570 0.186943 0.474548 0.316365 0 15 0 0.000020 0.004148 0.089877 0.431407 0.474548 0 I am not aware of a common named distribution like this
Distribution like hypergeometric distribution, but with false replacements There will be a recursion. If $S_{n,g,b}$ is the number of green balls successfully drawn with $n$ attempts starting with $g$ green balls and $b−g$ white balls then $$\mathbb P (S_{n,g,b} = s)= \frac
53,232
Number of distinct scatterplots among $p$ variables
Assuming you don't count a plot of $X_3$ vs $X_6$ as distinct from a plot of $X_6$ vs $X_3$ and further assuming you don't care to plot a variable vs itself, then you want the number of distinct pairs $i,j$ for $i<j$ and $i$ and $j$ integers between $1$ and $p$ exclusive. There's $p \times p$ pairs $(i,j)$. We remove the $i=j$ cases, which removes $p$ of those, leaving $p \times (p-1)$. We then take the half that have $i<j$ (the other half have $i>j$ but they're the same set of plots with axes interchanged). This leaves $\frac12 p\times (p-1)$ Alternatively you could just look at it as the number of ways of choosing two distinct variables out of $p$, without regard to order, which is ${p \choose 2}=p(p-1)/2$.
Number of distinct scatterplots among $p$ variables
Assuming you don't count a plot of $X_3$ vs $X_6$ as distinct from a plot of $X_6$ vs $X_3$ and further assuming you don't care to plot a variable vs itself, then you want the number of distinct pairs
Number of distinct scatterplots among $p$ variables Assuming you don't count a plot of $X_3$ vs $X_6$ as distinct from a plot of $X_6$ vs $X_3$ and further assuming you don't care to plot a variable vs itself, then you want the number of distinct pairs $i,j$ for $i<j$ and $i$ and $j$ integers between $1$ and $p$ exclusive. There's $p \times p$ pairs $(i,j)$. We remove the $i=j$ cases, which removes $p$ of those, leaving $p \times (p-1)$. We then take the half that have $i<j$ (the other half have $i>j$ but they're the same set of plots with axes interchanged). This leaves $\frac12 p\times (p-1)$ Alternatively you could just look at it as the number of ways of choosing two distinct variables out of $p$, without regard to order, which is ${p \choose 2}=p(p-1)/2$.
Number of distinct scatterplots among $p$ variables Assuming you don't count a plot of $X_3$ vs $X_6$ as distinct from a plot of $X_6$ vs $X_3$ and further assuming you don't care to plot a variable vs itself, then you want the number of distinct pairs
53,233
Meaning of Min/Max Accuracy of a regression model
Let's break down the code: apply(actuals_preds, 1, min) Takes, for each row, the minimum of the prediction and the result. Similarly, apply(actuals_preds, 1, max) takes the maximum. Suppose the test outcomes are $y_1, \ldots, y_n$, and the predictions are $\hat{y}_1, \ldots, \hat{y}_n$. For any $i$, there are two cases: The first case is $\hat{y}_i = y_i - \epsilon_i$ for some $\epsilon_i \geq 0$. In this case, row $i$ will add to the mean, the term \begin{equation} \frac{y_i - \epsilon_i}{y_i} = 1 - \frac{\epsilon_i}{y_i}. \end{equation} The second case is $\hat{y}_i = y_i + \epsilon_i$ for some $\epsilon_i \geq 0$. In this case, row $i$ will add to the mean, the term \begin{equation} \frac{y_i}{y_i + \epsilon_i} \sim 1 - \frac{\epsilon_i}{y_i}. \end{equation} where the approximation holds for $\epsilon_i < y_i$ due to the series expansion of $\frac{1}{1 + x}$. Finally mean(min(actual, predicted)/max(actual, predicted)) takes the average of all these terms, obviously. The better the prediction, the higher it will be (approx. 1 for a nearly perfect prediction).
Meaning of Min/Max Accuracy of a regression model
Let's break down the code: apply(actuals_preds, 1, min) Takes, for each row, the minimum of the prediction and the result. Similarly, apply(actuals_preds, 1, max) takes the maximum. Suppose the tes
Meaning of Min/Max Accuracy of a regression model Let's break down the code: apply(actuals_preds, 1, min) Takes, for each row, the minimum of the prediction and the result. Similarly, apply(actuals_preds, 1, max) takes the maximum. Suppose the test outcomes are $y_1, \ldots, y_n$, and the predictions are $\hat{y}_1, \ldots, \hat{y}_n$. For any $i$, there are two cases: The first case is $\hat{y}_i = y_i - \epsilon_i$ for some $\epsilon_i \geq 0$. In this case, row $i$ will add to the mean, the term \begin{equation} \frac{y_i - \epsilon_i}{y_i} = 1 - \frac{\epsilon_i}{y_i}. \end{equation} The second case is $\hat{y}_i = y_i + \epsilon_i$ for some $\epsilon_i \geq 0$. In this case, row $i$ will add to the mean, the term \begin{equation} \frac{y_i}{y_i + \epsilon_i} \sim 1 - \frac{\epsilon_i}{y_i}. \end{equation} where the approximation holds for $\epsilon_i < y_i$ due to the series expansion of $\frac{1}{1 + x}$. Finally mean(min(actual, predicted)/max(actual, predicted)) takes the average of all these terms, obviously. The better the prediction, the higher it will be (approx. 1 for a nearly perfect prediction).
Meaning of Min/Max Accuracy of a regression model Let's break down the code: apply(actuals_preds, 1, min) Takes, for each row, the minimum of the prediction and the result. Similarly, apply(actuals_preds, 1, max) takes the maximum. Suppose the tes
53,234
Meaning of Min/Max Accuracy of a regression model
MinMax tells you how far the model's prediction is off. For a perfect model, this measure is 1.0. The lower the measure, the worse the model, based on out-of-sample performance. Just look at the formula and how it's implemented in R. If predict (the column predicteds in your data frame) exactly equals actual (actuals) for every instance of the test set, the row minimum would be the same as the row maximum, so the ratio would be 1.0 for all rows. If your model is terrible, sometimes its prediction is too high, other time too low, the min/max ratio would be much less than 1.0. So the average of that would be less than 1.0.
Meaning of Min/Max Accuracy of a regression model
MinMax tells you how far the model's prediction is off. For a perfect model, this measure is 1.0. The lower the measure, the worse the model, based on out-of-sample performance. Just look at the form
Meaning of Min/Max Accuracy of a regression model MinMax tells you how far the model's prediction is off. For a perfect model, this measure is 1.0. The lower the measure, the worse the model, based on out-of-sample performance. Just look at the formula and how it's implemented in R. If predict (the column predicteds in your data frame) exactly equals actual (actuals) for every instance of the test set, the row minimum would be the same as the row maximum, so the ratio would be 1.0 for all rows. If your model is terrible, sometimes its prediction is too high, other time too low, the min/max ratio would be much less than 1.0. So the average of that would be less than 1.0.
Meaning of Min/Max Accuracy of a regression model MinMax tells you how far the model's prediction is off. For a perfect model, this measure is 1.0. The lower the measure, the worse the model, based on out-of-sample performance. Just look at the form
53,235
Meaning of Min/Max Accuracy of a regression model
Actuals and predict both are in same dataset. Min_Max_accuracy will find out accuracy rate of each row. it can be considered accuracy rate of the model. it would less than zero like .69034, then accuracy percentage is 69%.
Meaning of Min/Max Accuracy of a regression model
Actuals and predict both are in same dataset. Min_Max_accuracy will find out accuracy rate of each row. it can be considered accuracy rate of the model. it would less than zero like .69034, then accur
Meaning of Min/Max Accuracy of a regression model Actuals and predict both are in same dataset. Min_Max_accuracy will find out accuracy rate of each row. it can be considered accuracy rate of the model. it would less than zero like .69034, then accuracy percentage is 69%.
Meaning of Min/Max Accuracy of a regression model Actuals and predict both are in same dataset. Min_Max_accuracy will find out accuracy rate of each row. it can be considered accuracy rate of the model. it would less than zero like .69034, then accur
53,236
Classification accuracy based on probability
Classifier metrics that compare the predicted probabilities to the true classes go by the name of proper scoring rules. The two most popular are the log-loss $$ L = \sum_i y_i \log(p_i) + (1 - y_i) \log(1 - p_i) $$ and the brier score $$ L = \sum_i (y_i - p_i)^2 $$ The log-loss is used more in practice, as it is the log likelihood of the Bernoulli distribution. It is good practice to fit and compare models using proper scoring rules, as this ensures your predicted probabilities are fit well and calibrated to the data. Once you have a well fit probability model, it can be used to answer a multitude of questions that cannot be answered with only class assignments. Additionally, the AUC is a popular metric. It is not a proper scoring rule, but it can be used to evaluate any probabilistic classifier in terms of an average performance across a range of hard classification thresholds. The AUC is the probability that a randomly chosen true positive class receives a greater predicted probability than a randomly chosen negative class.
Classification accuracy based on probability
Classifier metrics that compare the predicted probabilities to the true classes go by the name of proper scoring rules. The two most popular are the log-loss $$ L = \sum_i y_i \log(p_i) + (1 - y_i) \
Classification accuracy based on probability Classifier metrics that compare the predicted probabilities to the true classes go by the name of proper scoring rules. The two most popular are the log-loss $$ L = \sum_i y_i \log(p_i) + (1 - y_i) \log(1 - p_i) $$ and the brier score $$ L = \sum_i (y_i - p_i)^2 $$ The log-loss is used more in practice, as it is the log likelihood of the Bernoulli distribution. It is good practice to fit and compare models using proper scoring rules, as this ensures your predicted probabilities are fit well and calibrated to the data. Once you have a well fit probability model, it can be used to answer a multitude of questions that cannot be answered with only class assignments. Additionally, the AUC is a popular metric. It is not a proper scoring rule, but it can be used to evaluate any probabilistic classifier in terms of an average performance across a range of hard classification thresholds. The AUC is the probability that a randomly chosen true positive class receives a greater predicted probability than a randomly chosen negative class.
Classification accuracy based on probability Classifier metrics that compare the predicted probabilities to the true classes go by the name of proper scoring rules. The two most popular are the log-loss $$ L = \sum_i y_i \log(p_i) + (1 - y_i) \
53,237
Classification accuracy based on probability
It might be the case that one model (say M1 on your case) leads to more extreme predictions compared to the other (M2), meaning that "certainty" (I think this concept is misleading) will be also higher for correctly predicted/classified events. Instead of reporting % of correctly classified events, you could simply compute average predicted proba over the sample and compare this stat between the 2 models. However I would not try to over-interpret the diff in predicted proba. What really matters (in terms of accuracy) is whether events are correctly predicted/classified or not. The predicted proba are not necessarily 100% meaningful. For example if you use Normal errors (instead of Logistic ones) and then estimate a Probit model, you would obtain diff predicted proba.
Classification accuracy based on probability
It might be the case that one model (say M1 on your case) leads to more extreme predictions compared to the other (M2), meaning that "certainty" (I think this concept is misleading) will be also highe
Classification accuracy based on probability It might be the case that one model (say M1 on your case) leads to more extreme predictions compared to the other (M2), meaning that "certainty" (I think this concept is misleading) will be also higher for correctly predicted/classified events. Instead of reporting % of correctly classified events, you could simply compute average predicted proba over the sample and compare this stat between the 2 models. However I would not try to over-interpret the diff in predicted proba. What really matters (in terms of accuracy) is whether events are correctly predicted/classified or not. The predicted proba are not necessarily 100% meaningful. For example if you use Normal errors (instead of Logistic ones) and then estimate a Probit model, you would obtain diff predicted proba.
Classification accuracy based on probability It might be the case that one model (say M1 on your case) leads to more extreme predictions compared to the other (M2), meaning that "certainty" (I think this concept is misleading) will be also highe
53,238
How to make sense of this PCA plot with logistic regression decision boundary (breast cancer data)?
What does the 2-d PCA data/plot mean? The 2-d PCA data/plot represent two "compound features" which PCA created to capture as much of the variance in your original 13 features as possible. Assuming your 13 features are linearly independent (e.g. one feature is not just another feature times 2, for every row in your data), it would take 13 dimensions to capture 100% of the variation in your 13 raw features. However, often PCA can capture e.g. 98% of the variation in your data in just a few PCA dimensions. To see how much of the variance is explained by each PCA dimension for your problem, print x_pca.explained_variance_ratio_ after you fit() your x_pca object. When PCA can capture a large amount of the variance of your features in just 2 dimensions, that's especially convenient because then you can plot those 2 PCA dimensions as you have, and know that any groupings which show up on the 2-d plot correspond to natural groupings in your 13-dimensional data. What does the decision boundary mean? The decision boundary in your code is a prediction of your target variable, using as features (independent variables) the first two PCA dimensions of your 13-dimension original feature set. Why is your decision boundary not in the obvious gap? Remember the PCA dimensions were formed just based on your 13 independent variables, without looking at your target. The decision boundary is not a decision boundary between PCA clusters, it's a decision boundary using the PCA dimensions to predict target. So the fact that the decision boundary is not totally between the clusters means PCA's first two dimensions of your 13 features do a good job of separating your target classes, but not a perfect job. How to improve your plot What you really care about is target, right? So in your plot, don't plot all the points as red. Color them by target class. Then you will have a plot that shows you how well PCA and the information in your original features (represented by distance/space between clusters on your plot) distinguishes between target classes (which would be colors on the new plot).
How to make sense of this PCA plot with logistic regression decision boundary (breast cancer data)?
What does the 2-d PCA data/plot mean? The 2-d PCA data/plot represent two "compound features" which PCA created to capture as much of the variance in your original 13 features as possible. Assuming y
How to make sense of this PCA plot with logistic regression decision boundary (breast cancer data)? What does the 2-d PCA data/plot mean? The 2-d PCA data/plot represent two "compound features" which PCA created to capture as much of the variance in your original 13 features as possible. Assuming your 13 features are linearly independent (e.g. one feature is not just another feature times 2, for every row in your data), it would take 13 dimensions to capture 100% of the variation in your 13 raw features. However, often PCA can capture e.g. 98% of the variation in your data in just a few PCA dimensions. To see how much of the variance is explained by each PCA dimension for your problem, print x_pca.explained_variance_ratio_ after you fit() your x_pca object. When PCA can capture a large amount of the variance of your features in just 2 dimensions, that's especially convenient because then you can plot those 2 PCA dimensions as you have, and know that any groupings which show up on the 2-d plot correspond to natural groupings in your 13-dimensional data. What does the decision boundary mean? The decision boundary in your code is a prediction of your target variable, using as features (independent variables) the first two PCA dimensions of your 13-dimension original feature set. Why is your decision boundary not in the obvious gap? Remember the PCA dimensions were formed just based on your 13 independent variables, without looking at your target. The decision boundary is not a decision boundary between PCA clusters, it's a decision boundary using the PCA dimensions to predict target. So the fact that the decision boundary is not totally between the clusters means PCA's first two dimensions of your 13 features do a good job of separating your target classes, but not a perfect job. How to improve your plot What you really care about is target, right? So in your plot, don't plot all the points as red. Color them by target class. Then you will have a plot that shows you how well PCA and the information in your original features (represented by distance/space between clusters on your plot) distinguishes between target classes (which would be colors on the new plot).
How to make sense of this PCA plot with logistic regression decision boundary (breast cancer data)? What does the 2-d PCA data/plot mean? The 2-d PCA data/plot represent two "compound features" which PCA created to capture as much of the variance in your original 13 features as possible. Assuming y
53,239
How to make sense of this PCA plot with logistic regression decision boundary (breast cancer data)?
From your question: I train the logistic regression model on the 2-d data from the PCA. I plot the decision boundary using the intercept and coefficient and it does linearly separate the data. Logistic regresssion is not a classifier. Its coefficients certainly do not represent a "decision boundary." You have a model that's $$ \log \frac{p}{1-p} = \beta_0 + \beta_1x_1 + \beta_2x_2 $$ Where $p$ is the probability of your outcome, and the $x$-s are your principal components. In the below code, from your notebook, you're using $\beta_0$ and $\beta_1$ as the coefficients to a line in the original predictor space, and it's just dumb luck that it's anywhere near the gap in your data points. new_model.fit(x_pca, target) y_intercept = new_model.intercept_ # <- this is beta_0 slope = new_model.coef_[0][0] # <- this is beta_1 x_axis = np.linspace(-65, 113, 178) A decision would be based on where $\log p = \log (1-p)$ or some other threshold like that. This is what's not making sense with your plot, and how much variance the PCs capture is a secondary concern. Assuming you want your decision to be at $\log p = \log (1-p)$, this translates to $$\beta_0 + \beta_1x_1 + \beta_2x_2 = 0.$$ Let's say $x_1$ is the first principal component, and $x_2$ is the second. Then $x_2$ is the $y$-axis in your original plot. Rewriting the above, the boundary should be $$x_2 = -\frac{\beta_0}{\beta_2} - \frac{\beta_1}{\beta_2}x_1.$$ That is to say that the intercept you want is $-\frac{\beta_0}{\beta_2}$ and the slope is $-\frac{\beta_1}{\beta_2}$. Note that a such a decision is subjective, and while using $\log p = \log (1-p)$ might fit with your particular view of risk, others might like the raw probability estimate to make their own decision.
How to make sense of this PCA plot with logistic regression decision boundary (breast cancer data)?
From your question: I train the logistic regression model on the 2-d data from the PCA. I plot the decision boundary using the intercept and coefficient and it does linearly separate the data. L
How to make sense of this PCA plot with logistic regression decision boundary (breast cancer data)? From your question: I train the logistic regression model on the 2-d data from the PCA. I plot the decision boundary using the intercept and coefficient and it does linearly separate the data. Logistic regresssion is not a classifier. Its coefficients certainly do not represent a "decision boundary." You have a model that's $$ \log \frac{p}{1-p} = \beta_0 + \beta_1x_1 + \beta_2x_2 $$ Where $p$ is the probability of your outcome, and the $x$-s are your principal components. In the below code, from your notebook, you're using $\beta_0$ and $\beta_1$ as the coefficients to a line in the original predictor space, and it's just dumb luck that it's anywhere near the gap in your data points. new_model.fit(x_pca, target) y_intercept = new_model.intercept_ # <- this is beta_0 slope = new_model.coef_[0][0] # <- this is beta_1 x_axis = np.linspace(-65, 113, 178) A decision would be based on where $\log p = \log (1-p)$ or some other threshold like that. This is what's not making sense with your plot, and how much variance the PCs capture is a secondary concern. Assuming you want your decision to be at $\log p = \log (1-p)$, this translates to $$\beta_0 + \beta_1x_1 + \beta_2x_2 = 0.$$ Let's say $x_1$ is the first principal component, and $x_2$ is the second. Then $x_2$ is the $y$-axis in your original plot. Rewriting the above, the boundary should be $$x_2 = -\frac{\beta_0}{\beta_2} - \frac{\beta_1}{\beta_2}x_1.$$ That is to say that the intercept you want is $-\frac{\beta_0}{\beta_2}$ and the slope is $-\frac{\beta_1}{\beta_2}$. Note that a such a decision is subjective, and while using $\log p = \log (1-p)$ might fit with your particular view of risk, others might like the raw probability estimate to make their own decision.
How to make sense of this PCA plot with logistic regression decision boundary (breast cancer data)? From your question: I train the logistic regression model on the 2-d data from the PCA. I plot the decision boundary using the intercept and coefficient and it does linearly separate the data. L
53,240
How to make sense of this PCA plot with logistic regression decision boundary (breast cancer data)?
@MaxPower has a good answer, and I want to elaborate on his point A) in his comment: This means either one of two things: either A) your 2 PCA dimensions don't capture enough of the information contained in your raw 13 features, or B) your problem is very hard to predict, even with all the information from your 13 features. In your question, you don't show how much of the initial 13 variables is represented by the first two principal components. One thing that is easy to forget when doing PCA is the fact that there are more components left than just the first two. If for instance the 13 original variables are relatively uncorrelated, the first two principle components will only capture a part of the data. The rest will be stored in components 3 through 13. Why is this relevant? This is relevant, because your target variable might actually be explained by the third principal component. In that case, you wouldn't be able to see that using the plots you have used now. What is the takeaway? Before interpreting the PCA plot of PC1 and PC2, first take a look at the variance explained by these two components. If they together explain a lot (>90%) of the variance in the data, you can quite safely ignore the rest, but if it only explains part of the variance, you should look at the other components as well. Further remarks The link to your jupyter notebook is dead, so I can't see exactly what model you used to predict. If you used the entire PCA data, so all 13 principal components, for your prediction, it is likely that your problem falls under B). That means that there more likely wouldn't be a PC3-PC13 that does predict your target well. Because if there was a good predictor, the predicted values in your last plot likely would've been less wrong than they are now. So either: You predicted the target on just PC1 and PC2, which you cannot really do without first checking the cumulative variance explained. Your data just does not predict the target well enough. Another remark: I get a data that is linearly separable which is interesting to me since I'm doing a binary logistic regression. I'm writing an article and showing the data with a decision boundary is a good image to show that the model worked. This data is not linearly separable. At least not in the plots that you show. Yes there are two clear groups, but they are not related to your target variable. Linearly separable would be if you can divide the yellow dots from the purple dots with a linear line. This is not the case here, as you can see that the purple and yellow groups overlap. Furthermore, the decision boundary as is, does little to nothing to actually predict the correct targets, as you can see in your last plot.
How to make sense of this PCA plot with logistic regression decision boundary (breast cancer data)?
@MaxPower has a good answer, and I want to elaborate on his point A) in his comment: This means either one of two things: either A) your 2 PCA dimensions don't capture enough of the information conta
How to make sense of this PCA plot with logistic regression decision boundary (breast cancer data)? @MaxPower has a good answer, and I want to elaborate on his point A) in his comment: This means either one of two things: either A) your 2 PCA dimensions don't capture enough of the information contained in your raw 13 features, or B) your problem is very hard to predict, even with all the information from your 13 features. In your question, you don't show how much of the initial 13 variables is represented by the first two principal components. One thing that is easy to forget when doing PCA is the fact that there are more components left than just the first two. If for instance the 13 original variables are relatively uncorrelated, the first two principle components will only capture a part of the data. The rest will be stored in components 3 through 13. Why is this relevant? This is relevant, because your target variable might actually be explained by the third principal component. In that case, you wouldn't be able to see that using the plots you have used now. What is the takeaway? Before interpreting the PCA plot of PC1 and PC2, first take a look at the variance explained by these two components. If they together explain a lot (>90%) of the variance in the data, you can quite safely ignore the rest, but if it only explains part of the variance, you should look at the other components as well. Further remarks The link to your jupyter notebook is dead, so I can't see exactly what model you used to predict. If you used the entire PCA data, so all 13 principal components, for your prediction, it is likely that your problem falls under B). That means that there more likely wouldn't be a PC3-PC13 that does predict your target well. Because if there was a good predictor, the predicted values in your last plot likely would've been less wrong than they are now. So either: You predicted the target on just PC1 and PC2, which you cannot really do without first checking the cumulative variance explained. Your data just does not predict the target well enough. Another remark: I get a data that is linearly separable which is interesting to me since I'm doing a binary logistic regression. I'm writing an article and showing the data with a decision boundary is a good image to show that the model worked. This data is not linearly separable. At least not in the plots that you show. Yes there are two clear groups, but they are not related to your target variable. Linearly separable would be if you can divide the yellow dots from the purple dots with a linear line. This is not the case here, as you can see that the purple and yellow groups overlap. Furthermore, the decision boundary as is, does little to nothing to actually predict the correct targets, as you can see in your last plot.
How to make sense of this PCA plot with logistic regression decision boundary (breast cancer data)? @MaxPower has a good answer, and I want to elaborate on his point A) in his comment: This means either one of two things: either A) your 2 PCA dimensions don't capture enough of the information conta
53,241
Does uniform conditional distribution imply independence?
Independence would mean that knowing the value of $Y$ gives no information on the value of $X$. So here $X$ will be independent of $Y$ only if $X$ has a uniform marginal distribution on $[0,1]$, and the conditional distribution $X|Y$ is uniform on $[0,1]$ independent of the value of $Y$. An example be a uniform (joint) distribution over the unit square. Here are some examples using Tetris blocks: For the "S" block we have $p[X|Y=\text{middle}]=p[X]=\text{uniform}$, but $X$ is certainly not independent of $Y$. While for the "O" block we have $p[X|Y]=p[X]=\text{uniform}$, so $X$ is independent of $Y$.
53,242
Does uniform conditional distribution imply independence?
If the conditional pdf of $X$ given $Y$ is the same density function for all values of $Y$ (in the support of $f_Y(y)$), that is, $f_{X\mid Y}(x\mid y)$ equals $g(x)$ where the value of $g$ does not depend on $y$ at all, then $$f_X(x) = \int f_{X,Y}(x,y) \mathrm dy = \int f_{X\mid Y}(x\mid y)\cdot f_Y(y)\mathrm dy = g(x)\int f_Y(y)\mathrm dy = g(x),$$ that is, the unconditional pdf of $X$ is the same as the common conditional pdf of $X$ given $Y$. In particular, uniformity does not have anything to do with it at all: what we need is that it is always the same density function regardless of the value of $y$.

Suppose that $f_{X,Y}(x,y)$ has value $1$ on (the interior of) the unit square. Then $f_{X\mid Y}(x\mid y) \sim U(0,1)$ and $X$ and $Y$ are both independent $U(0,1)$ random variables.

Suppose that $f_{X,Y}(x,y)$ has value $2y$ on (the interior of) the unit square. Then $f_{X\mid Y}(x\mid y) \sim U(0,1)$, and $X \sim U(0,1)$ also. Note that $X$ and $Y$ are independent random variables but $Y$ is not a $U(0,1)$ random variable.

Suppose that $f_{X,Y}(x,y)$ has value $2x$ on (the interior of) the unit square. Then $f_{X\mid Y}(x\mid y) = 2x\mathbf 1_{\{x\colon x \in (0,1)\}}$ and the unconditional density of $X$ is the same density. $X$ and $Y$ are independent random variables but $X$ is not a $U(0,1)$ random variable.

Uniform distribution of $X$ is not needed: what is needed is that $f_{X\mid Y}(x\mid y)$ is the same for all choices of $y$.
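A quick simulation check of the second example ($f_{X,Y}(x,y)=2y$). Since $P(Y \le y) = y^2$ under the density $2y$, we can draw $Y = \sqrt{U}$ with $U \sim U(0,1)$; this just illustrates the construction, since $X$ is independent by design:

set.seed(42)
n <- 1e5
y <- sqrt(runif(n))   # marginal density 2y on (0, 1)
x <- runif(n)         # conditional (and marginal) U(0, 1)

hist(x, breaks = 30)                     # marginal of X: flat
hist(x[y > 0.2 & y < 0.3], breaks = 30)  # X within a slice of Y: still flat
cor(x, y)                                # near zero, consistent with independence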
53,243
What is the heaviest tail possible for a continuous normalizable distribution?
There is no heaviest-tailed distribution: given any distribution, we can construct one with heavier tails. Proof: Assume $f$ is any PDF, and its CDF is $F$. We can always construct another distribution $$G(x) = 1 - \sqrt{1 - F(x)}, \quad g(x) = \frac{f(x)}{2\sqrt{1 - F(x)}}$$ which has heavier tails, since: $$\int_x^\infty f(t)\, dt = 1 - F(x) < \sqrt{1 - F(x)} = 1 - G(x) = \int_x^\infty g(t) \, dt$$ for each $x$ with $0 < F(x) < 1$.
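A numeric illustration in R, taking $F$ to be the standard Cauchy CDF: the new survival function $\sqrt{1-F(x)}$ decays far more slowly than $1-F(x)$ itself.

x <- 10^(1:6)
Fbar <- pcauchy(x, lower.tail = FALSE)  # 1 - F(x), shrinks like 1/(pi*x)
Gbar <- sqrt(Fbar)                      # 1 - G(x), shrinks only like 1/sqrt(pi*x)
cbind(x, Fbar, Gbar)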
53,244
What is the heaviest tail possible for a continuous normalizable distribution?
Great question! As you point out, the Cauchy has a power-law tail, so on a log-log scale its complementary CDF is linear. But the only constraints on that function are that it never increases and that it tends to $-\infty$ in the limit. So you could swap the linear function out for a negative log, or even cook up an extreme example by inverting the increasing part of the gamma function.
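One concrete instance of the "swap in a log" idea, assuming support on $[e,\infty)$: take the complementary CDF to be $$\bar F(x) = \frac{1}{\log x}, \qquad f(x) = \frac{1}{x(\log x)^2}, \qquad x \ge e.$$ This is a valid density, since $$\int_e^\infty \frac{\mathrm dx}{x(\log x)^2} = \left[-\frac{1}{\log x}\right]_e^\infty = 1,$$ yet its tail is heavier than every power law: $x^\alpha \bar F(x) = x^\alpha/\log x \to \infty$ for every $\alpha > 0$.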
53,245
Meaning of model calibration
Let's suppose you have a set of training data and you have created a model that predicts the probability that a team will win a game. You did this, e.g., by training a binary (win/loss) target on a set of input parameters. The model outputs a prediction, which is just the probability that the team will win the game. You then generate such predictions using a separate test data set (on which you have not built your model).

You could then create "bins" or buckets of your predicted probabilities, say from 0 to 0.1, 0.1 to 0.2, ..., 0.9 to 1.0, and for all data rows that fall into each bucket work out the actual mean "target" result (treating a win = 1 and a loss = 0). If your model is "well-calibrated", the mean result in the bucket running between a predicted probability of 0 and 0.1 should be around 0.05, i.e. 5 wins if there were 100 rows of data with predicted probabilities between 0 and 0.1.

Your choice of bin size depends on how much data you have, but you would want enough points in each bin that the standard error on the mean of each bin is small.
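A minimal sketch of that binning check in R; pred and won are hypothetical names for the predicted win probabilities and the 0/1 outcomes on the test set, simulated here so the code runs:

set.seed(1)
pred <- runif(500)
won  <- rbinom(500, 1, pred)   # a well-calibrated model by construction

bins <- cut(pred, breaks = seq(0, 1, by = 0.1), include.lowest = TRUE)
data.frame(mean_predicted = tapply(pred, bins, mean),
           mean_observed  = tapply(won,  bins, mean),
           n              = as.vector(table(bins)))
# For a well-calibrated model, mean_observed tracks mean_predicted in every
# bin, up to sampling noise that shrinks as n per bin grows.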
53,246
Meaning of model calibration
If the model is well calibrated, the points will lie along the main diagonal of the diagnostic reliability diagram (or calibration curve); the closer to the diagonal, the more reliable the model. If the points fall below the diagonal, the model has over-forecast: the probabilities are too large. If the points fall above the diagonal, the model has under-forecast: the probabilities are too small.

In the line plot referred to here, the blue line represents the logistic regression model and the orange line the random forest model. The blue line is closer to the diagonal, so the logistic regression model is better calibrated (more reliable) than the random forest. We can also see that for the logistic regression model, when the predictions are larger than 0.4, the predicted probabilities are somewhat too small, indicating that the model has under-forecast there.

References: How and When to Use a Calibrated Classification Model with scikit-learn; A Guide to Calibration Plots in Python
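A sketch of drawing such a reliability diagram in R for two models; the probabilities are simulated purely for illustration (a calibrated model and an overconfident rescaling of the same scores):

set.seed(2)
p_cal  <- runif(1000)                # well-calibrated probabilities
y      <- rbinom(1000, 1, p_cal)
p_over <- plogis(2 * qlogis(p_cal))  # overconfident version of the same scores

reliability <- function(p, y, k = 10) {
  bins <- cut(p, breaks = seq(0, 1, length.out = k + 1), include.lowest = TRUE)
  cbind(pred = tapply(p, bins, mean), obs = tapply(y, bins, mean))
}
plot(reliability(p_cal, y), type = "b", col = "blue",
     xlim = c(0, 1), ylim = c(0, 1), xlab = "Predicted", ylab = "Observed")
lines(reliability(p_over, y), type = "b", col = "orange")
abline(0, 1, lty = 2)   # the diagonal of perfect calibration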
53,247
Significance in simple regression but not multiple regression
First, multiple regression does not necessarily have more power, particularly when there are so many interaction terms as you have specified. Each extra variable, each extra factor level, and each extra interaction uses up degrees of freedom, so you might decrease your ability to detect a true difference if the extra variables/factor levels/interactions are unrelated to your outcome variable.

Second, your desire to match the results of simple regression to a combination of coefficients in multiple regression suffers from the same type of problem you had in your desire to compare intercepts against group means in a previous analysis attempt. If you don't have a perfectly balanced design with the same number of cases in each group, then there is no assurance that you can match the values this way.

Third, your initial data summary shows that there are no O.franksi in any reef environments except for Alligator. So there is no way to obtain coefficients that include interactions of O.franksi with Reef; you have no data on 3 of the 4 reef environments. Hence the NA values (illustrated in the sketch below).

You seem to have done a lot of work to collect these data. Given their nature, you are probably at some type of academic institution where there would almost certainly be local statistical expertise. As much fun as it is for me to answer questions on this site, it might be better for you to identify and start working with someone nearby who can go over the details of your data at close hand and help you analyze them in the best way to get at the scientific questions you are asking.
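A tiny illustration of the third point: when a factor combination has no observations, lm() returns NA for the corresponding interaction coefficients. The species and reef labels below are hypothetical stand-ins:

d <- expand.grid(species = c("faveolata", "franksi"),
                 reef    = c("Alligator", "B", "C", "D"))
d <- d[rep(1:8, each = 5), ]
d <- subset(d, !(species == "franksi" & reef != "Alligator"))  # empty cells
set.seed(3)
d$growth <- rnorm(nrow(d))
coef(lm(growth ~ species * reef, data = d))
# The speciesfranksi:reefB/C/D terms come back NA: they are inestimable
# because those cells contain no data.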
53,248
Fat tail? Short tail? Long tail? Where do I go from here?
Growth rates must be distributed as some variation of the Cauchy distribution. I have written a series of papers on this. The Cauchy distribution has no mean, so it has no variance or covariance. You can find my author page at https://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=1541471 Start with the paper titled "The distribution of returns," and then switch to the paper on Bayesian methods. Generally speaking, there is no admissible non-Bayesian solution, though in specific cases there is a maximum likelihood solution that can be used if a null hypothesis method is required. The Bayesian likelihood function is always minimally sufficient. You can communicate with me through the address on the author page. Because there is no variance or covariance, ANOVA and ANCOVA are impossible.

EDIT With regard to the comments:

1) If I say it's Cauchy, and therefore rule out ANOVA and ANCOVA, what are my options? Bayesian regression is still available. Your likelihood function would be $$\frac{1}{\pi}\frac{\sigma}{\sigma^2+(y-\beta_0-\beta_1x_1-\beta_2x_2-\dots-\beta_nx_n)^2}.$$ The meaning is very different from OLS, though. OLS comes from a convergent process, such as water going down a drain. The Cauchy case can instead be conceived of as a double pendulum problem, and as such has limited predictive capacity. For example, if you had $y|x$ you could think of $x$ as the upper pendulum and $y$ as the lower pendulum attached to the top pendulum. So, while $y$ is affected by the movement of $x$, it does not mean they are even moving in the same direction with a positive correlation; a pendulum swinging to the left could cause the other pendulum, from momentum, to swing to the right. There is a tight linkage between the double pendulum problem, which is the first real observed problem in chaos theory, and regression in this case. The proper interpretation is that, for example, if $y=1.1x$ then 50% of the time $y$ will be greater than $1.1x$ and 50% of the time it will be less than $1.1x$. You may be able to make stronger statements if there are other properties in your system, such as non-negativity.

2) I've read that Cauchy is problematic if residuals are almost normal. This does not matter. You can find, or even construct, cases where the Cauchy distribution is indistinguishable from a normal distribution. Generally speaking there is no admissible non-Bayesian solution for most standard problems. This is a problem for someone trained only in Frequentist methods, but is not a problem per se. If a null hypothesis is required by the nature of the problem, then the only close solutions would be quantile regression or Theil's regression. The problem with either is that, in the above equation, $x_1$ and $x_2$ are not independent, but they are also not correlated. The question is not Gaussian versus Cauchy by some empirical test, but which you should have from theory. A certain percentage of the time, data drawn from a pure normal distribution will falsify a test of normality through chance alone. While it is sometimes true that we do not know the likelihood function and must test for it, sometimes we do. This is a case where we do.

3) Is this really a growth rate if I only have initial and final length? Yes, that is a growth rate; it is just a growth rate with limited observations per creature.

Not asked: Is there anything similar to ANOVA or ANCOVA? The answer is "it is unclear." If you will notice, there is only one scale parameter in the likelihood, and this does not depend on the number of variables. The scale parameter is a composite of the separate scale parameters, but it is not clear that there is any way to take advantage of this.
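A sketch of fitting the Cauchy regression likelihood above by maximum likelihood in R (a Bayesian fit would use the same likelihood with priors); the data are simulated and the names illustrative:

set.seed(4)
x <- runif(200)
y <- 1 + 2 * x + rcauchy(200, scale = 0.5)

negloglik <- function(par) {
  mu <- par[1] + par[2] * x
  -sum(dcauchy(y, location = mu, scale = exp(par[3]), log = TRUE))
}
fit <- optim(c(0, 0, 0), negloglik)
c(beta0 = fit$par[1], beta1 = fit$par[2], sigma = exp(fit$par[3]))
# Note that lm(y ~ x) would behave badly here: ordinary least squares can
# be dominated by a handful of extreme Cauchy draws.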
53,249
Fat tail? Short tail? Long tail? Where do I go from here?
Your Q-Q plot doesn't look like it has a fat tail. I'll show you what a fat tail looks like: Your tail is like a Victoria's Secret model compared to the above. I wish some of my model residuals had tails like yours.
53,250
Fat tail? Short tail? Long tail? Where do I go from here?
It seems that bootstrapping, as suggested by @Tim in comments, might be a good way to proceed. Even if your statistical software doesn't directly support bootstrapping, it's not too hard to roll your own.

For example, say you have all data for each individual in a single row (species; 3 treatment types; starting and ending length, width, and height; block ID) and you have 304 rows. You set up an index vector of length 304, and for each bootstrap fill it with a random sample, with replacement, from the integers 1 to 304. You then take those indexed rows from your full data set for 304 rows total (with some original rows omitted, some taken once, and others taken 2 or more times). Then do your analysis and store the regression coefficients. Do this 999 times. For each regression coefficient, average the 999 results; then put its values for the 999 repetitions in rank order; the 25th and 975th in order set the 95% confidence limits. Unless your analysis depends heavily on having a balanced design, this should suffice (a sketch of the loop follows below).

This will not work if there is an underlying Cauchy problem, but I'm not convinced that you have this problem with your data set. The Cauchy problem comes from trying to take a ratio of 2 random variables in which you run a risk of dividing by zero or by numbers close to zero. Although this can be a serious issue in the types of economic time series addressed by @DaveHarris in the documents linked from his answer, in your case the lengths, widths and heights are all positive and far from zero, so you don't seem to be in that situation. Raw differences between start and finish, or logs of start/finish ratios, should be sufficiently well behaved that you can analyze your data with the bootstrap, which is a well-respected way to deal with your type of situation when you can't count on normal distributions of the data of interest.
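A minimal R sketch of that case-resampling loop; dat here is simulated stand-in data with the same flavor (growth outcome, species, treatment), so substitute your own 304-row data frame and model formula:

set.seed(5)
dat <- data.frame(growth  = rnorm(304),
                  species = sample(c("A", "B"), 304, replace = TRUE),
                  treat   = sample(c("t1", "t2", "t3"), 304, replace = TRUE))
B <- 999
coefs <- NULL
for (b in 1:B) {
  idx   <- sample.int(nrow(dat), replace = TRUE)   # resample whole rows
  coefs <- rbind(coefs, coef(lm(growth ~ species * treat, data = dat[idx, ])))
}
colMeans(coefs)                                      # bootstrap mean of each coefficient
apply(coefs, 2, quantile, probs = c(0.025, 0.975))   # percentile 95% CI
# In practice you may need to guard against resamples that drop an entire
# factor level, which would change the length of the coefficient vector.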
53,251
Characteristic of good binning for weight of evidence algorithm
The 5% condition is a rule of thumb for Weight of Evidence (WOE) binning. In general, a good WOE binning of a variable should also have the following characteristics:

1. Monotonic increase/decrease in WOE across consecutive bins. This is because WOE is used primarily in logistic/linear regression models, which assume a linear relationship between the log odds and the independent variables.

2. WOE values for different bins should be as diverse as possible. Hence, you should merge consecutive bins that have similar WOE values.

Further, if you wish to use an automated approach to WOE binning, check out the following package in R: https://CRAN.R-project.org/package=woeR It lets you choose the minimum percentage of observations in each class, the number of bins you want to start with, and the WOE cutoff for merging consecutive bins.

P.S.: I authored the above package in R.
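For reference, a hand-rolled WOE computation for a binned numeric variable, using the common credit-scoring convention $\text{WOE}_i = \ln(\%\text{good}_i / \%\text{bad}_i)$ (sign conventions vary between packages); the data are simulated:

set.seed(6)
x   <- rnorm(2000)
bad <- rbinom(2000, 1, plogis(x))   # event rate rises with x
bin <- cut(x, quantile(x, probs = seq(0, 1, 0.2)), include.lowest = TRUE)

good_i <- tapply(1 - bad, bin, sum)
bad_i  <- tapply(bad,     bin, sum)
log((good_i / sum(good_i)) / (bad_i / sum(bad_i)))
# With this construction WOE decreases monotonically across bins; bins with
# similar WOE values would be candidates for merging.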
53,252
What is the deeper intuition behind the symmetric proposal distribution in the Metropolis-Hastings Algorithm?
1) The Normal and Uniform are symmetric probability density functions themselves; is this notion of "symmetry" the same as the "symmetry" above?

Both distributions are symmetric around their mean. But the symmetry in Metropolis-Hastings is that $q(x|y)=q(y|x)$, which makes the ratio cancel in the Metropolis-Hastings acceptance probability. If one uses a Normal distribution not centered at the previous value in the Metropolis-Hastings proposal (as, e.g., in the Langevin version), the Normal distribution remains symmetric as a distribution, but the proposal distribution is no longer symmetric and hence it must appear in the Metropolis-Hastings acceptance probability.

2) Is there an intuitive way of seeing the deeper meaning behind the symmetry formula above? Why is it needed?

There is no particular depth in this special case; it simply makes life easier by avoiding the ratio of the proposals. It may save time, or it may avoid computing complex or intractable densities. Note also that the symmetry depends on the parameterisation of the model: if one changes the parameterisation, a Jacobian appears and kills the symmetry.
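A minimal random-walk Metropolis sketch in R showing the cancellation: with a Normal proposal centered at the current state, $q(x'|x)=q(x|x')$, so the acceptance probability reduces to $\min(1, p(x')/p(x))$. The target here is a standard Normal purely for illustration:

set.seed(7)
log_p <- function(x) dnorm(x, log = TRUE)   # log target density
n <- 5000
chain <- numeric(n)
x <- 0
for (i in 1:n) {
  prop <- rnorm(1, mean = x, sd = 1)        # symmetric proposal: no q-ratio needed
  if (log(runif(1)) < log_p(prop) - log_p(x)) x <- prop
  chain[i] <- x
}
c(mean(chain), sd(chain))                   # should be near 0 and 1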
53,253
Is Levene's test necessary?
Regardless of the good points made in the comments above about whether you condition your testing procedure on the results of a preliminary investigation (e.g., choosing Welch vs. standard t-tests based on the outcome of Levene's test), I suspect that the reason for this difference between ANOVA/t-tests (i.e., linear models where all of the predictors are categorical) and other linear models such as regression, ANCOVA, etc. (i.e., linear models with at least one continuous predictor) is that questions of heteroscedasticity etc. apply to the conditional distribution of the data, i.e. the distribution of the $\epsilon$ in $y=\beta_0 + \beta_1 x + \ldots + \epsilon$.

If you have all-categorical predictors, you can test for heteroscedasticity (and other issues such as non-normality) by dividing the data into unique combinations of categories (i.e., in the t-test, compare the variability in each group). If you have continuous predictors, then the only way to examine the conditional distribution is to fit the model first and then evaluate the distribution of the residuals. Furthermore, even after you have the residuals, there generally aren't discrete groups in the data to which you could apply Levene's test.
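A quick sketch of the two workflows in R (simulated data; the leveneTest call assumes the car package is installed):

set.seed(8)
g  <- gl(2, 50)                               # categorical predictor: two groups
y1 <- rnorm(100, sd = ifelse(g == 1, 1, 3))
car::leveneTest(y1 ~ g)                       # spread can be tested group-wise

x   <- runif(100)
y2  <- 1 + 2 * x + rnorm(100, sd = 0.5 + x)   # variance grows with x
fit <- lm(y2 ~ x)
plot(fitted(fit), resid(fit))                 # fit first, then inspect residuals;
                                              # a fan shape suggests heteroscedasticity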
53,254
What is the difference between accuracy and precision?
(Just for reference, I am posting my comments as an answer. Note that the first version of the question did not include the formula.) "Accuracy" and "precision" are general terms throughout science. A good way to internalize the difference is the common "bullseye diagram". In machine learning/statistics as a whole, accuracy vs. precision is analogous to bias vs. variance. However, in the particular context of binary classification* these terms have very specific definitions. The chart at that Wikipedia page gives these, which are $$\mathrm{Accuracy}=\frac{\mathrm{True}}{\mathrm{Total}} \text{ , } \mathrm{Precision}=\frac{\mathrm{True\;Positive}}{\mathrm{All\;Predicted\;Positive}}, $$ i.e. the fraction of cases that are correctly classified vs. the fraction of predicted positives that are truly positive. (*Note that this context is much more specialized than simply "machine learning".)
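Computing both from a confusion matrix in R (labels simulated purely for illustration):

set.seed(9)
truth <- rbinom(200, 1, 0.4)
pred  <- ifelse(runif(200) < 0.8, truth, 1 - truth)  # classifier right ~80% of the time

tp <- sum(pred == 1 & truth == 1)
fp <- sum(pred == 1 & truth == 0)
c(accuracy  = mean(pred == truth),   # true / total
  precision = tp / (tp + fp))        # true positives / predicted positives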
53,255
Why do Statistics, Machine learning and Operations research stand out as separate entities
In machine learning "programming" = coding up an algorithm; in operations research "programming" = optimization? More seriously, I think the differences are more historical lineage and application area than techniques per se. One perspective on the cultures of (academic) statistics vs. machine learning that I found interesting is "The Stats Handicap".

Statistics is the oldest, and originated out of mathematics and probability, perhaps emerging as a distinct discipline in the late 19th century (though much of the theory is older). Of the three, statistics is perhaps the most associated with "academic science", and is certainly the most concerned with rigorous approaches to experimental design and data collection.

Operations research originates closer to WWII, and is generally associated with large organizations (e.g. military, logistics/supply-chain, industrial engineering), focusing on managing and optimizing their "operations", as it were. (In terms of "data science" traditions with a long history, another big one would be econometrics. Wikipedia says it's economics, while CV says it's statistics, for what that's worth!)

Machine learning is the most recent, but to me is more ambiguous, and at least in the popular media it is essentially a re-branding of "AI". This broader sense includes many strands, including computer vision and probabilistic robotics. Computer science is an integral part of all of these, however.

Finally, I would say that buzzwords like "Data Science" and "Analytics" are largely marketing terms. They are less likely to be used between members of these communities than when communicating with outsiders (or when outsiders are talking between themselves).
53,256
Why do Statistics, Machine learning and Operations research stand out as separate entities
In my view, the differences are more cultural than methodological. All three share a common mathematical foundation in probability theory, optimization, and linear algebra. I disagree that any one of these is more "rigorous" than any other. Each field has its PhDs who do mind-bendingly rigorous and difficult research. Each also has practitioners who utilize methods and heuristics to get the job done. As for "analytics", there has been a concerted effort by INFORMS (the OR/MS society of the USA) to make the definition of "analytics" more rigorous, to the point of developing a certification process (Certified Analytics Professional). The material for the exam covers far more than just statistics, machine learning, or operations research.
53,257
P-value for point biserial correlation in R
The point-biserial correlation is equivalent to calculating the Pearson correlation between a continuous and a dichotomous variable (the latter needs to be encoded with 0 and 1). Therefore, you can just use the standard cor.test function in R, which will output the correlation, a 95% confidence interval, and an independent t-test with associated p-value:

set.seed(1)
x <- sample.int(100, 50, replace = TRUE)
y <- sample(c(0, 1), 50, replace = TRUE)
cor.test(x, y)

This yields a correlation of $r = 0.202$, which is not significant ($t = 1.429$, $\text{df} = 48$, $p = 0.1595$):

        Pearson's product-moment correlation

data:  x and y
t = 1.429, df = 48, p-value = 0.1595
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
 -0.08088534  0.45478598
sample estimates:
      cor
0.2020105

As @sal-mangiafico and @igor-p point out, the function biserial.cor from the ltm package produces slightly different results. This is because cor.test uses the population standard deviation, whereas biserial.cor uses the sample standard deviation. Furthermore, the result of biserial.cor has the opposite sign from the result of cor.test. This can be adjusted by specifying the argument level=2 in biserial.cor.
53,258
P-value for point biserial correlation in R
In response to @user9413061, I think I discovered the source of the problem. In the standard definition of biserial correlation, the population standard deviation is used, whereas ltm::biserial.cor uses the sample standard deviation. In the following, a function sd.pop is defined to calculate the population standard deviation, and biserial.cor.new is defined to be the same as ltm::biserial.cor with sd.pop used instead of sd. I think biserial.cor.new will return the same result as cor.test.

sd.pop <- function(x) { sd(x) * sqrt((length(x) - 1) / length(x)) }

biserial.cor.new <- function(x, y, use = c("all.obs", "complete.obs"), level = 1) {
  if (!is.numeric(x))
    stop("'x' must be a numeric variable.\n")
  y <- as.factor(y)
  if (length(levs <- levels(y)) > 2)
    stop("'y' must be a dichotomous variable.\n")
  if (length(x) != length(y))
    stop("'x' and 'y' do not have the same length")
  use <- match.arg(use)
  if (use == "complete.obs") {
    cc.ind <- complete.cases(x, y)
    x <- x[cc.ind]
    y <- y[cc.ind]
  }
  ind <- y == levs[level]
  diff.mu <- mean(x[ind]) - mean(x[!ind])
  prob <- mean(ind)
  diff.mu * sqrt(prob * (1 - prob)) / sd.pop(x)
}

And an example:

x <- c(3, 4, 5, 6, 7, 5, 6, 7, 8, 9)
y <- c(0, 0, 0, 0, 0, 1, 1, 1, 1, 1)
library(ltm)

### DIFFERENT RESULTS WITH ltm::biserial.cor
biserial.cor(x, y, level = 2)
### [1] 0.5477226
cor.test(x, y)
### Pearson's product-moment correlation
### sample estimates:
###       cor
### 0.5773503

### SAME RESULTS WITH the new function
biserial.cor.new(x, y, level = 2)
### [1] 0.5773503
cor.test(x, y)
### Pearson's product-moment correlation
### sample estimates:
###       cor
### 0.5773503
53,259
P-value for point biserial correlation in R
To my understanding, you don't have to code the dichotomous variable with 0 and 1; using other values results in exactly the same output. Try for example:

x  <- 1:100
y  <- rep(c(0, 1), 50)
y2 <- rep(c(-786, 345), 50)
cor.test(x, y)
cor.test(x, y2)

Both give you an r of 0.01732137. The only thing that can happen by coding the dichotomous variable differently is that you get -0.01732137, which will be the case if the first number is bigger than the second, e.g.

y3 <- rep(c(1, 0), 50)
cor.test(x, y3)

results in -0.01732137.

Furthermore, I read on different pages that "the point-biserial correlation is equivalent to calculating the Pearson correlation between a continuous and a dichotomous variable", but in fact I get different results if I conduct a Pearson and a point-biserial correlation on the same data. An example:

x <- 1:100
y <- rep(c(0, 1), 50)
cor.test(x, y)

gives me 0.01732137, but biserial.cor(x, y) results in -0.01723455. I understand that it is okay to get positive and negative values, but the absolute value should be the same, which is not the case here. The results are also different if I use other data, e.g. x <- rnorm(100, 100, 15) instead of x <- 1:100. For this reason I am unsure whether it is acceptable to use cor.test() and report that you have conducted a point-biserial correlation.
53,260
Is there a 3D neural network and how to code it in R?
A '3d network' might commonly be described as a network with 2d layers. It's not fundamentally different from any other network because the principles of activation are the same. The activation of each unit is a linear combination of its inputs, passed through a (typically nonlinear) activation function. In one sense, the dimensionality is just a property of the drawing. You could draw the units anywhere you wanted (using however many dimensions you wanted) and the function would be the same. Instead, it's the connectivity that matters. There are a couple of reasons to 'organize' network layers into well-defined shapes. One is for convenience in thinking about things (e.g. in the case of processing 2d inputs like images). This is particularly the case when connectivity is constrained. For example, in a convolutional network that processes images, each unit receives connections from a local 'patch' of units in the previous layer. Thinking about the layers as 2d makes intuitive sense here because it lets us talk about things like 'local patches'. But, as before, you could completely scramble the 'locations' and the function would be the same as long as the connectivity is the same (a two-line demonstration follows below). In convolutional networks (e.g. for image processing), there's an additional benefit to representing the layers in 2d. Because of the way these networks constrain the weights/connectivity, representing the layers in 2d makes it possible to use the 2d convolution operation when computing the activations of all units in a layer. Although it doesn't change the fundamental function of the network, it's a very computationally efficient way of implementing things. This is just one example, and there could be other cases where representing units on a grid with some dimensionality makes it possible to play computational tricks that speed things up (e.g. in the blog post you linked). Actually, the network in the blog post is a different beast because it's a spiking network that tries to emulate biological neurons slightly more closely than a standard artificial neural net. But that's a whole different issue.
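A two-line R demonstration of the scrambling point (a toy fully connected layer; the sizes are arbitrary illustrative choices): permuting the inputs' 'locations' while permuting the connections the same way leaves the layer's output unchanged.
set.seed(3)
x <- rnorm(9)                    # a 3x3 'image', flattened; the layout is arbitrary
W <- matrix(rnorm(4 * 9), 4, 9)  # 4 units, each connected to all 9 inputs
perm <- sample(9)                # scramble the input 'locations'
all.equal(as.vector(W %*% x), as.vector(W[, perm] %*% x[perm]))  # TRUE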
53,261
Is there a 3D neural network and how to code it in R?
For artificial neural networks (the kind employed in machine learning) there is no "dimensionality". As @user20160 notes, convolution nets are often presented in 2D to help us understand the operations of the network, but there is no position in space for any of the units, just connections to different parts of an image. In the website you link to, the neural network has a connectivity rule that is defined by unit-to-unit hop distances, meaning that there is an implicit 3D spatial location for each unit. To answer your question: I don't think there are any packages in R for this type of architecture (I haven't done an exhaustive search though). But, the NEURON or Brian simulation environments could potentially let you do it with simple integrate-and-fire units. As well, implementing it in R or python wouldn't be that hard - just define a point in space for each neuron and set a connectivity rule based on distance (a minimal sketch follows below). I would note: whether such a design is useful is an open question. We see some patterns of connectivity that depend on distance in the real brain (see e.g. this paper), so there may be a good reason to do it. As far as I know, though, no one has ever actually demonstrated a good reason from a machine learning perspective.
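To make that concrete, here is a minimal sketch in R (the number of units, the distance threshold, and the uniform cube are all illustrative assumptions, not anything from the linked post):
set.seed(1)
n_units <- 200
pos <- matrix(runif(n_units * 3), ncol = 3)   # each unit gets a random 3D position
threshold <- 0.2
D <- as.matrix(dist(pos))                     # pairwise Euclidean distances
A <- (D < threshold) & (D > 0)                # connect units closer than the threshold
mean(rowSums(A))                              # average number of neighbours per unit
The adjacency matrix A could then serve as a (sparse) weight mask, so that only spatially nearby units exchange activation.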
53,262
Simulating random variables from a discrete distribution
This answer develops a simple procedure to generate values from this distribution. It illustrates the procedure, analyzes its scope of application (that is, for which $p$ it might be considered a practical method), and provides executable code.

The Idea

Because $$x^2 = 2\binom{x}{2} + \binom{x}{1},$$ consider the distributions $f_{p;m}$ given by $$f_{p;m}(x) \propto \binom{x}{m-1}p^x$$ for $m=3$ and $m=2$. A recent thread on inverse sampling demonstrates that these distributions count the number of observations of independent Bernoulli$(1-p)$ variables needed before first seeing $m$ successes, with $x+1$ equal to that number. It also shows that the normalizing constant is $$C(p;m)=\sum_{x=m-1}^\infty \binom{x}{m-1}p^x = \frac{p^{m-1}}{(1-p)^m}.$$ Consider the probabilities in the question, $$x^2 p^x = \left( 2\binom{x}{2} + \binom{x}{1} \right)p^x = 2 \binom{x}{2}p^x + \binom{x}{1} p^x =2 C(p;3) f_{p;3}(x) + C(p;2) f_{p;2}(x).$$ Consequently, the given distribution is a mixture of $f_{p;3}$ and $f_{p;2}$. The proportions are as $$2C(p;3):C(p;2) = 2p:(1-p).$$ It is simple to sample from a mixture: generate an independent uniform variate $u$ and draw $x$ from $f_{p;2}$ when $u \lt (1-p)/(2p+1-p)$; that is, when $u(1+p) \lt 1-p$, and otherwise draw $x$ from $f_{p;3}$. (It is evident that this method generalizes: many probability distributions where the chance of $x$ is of the form $P(x)p^x$ for a polynomial $P$, such as $P(x)=x^2$ here, can be expressed as a mixture of these inverse-sampling distributions.)

The Algorithm

These considerations lead to the following simple algorithm to generate one realization of the desired distribution:

Let U ~ Uniform(0,1+p)
If (U < 1-p) then m = 2 else m = 3
x = 0
While (m > 0) {
  x = x + 1
  Let Z ~ Bernoulli(1-p)
  m = m - Z
}
Return x-1

These histograms show simulations (based on 100,000 iterations) and the true distribution for a range of values of $p$.

Analysis

How efficient is this? The expectation of $x+1$ under the distribution $f_{p;m}$ is readily computed; it equals $m/(1-p)$. Therefore the expected number of trials (that is, values of Z to generate in the algorithm) is $$\left((1-p) \frac{2}{1-p} + (2p) \frac{3}{1-p}\right) / (1-p+2p) = 2 \frac{1+2p}{1-p^2}.$$ Add one more for generating U. The total is close to $3$ for small values of $p$. As $p$ approaches $1$, this count asymptotically is $$1 + 2\frac{1 + 2p}{(1-p)(1+p)} \approx \frac{3}{1-p}.$$ This shows us that the algorithm will, on the average, be reasonably quick for $p \lt 2/3$ (taking up to ten easy steps) and not too bad for $p \lt 0.97$ (taking under a hundred steps).

Code

Here is the R code used to implement the algorithm and produce the figures. A $\chi^2$ test will show that the simulated results do not differ significantly from the expected frequencies.

sample <- function(p) {
  m <- ifelse(runif(1, max=1+p) < 1-p, 2, 3)
  x <- 0
  while (m > 0) {
    x <- x + 1
    m <- m - (runif(1) > p)
  }
  return(x-1)
}
n <- 1e5
set.seed(17)
par(mfcol=c(2,3))
for (p in c(1/5, 1/2, 9/10)) {
  # Simulate and summarize.
  x <- replicate(n, sample(p))
  y <- table(x)
  # Compute the true distribution for comparison.
  k <- as.numeric(names(y))
  theta <- sapply(k, function(i) i^2 * p^i) * (1-p)^3 / (p^2 + p)
  names(theta) <- names(y)
  # Plot both.
  barplot(y/n, main=paste("Simulation for", format(p, digits=2)), border="#00000010")
  barplot(theta, main=paste("Distribution for", format(p, digits=2)), border="#00000010")
}
53,263
Simulating random variables from a discrete distribution
@dsaxton's approach is known as inverse transform sampling and is probably the way to go for a problem like this. To be a bit more explicit, the approach is: Draw $u$ from a uniform distribution on (0,1). Compute $x = F^{-1}(u)$ where $F^{-1}$ is the inverse of the cumulative distribution function. Computing $x = F^{-1}(u)$ is equivalent to finding the integer $x$ that is the solution to: $$ \text{minimize} \quad x \quad \text{subject to} \quad \sum_{j=0}^x \frac{(1 - p)^3}{p(1+p)} j^2p^j \geq u $$ Quick pseudo code to do this numerically (an R sketch follows below): Construct a vector $\boldsymbol{m}$ such that $m_j = \frac{(1 - p)^3}{p(1+p)} j^2p^j$. Create a vector $\boldsymbol{c}$ such that $c_j = \sum_{k=0}^j m_k$. Find the minimum index $x$ such that $c_x \geq u$.
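A minimal R implementation of that pseudo code (a sketch; the truncation point K is my assumption: the support is infinite, so K must be chosen large enough that the tail mass beyond it is negligible):
rdist <- function(nsim, p, K = 1000) {
  j <- 0:K
  m <- (1 - p)^3 / (p * (1 + p)) * j^2 * p^j   # probability masses m_j
  cdf <- cumsum(m)                             # cumulative vector c
  u <- runif(nsim)
  # smallest j whose cumulative mass reaches u (ties have probability 0)
  j[findInterval(u, cdf) + 1]
}
x <- rdist(1e5, p = 0.5)
mean(x)   # should be close to the theoretical mean, about 4.33 for p = 0.5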
53,264
Simulating random variables from a discrete distribution
Draw $u$ from a uniform$(0, 1)$ distribution and let $x$ be the smallest value of $k$ for which $\sum_{j=0}^{k} \frac{(1 - p)^3}{p (1 + p)} j^2 p^j > u$. Then $x$ will be a realization from the desired distribution.
53,265
Convergence in distribution of the following sequence of random variables
The MGF of a Beta distribution is: $$1+\sum_{k=1}^{\infty}\left(\prod_{r=0}^{k-1} \frac{\alpha+r}{\alpha+\beta+r}\right)\frac{t^k}{k!}$$ As $n\to \infty$, we see that $$r,\alpha,\beta>0 \implies\frac{\alpha/n+r}{\alpha/n+\beta/n+r} \to 1$$ For $r=0$ we get: $$\lim_{n\to \infty} \frac{\alpha/n}{\alpha/n+\beta/n} = \frac{\alpha}{\alpha+\beta} = E[X_1]$$ Putting this together (passing the limit inside the sum is justified by dominated convergence, since each coefficient in parentheses lies in $[0,1]$ and $\sum_k |t|^k/k!$ converges), we see that: $$ \lim_{n\to \infty} \left[1+\sum_{k=1}^{\infty}\left(\prod_{r=0}^{k-1}\frac{\alpha/n+r}{\alpha/n+\beta/n+r}\right)\frac{t^k}{k!}\right] = 1+E[X_1]\sum_{k=1}^{\infty}\frac{t^k}{k!} =$$ $$ (1-E[X_1])+E[X_1]e^t$$ The MGF of a $\text{Bernoulli}(p)$ is: $$(1-p)+pe^t$$ By comparison, we see that the MGF of the Beta converges to the MGF of a $\text{Bernoulli}\left(\frac{\alpha}{\alpha+\beta}\right)$. Since pointwise convergence of MGFs on an open interval around $0$ implies convergence in distribution (this is the MGF analogue of Lévy's continuity theorem), we can conclude, as @Henry alluded to in the comments, that the limiting distribution is indeed a $\text{Bernoulli}\left(\frac{\alpha}{\alpha+\beta}\right)$.
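A quick numerical check in R (the parameter values are chosen arbitrarily for illustration):
alpha <- 2; beta <- 1; n <- 1e4
x <- rbeta(1e5, alpha / n, beta / n)
mean(x > 0.5)                      # close to alpha/(alpha+beta) = 2/3
mean(x < 1e-6 | x > 1 - 1e-6)      # nearly all mass piles up at 0 and 1
For large n almost every draw is (numerically) 0 or 1, and the fraction near 1 approaches $\alpha/(\alpha+\beta)$, consistent with the Bernoulli limit.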
53,266
Convergence in distribution of the following sequence of random variables
It converges in distribution to a Bernoulli variable with parameter $\alpha/(\alpha+\beta)$. This figure shows the Beta distributions in the case $\alpha=2,\beta=1$ for $n=1,4,16,64,$ and $\infty$. They settle down to a distribution with a jump of $\beta/(\alpha+\beta)=1/3$ at $0$ and another jump of $\alpha/(\alpha+\beta) = 2/3$ at $1$. This is a Bernoulli$(2/3)$ distribution. The following sketches a rigorous demonstration using only elementary techniques. The idea is to break the Beta integral into three parts: a portion near zero, a portion near one, and everything in between. We can easily approximate the integrals at the ends using elementary power integrals. The middle integral becomes much smaller than the two ends, and so eventually can be neglected. This is obvious when you look at the density function: as soon as $\alpha/n$ drops below $1$, the density shoots up to infinity at the left end. Similarly, as soon as $\beta/n$ drops below $1$, the density shoots to infinity at the right end. This produces two "limbs" of a U-shaped distribution. The limbs dominate the probability. All that we need to do is (1) show that their relative areas approach a limiting value and (2) compute that limiting value. For those who might be new to such arguments, here are some details. Let $1/2 \ge \epsilon \gt 0$. ($\epsilon$ is going to determine how close to an end of the interval $[0,1]$ we will be.) We will derive three numerical relationships associated with the Beta PDF $x^{\alpha/n-1}(1-x)^{\beta/n-1}$. (Notice the lack of the normalizing constant: the trick is to ignore it by looking at relative probabilities.) First, let $\gamma$ be any number. From $$\log(\epsilon^{\gamma/n}) = \frac{\gamma}{n}\log(\epsilon) \xrightarrow{n\to\infty} 0$$ we conclude that $\epsilon^{\gamma/n}$ can be made as close to $\exp(0)=1$ as we like. Second, for $0 \le x \le \epsilon$, a similar argument yields $$|\log((1-x)^{\beta/n-1})| = |1 - \beta/n|\,|\log(1-x)| \le |1 - \beta/n|\,|\log(1-\epsilon)| \le 2\epsilon\,|1-\beta/n|,$$ which is at most $2\epsilon$ once $n \ge \beta/2$. Consequently, by taking $\epsilon$ small (and $n$ at least moderately large), the value of $(1-x)^{\beta/n-1}$ can be made as close to $\exp(0)=1$ as we desire, uniformly over $[0,\epsilon]$. This will later enable us to ignore the contribution of this term to the left-hand limb of the integral.
Third, $$\int_0^\epsilon x^{\alpha/n-1} dx = \frac{n}{\alpha}\epsilon^{\alpha/n}.$$ Apply these results to the integral of the product: $$\frac{n}{\alpha}\epsilon^{\alpha/n}= \int_0^\epsilon x^{\alpha/n-1}(1)dx \approx \int_0^\epsilon x^{\alpha/n-1}(1-x)^{\beta/n-1}dx.$$ There's no need to repeat the work for the other limb: the change of variable $x \to 1-x$ interchanges the roles of $\alpha$ and $\beta$, allowing us immediately to write the equivalent approximation $$\frac{n}{\beta}\epsilon^{\beta/n} \approx \int_{1-\epsilon}^1 x^{\alpha/n-1}(1-x)^{\beta/n-1}dx.$$ Furthermore, applying the foregoing to the case $\epsilon=1/2$ shows us that for sufficiently large $n$ $$\int_\epsilon^{1/2} x^{\alpha/n-1} (1-x)^{\beta/n - 1} dx \approx \frac{n}{\alpha}\left(\left(\frac{1}{2}\right)^{\alpha/n} - \epsilon^{\alpha/n}\right) \ll \frac{n}{\alpha}\epsilon^{\alpha/n}.$$ Again applying the change of variable and adding that result to the preceding result we obtain $$\int_{\epsilon}^{1-\epsilon} x^{\alpha/n-1}(1-x)^{\beta/n-1}dx \ll \frac{n}{\alpha}\epsilon^{\alpha/n} + \frac{n}{\beta}\epsilon^{\beta/n}.$$ In other words, for large enough $n$ essentially all the probability of a Beta$(\alpha/n,\beta/n)$ distribution is concentrated in the terminal intervals $[0,\epsilon)$ and $(1-\epsilon, 1]$. The relative probability of the right hand interval, compared to the total probability, therefore comes arbitrarily close to $$\Pr((1-\epsilon, 1]) \approx \frac{\frac{n}{\beta}\epsilon^{\beta/n}}{\frac{n}{\beta}\epsilon^{\beta/n} + \frac{n}{\alpha}\epsilon^{\alpha/n}} = \frac{\alpha}{\alpha + \beta\epsilon^{(\alpha-\beta)/n} } \xrightarrow{n\to\infty}\frac{\alpha}{\alpha+\beta}$$ and, similarly, the relative probability of the left hand interval comes arbitrarily close to $\beta/(\alpha+\beta)$, QED.
53,267
What is the perplexity of a mini-language of numbers [0-9] where 0 has prob 10 times the other numbers?
The reason you get the wrong answer is that the way the calculation has been described in the book is a little confusing. The book says "imagine a string of digits of length $N$". This means a long string of digits, not just a string of $10$ digits. Imagine a long string of digits from the new language. On average, in $19$ digits from this sequence, you have $10$ zeros and $1$ of each of the other numbers, because $0$ occurs ten times as often as each of the other digits. Imagine dividing your long sequence into blocks of length $19$. Say there are $M$ of these blocks. Then the perplexity is $$\left(\left(\frac{10}{19}\right)^{10M}\left(\frac{1}{19}\right)^{9M}\right)^{-1/19M}$$ and just like in the example in the book, the $M$'s cancel, so the perplexity is: $$\left(\left(\frac{10}{19}\right)^{10}\left(\frac{1}{19}\right)^{9}\right)^{-1/19}$$
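Evaluating the expression numerically (a quick check in R):
p0 <- 10/19; p_other <- 1/19
(p0^10 * p_other^9)^(-1/19)   # about 5.65
So the weighted language has a perplexity of roughly 5.65, compared with 10 for the uniform ten-digit language; skewing the distribution toward 0 makes the language easier to predict.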
53,268
Why would I ever use a linear autoencoder for dimensionality reduction?
Using a linear autoencoder instead of PCA could also be useful in a large-scale learning scenario. Since you can use Stochastic Gradient Descent (SGD) to train the AE, there is no need to load all the training samples into main memory at once, which can be problematic with large-scale problems. The linear AE may also come in handy in online-learning scenarios, where the training examples arrive over time, as this could be easily handled with SGD (a toy sketch follows below). Another option would be using an incremental version of PCA (e.g., that of Scikit-learn): http://scikit-learn.org/stable/auto_examples/decomposition/plot_incremental_pca.html
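To illustrate, here is a toy linear autoencoder trained sample-by-sample with SGD in base R (a minimal sketch under simplifying assumptions of my own: tied weights W, squared reconstruction error, fixed learning rate; the factor 2 in the gradient is absorbed into the learning rate):
set.seed(1)
n <- 500; d <- 10; k <- 2
X <- matrix(rnorm(n * d), n, d) %*% matrix(rnorm(d * d), d, d)  # correlated data
W <- matrix(rnorm(d * k, sd = 0.01), d, k)
lr <- 1e-4
for (epoch in 1:20) {
  for (i in sample(n)) {                  # one sample at a time: no need to hold X in memory
    x <- matrix(X[i, ], ncol = 1)
    z <- crossprod(W, x)                  # encode: t(W) %*% x
    e <- W %*% z - x                      # reconstruction error
    grad <- e %*% t(z) + x %*% crossprod(e, W)  # d/dW ||W t(W) x - x||^2, up to the factor 2
    W <- W - lr * grad
  }
}
# after training, the columns of W approximately span the top-k principal subspace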
53,269
Deep Neural Network weight initialization [duplicate]
As far as I know, the two formulas you gave are pretty much the standard initialization. I did a literature review a while ago; please see my linked answer.
53,270
Deep Neural Network weight initialization [duplicate]
Recently, Batch Normalization was introduced for this sole purpose. Please find the paper here (Ioffe and Szegedy, 2015, "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift").
53,271
Deep Neural Network weight initialization [duplicate]
The paper 'All you need is a good init' (Mishkin and Matas, 2015) is a good, relatively recent article about initialization in deep learning. What I liked about it: it has a short and effective literature survey on init methods, references included, and it achieves very good results on CIFAR-10 without too many bells and whistles.
53,272
Deep Neural Network weight initialization [duplicate]
Weight initialization depends on the activation function being used. Glorot and Bengio (2010) derived a method for initializing weights based on the assumption that the activations are linear. Their method resulted in the formula: \begin{align} W \sim U \left[ -\frac{\sqrt 6}{\sqrt {n_{i} + n_{i+1}}}, \frac{\sqrt 6}{\sqrt {n_{i} + n_{i+1}}} \right] \end{align} for weights initialized using a uniform distribution, where $n_{i}$ represents the $\text{fan in}$ and $n_{i+1}$ the $\text{fan out}$. He, Kaiming, et al. (2015) used a derivation that accounts for ReLU activations and obtained the weight initialization formula: \begin{align} W_l \sim \mathcal N \left({\Large 0}, \sqrt{\frac{2}{n_l}} \right) \end{align} for weights initialized using a Gaussian distribution whose standard deviation (std) is $\sqrt{\frac{2}{n_l}}$ (both schemes are sketched in code below). Read a more comprehensive series of articles covering the mathematics behind weight initialization here.
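A direct R translation of the two schemes (the function and argument names are mine, chosen for illustration):
xavier_uniform <- function(n_in, n_out) {
  limit <- sqrt(6 / (n_in + n_out))                              # Glorot & Bengio (2010)
  matrix(runif(n_in * n_out, -limit, limit), n_in, n_out)
}
he_normal <- function(n_in, n_out) {
  matrix(rnorm(n_in * n_out, sd = sqrt(2 / n_in)), n_in, n_out)  # He et al. (2015)
}
W1 <- xavier_uniform(784, 256)
sd(W1)   # empirical spread matches the intended scale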
53,273
How do I compute/estimate the variance of sequential data? [duplicate]
The proper answer to your question is named online sample variance, and in general online statistics. It's named online because you update the current value of the sample statistic and don't look back at past data. In order to find an online algorithm to handle that, what you need is to break the definition of the sample variance into two kinds of terms: the first kind uses previous information and the second kind uses the new value. For simplicity, let's look at the sample mean: $m_n = \frac{1}{n} \sum_{i=1}^{n} x_i$, where $n$ is the number of elements and $x_i$ is the element from the stream at the $i$-th position. Let's do some tricks: $$m_n = \frac{1}{n}\sum_{i=1}^{n} x_i = \frac{n-1}{n(n-1)}\left(\sum_{i=1}^{n-1} x_i + x_n\right) = \frac{n-1}{n}\left(\frac{1}{n-1}\sum_{i=1}^{n-1}x_i\right) + \frac{1}{n}x_n$$ Now you recognize in the last expression in parentheses the term $m_{n-1}$, so we have the following recursive relation: $$m_n = \frac{n-1}{n}m_{n-1} + \frac{1}{n}x_n$$ which states that we only have to know the number of elements and the mean from the current step to compute the next mean using the next element. Similar recursive equations can be developed for the minimum, maximum, standard deviation, variance, skewness and kurtosis. For a full explanation look at the beautiful article by John D. Cook (a Welford-style sketch follows below). I implemented such an online statistics tool in Java myself, which you can find here
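Here is a compact R version of the Welford-style updates described in that article (a sketch; the variable names are mine):
online_stats <- function(xs) {
  n <- 0; m <- 0; M2 <- 0
  for (x in xs) {
    n <- n + 1
    delta <- x - m
    m <- m + delta / n          # online mean update, as derived above
    M2 <- M2 + delta * (x - m)  # running sum of squared deviations
  }
  list(mean = m, var = M2 / (n - 1))   # sample variance
}
s <- online_stats(rnorm(1e4, mean = 3, sd = 2))
s$mean; s$var   # close to 3 and 4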
53,274
How do I compute/estimate the variance of sequential data? [duplicate]
It seems, after some looking around, that the algorithms given in this 1983 technical report by Chan, Golub, and LeVeque ("Algorithms for Computing the Sample Variance: Analysis and Recommendations") are still the state of the art.
53,275
How do I compute/estimate the variance of sequential data? [duplicate]
My intuition, which might not be correct: say you divide the sequence into 2 groups having the same number of elements, and calculate their means (mean1 and mean2) and variances (variance1, variance2). For calculating variance2 you can use the combined mean of both sequences, i.e. (mean1 + mean2)/2. Now, based on this mean (mean3), you can correct the variance of the first sequence; let's call it variance3. The combined variance will then be (variance3 + variance2)/2. (For the exact pairwise-combination formula, which adds a correction term $\delta^2 n_a n_b/(n_a+n_b)$, with $\delta$ the difference of the two group means, to the pooled sums of squared deviations, see the Chan, Golub, and LeVeque report cited in another answer; a code sketch follows below.)
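For reference, here is that exact pairwise combination in R (a sketch of the Chan, Golub, and LeVeque update; M2 denotes the sum of squared deviations from the mean):
combine <- function(n1, m1, M2a, n2, m2, M2b) {
  delta <- m2 - m1
  n <- n1 + n2
  list(n = n,
       mean = m1 + delta * n2 / n,
       M2 = M2a + M2b + delta^2 * n1 * n2 / n)   # exact, no averaging heuristic
}
x <- rnorm(101); a <- x[1:50]; b <- x[51:101]
s <- combine(50, mean(a), sum((a - mean(a))^2), 51, mean(b), sum((b - mean(b))^2))
c(s$mean - mean(x), s$M2 / (s$n - 1) - var(x))   # both essentially 0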
53,276
Repeated CrossValidation, finalModel and ROC curves
For all caret models, the final model is trained on the full dataset. caret::train uses the cross-validation scheme you chose to select model parameters (e.g. mtry for a random forest) and estimate out-of-sample performance of the model. Once the cross-validation is done, caret retrains the model on the full dataset, using the parameters it selected during cross-validation. So roc.1 is an in-sample ROC curve. The model does not average the trained models' coefficients; it re-fits the model on the full dataset. It is NOT correct to evaluate the final model on the training data, but it is correct to use it on a different dataset.
53,277
Repeated CrossValidation, finalModel and ROC curves
So finally, to summarize:

ctrl = trainControl(method="repeatedcv", number=10, repeats=300,
                    savePredictions=TRUE, classProbs=TRUE)
mdl = train(Label ~ ., data=Data, method="glm", trControl=ctrl)
pred = predict(mdl, newdata=Data, type="prob")
roc.1 = roc(Data$Label, pred$control)
roc.2 = roc(mdl$pred$obs, mdl$pred$control)
roc.3 = roc(as.numeric(mdl$trainingData$.outcome=='case'),
            aggregate(case~rowIndex, mdl$pred, mean)[,'case'])

(Note that the model formula should be unquoted: train(Label ~ ., ...), not train("Label~.", ...).) roc.1 is irrelevant, as it evaluates a model on the same data used to train it (the finalModel is just the fit on Data ignoring the CV argument, built to apply to a different dataset for future prediction). roc.2 is 'almost' accurate, as it treats each held-out prediction independently (every repeat's prediction enters the curve as a separate observation). roc.3 is the correct way to do it, as it averages the predicted probabilities for each sample across the repeated CV runs before computing the curve (contrary to roc.2, where the individual predictions are never averaged).
53,278
Gradient descent: compute partial derivative of arbitrary cost function by hand or through software?
There are several options available to you:

1. Try to compute the derivatives by hand and then implement them in code.
2. Use a symbolic computation package like Maple, Mathematica, Wolfram Alpha, etc. to find the derivatives. Some of these packages will translate the resulting formulas directly into code.
3. Use an automatic differentiation tool that takes a program for computing the cost function and (using compiler-like techniques) produces a program that computes the derivatives as well as the cost function.
4. Use finite difference formulas to approximate the derivatives (a small finite-difference check is sketched below).

For anything other than the simplest problems (like ordinary least squares), option 1 is a poor choice. Most experts on optimization will tell you that it is very common for users of optimization software to supply incorrect derivative formulas to optimization routines. This typically leads to slow convergence or no convergence at all. Option 2 is a good one for most relatively simple cost functions. It doesn't require really exotic tools. Option 3 really shines when the cost function is the result of a fairly complicated function for which you have the source code. However, AD tools are specialized and not many users of optimization software are familiar with them. Option 4 is sometimes a necessary choice. If you have a "black box" function that you can't get source code for (or that is so badly written that AD tools can't handle it), finite difference approximations can save the day. However, using finite difference approximations has a significant cost in run time and in the accuracy of the derivatives and ultimately the solutions obtained. For most machine learning applications, options 1 and 2 are perfectly adequate. The loss functions (least squares, logistic regression, etc.) and penalties (one-norm, two-norm, elastic net, etc.) are simple enough that the derivatives are easy to find. Options 3 and 4 come into play more often in engineering optimization where the objective functions are more complicated.
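As a concrete illustration of option 4 (and of checking a hand-derived gradient, a common safeguard against the mistakes mentioned under option 1), here is a central-difference gradient check in R for a least-squares cost (a self-contained toy example):
cost <- function(beta, X, y) sum((X %*% beta - y)^2) / (2 * nrow(X))
grad_analytic <- function(beta, X, y) t(X) %*% (X %*% beta - y) / nrow(X)
grad_numeric <- function(f, beta, h = 1e-6) {
  sapply(seq_along(beta), function(j) {
    e <- rep(0, length(beta)); e[j] <- h
    (f(beta + e) - f(beta - e)) / (2 * h)   # central difference
  })
}
set.seed(2)
X <- matrix(rnorm(200), 50, 4); y <- rnorm(50); b <- rnorm(4)
max(abs(grad_numeric(function(b) cost(b, X, y), b) - grad_analytic(b, X, y)))
# a tiny number (around 1e-9) indicates the analytic gradient is coded correctly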
53,279
Gradient descent: compute partial derivative of arbitrary cost function by hand or through software?
You can if there's a nice analytic solution. Otherwise, use numerical techniques, or libraries like TensorFlow or Theano that compute the gradients for you via automatic differentiation.
53,280
Find state space model to compare with Box-Jenkins ARIMA model
State-space models are very flexible; indeed they can encompass ARIMA models. One class of state space models that has some overlap with ARIMA models but also has a large subset of models that don't overlap with them is the Basic Structural Model (BSM). See Harvey (1989)[1]. There are also numerous papers by Harvey (usually with other authors) relating to the BSM and at least a couple of other books. Structural models are also sometimes called unobserved components models (UCM). For example, the 1990 paper by Harvey and Peters ("Estimation Procedures for Structural Time Series Models," J. Forecasting) is not hard to locate and has some useful details that are also in the book reference I give.

Here's an outline of the Basic Structural Model:

$$y_t = \mu_t + \gamma_t +\epsilon_t,\qquad t=1,...,T$$

where $\mu_t$ is the trend component, $\gamma_t$ is a seasonal component and $\epsilon_t$ an irregular component (or noise). The model for $\mu_t$ is:

\begin{eqnarray} \mu_t&=&\mu_{t-1}+\beta_{t-1}+\eta_t\\ \beta_t&=&\beta_{t-1}+\zeta_t \end{eqnarray}

with $\eta_t$ and $\zeta_t$ independent of each other and across time; they have mean zero and each has its own variance. The trend component $\mu$ is "locally linear"; $\beta_t$ is the local slope.

There are several ways to write a seasonal component. The "seasonal dummy" formulation, for seasonal period $s$, is:

$$\gamma_t=-\sum_{j=1}^{s-1}\gamma_{t-j}+\omega_t$$

where $\omega_t$ is another independently distributed disturbance term with its own variance. [There's also a different seasonal model that can be used, based on sine and cosine components.]

The parameters $\mu_t,\beta_t,\gamma_t$ form the state. The first equation is the observation equation and the remaining equations (put together) define the state equation. See also some of the other references here.

The BSM is readily extended in any number of ways, or can be made more specific by omitting unneeded components (e.g. leaving out the seasonal component if there's no seasonality), and has the nice property that its state components have human-understandable interpretations. A pure random walk with noise model would set the $\beta$'s and $\gamma$'s to zero:

\begin{eqnarray} y_t &=& \mu_t + \epsilon_t,\qquad t=1,...,T\\ \mu_t&=&\mu_{t-1}+\eta_t \end{eqnarray}

(and a straight-out pure random walk would also set $\epsilon$ to 0).

Another paper you might find relevant is Harvey and Todd (1983) "Forecasting Economic Time Series with Structural and Box-Jenkins Models", J. Business & Economic Statistics, 1:4, since it seems to be closely related to what you are trying to do - compare state space models with ARIMA.

Many stats packages offer BSM models or something very similar; there's UCM in SAS, there's the StructTS function in R, and so on -- so you don't really have to do much to even set up the state space model (not that it's onerous).

[1]: Andrew C. Harvey (1989) Forecasting, Structural Time Series Models and the Kalman Filter, Cambridge University Press
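As a concrete sketch of such a comparison (using the quarterly UKgas series that ships with R purely for illustration; the seasonal ARIMA order below is an arbitrary airline-style choice, not a tuned model):

# Fit a BSM and an ARIMA model to the same training window,
# then compare forecasts over a held-out period
y <- log(UKgas)                          # quarterly, 1960 Q1 - 1986 Q4
train <- window(y, end = c(1982, 4))
test  <- window(y, start = c(1983, 1))

bsm <- StructTS(train, type = "BSM")     # level + slope + seasonal + noise
arima_fit <- arima(train, order = c(0, 1, 1),
                   seasonal = list(order = c(0, 1, 1), period = 4))

h <- length(test)
f_bsm   <- predict(bsm, n.ahead = h)$pred
f_arima <- predict(arima_fit, n.ahead = h)$pred

sqrt(mean((test - f_bsm)^2))             # out-of-sample RMSE, BSM
sqrt(mean((test - f_arima)^2))           # out-of-sample RMSE, ARIMA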
53,281
Use ACF and PACF for irregular time series?
The latter approach is preferred since the time difference must be invariant/constant for an ACF/PACF to be useful for model identification purposes. Intervention Detection can be iteratively used to estimate the missing values while accounting for the auto-correlative structure. One can invert the time series--i.e., go from latest to earliest to estimate missing values--and then reverse the process (normal view) to tune the missing value estimates.
53,282
Use ACF and PACF for irregular time series?
Yes, you should definitely use the second approach: if you do the first, you are treating distant observations as if they were close. If auto-correlation is decreasing with the lag (as is usually the case) then this would lead to an under-estimation of the ACF values: indeed, using say lag 5 (low correlation) for estimating lag 1 (higher correlation) biases your results. See the plot below for this result. Also, there is no need to fill in the NAs manually, as acf() calls as.ts(), and as.ts() on a zoo object returns a vector with the NAs already in place.

library(zoo)
#> Attaching package: 'zoo'
#> The following objects are masked from 'package:base':
#>     as.Date, as.Date.numeric

N <- 5000
x <- arima.sim(model = list(ma = c(0.2, 0.9)), n = N)
set.seed(123)
index_x <- sort(sample(1:N, size = N/5, replace = FALSE))
x_miss <- x[index_x]
x_miss_zoo <- zoo(x_miss, order.by = index_x)

## coredata
ac_1 <- acf(coredata(x_miss_zoo), lag.max = 24, plot = FALSE)
ac_2 <- acf(x_miss_zoo, na.action = na.pass, lag.max = 24, plot = FALSE)

library(tidyverse)
data_frame(lag = 0:24,
           acf_coredata = ac_1$acf[, 1, ],
           acf_na_pass = ac_2$acf[, 1, ]) %>%
  gather(method, value, -lag) %>%
  mutate(lag = ifelse(method == "acf_coredata", lag, lag + 0.5)) %>% # kind of a hack...
  ggplot(aes(x = lag, y = value, colour = method)) +
  geom_segment(aes(xend = lag, yend = 0)) +
  ggtitle("ACF with the 2 methods, true should be l1 = 0.2, l2 = 0.9")

Created on 2018-11-16 by the reprex package (v0.2.1)
53,283
Why can't my (auto.)arima-model forecast my time series?
Both of your examples concern deterministic time series, with no noise and no trend. A deterministic time series with no trend is not really the kind of data that ARIMA was designed for (see this question to learn more on ARIMA assumptions).

Actually, to forecast the future given such data, what you could do is simply take averages at the different positions within a repeating window and then repeat them in the same order. The problem here is to determine the number of lags to use, that is, to find the length of the window that repeats itself. This can be done using the sum of squared errors or some other error measure. If you define the average $k$'th value within a window of length $K$ as

$$ \overline x_{(k)} = \frac{1}{N/K} \sum_{i=0}^{(N/K)-1} x_{k+i \times K} $$

and the sum of squared errors as

$$ \mathrm{SSE} = \sum_{k=1}^K \sum_{i=0}^{(N/K)-1} \left( x_{k+i \times K} - \overline x_{(k)} \right)^2 $$

then you can use the SSE to choose the best window size. Below you can find an example in R.

tsPattern <- function(x, kmax = ceiling(length(x)/2)) {
  stopifnot(is.numeric(x))
  kmax <- min(round(length(x)/2), kmax)
  # fitted values obtained by repeating the within-window averages
  predPattern <- function(x, k) {
    n <- length(x)
    pattern <- rep(1:k, times = ceiling(n/k), length.out = n)
    pred <- rep(tapply(x, pattern, mean), times = ceiling(n/k), length.out = n)
    as.numeric(pred)
  }
  out <- NULL
  for (k in 1:kmax) {
    xhat <- predPattern(x, k)
    out[k] <- sum((x - xhat)^2)
  }
  list(fitted = predPattern(x, which.min(out)), sumsq = which.min(out))
}

ts <- c(1,1,1,1,1,0,0, 1,1,1,1,1,0,0, 1,1,1,1,1,0,0, 1,1,1,1,1,0,0,
        1,1,1,1,1,0,0, 1,1,1,1,1,0,0, 1,1,1,1,1,0,0, 1,1,1,1,1,0,0,
        1,1,1,1,1)
ts2 <- rep(c(1,1,2,3,4,4,5,5,6,5,5,4,3,3,2), 10)
ts2 <- ts2 * round(runif(length(ts2), 0.95, 1.0), digits = 2)

tsPattern(ts, 10)
tsPattern(ts2, 20)

Below you can see the results plotted (red points are estimated values, lines are actual data). This primitive approach works very well for your examples of deterministic time series with no trend and no noise; it would fail, however, with real-life data, i.e. exactly in the cases that ARIMA (and other, similar time-series methods) was designed for. Below you can see results of using this method and auto.arima for real-life data (the WWWusage dataset in the forecast library).
53,284
Why can't my (auto.)arima-model forecast my time series?
A bit late, but you can specify your frequency and tell arima that you have seasonality:

library(forecast)
ts <- c(1,1,1,1,1,0,0, 1,1,1,1,1,0,0, 1,1,1,1,1,0,0, 1,1,1,1,1,0,0,
        1,1,1,1,1,0,0, 1,1,1,1,1,0,0, 1,1,1,1,1,0,0, 1,1,1,1,1,0,0,
        1,1,1,1,1)
ts <- ts(ts, frequency = 7)
fit <- auto.arima(ts, D = 1)
plot(forecast(fit, h = 20))

This works as desired and ARIMA finds the structure.
53,285
Equivalence of the dt and pt function in R
I believe this calculates the probability of P(X>x) occurring (since P(X=x)=0 in a continuous distribution). I would expect pt and 1−dt to be the same.

Your belief (and consequently the expectation you hold) is wrong. The $d$ in dt refers to density. You're right to think density is not probability. The d... functions in R (when applied to continuous distributions like the $t$) don't return a probability; they return the height of the density function at the value of their first argument. https://en.wikipedia.org/wiki/Probability_density_function

If you look at the Wikipedia page on the t-distribution, the top diagram on the right shows the density -- the thing returned by dt for several different degrees of freedom. The diagram below it shows the distribution function (cdf), which is the thing returned by pt (by default at least). If you want the upper tail, you can get that by changing the arguments to pt. This is explained in the help on the t-functions: ?dt

pt(x, df) returns the area under the density to the left of x.
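A quick sketch at the console makes the distinction concrete (the values of x and df here are arbitrary):

x <- 1.5; df <- 10
dt(x, df)                       # density height at x -- not a probability
pt(x, df)                       # P(T <= x), the lower-tail probability
1 - pt(x, df)                   # P(T > x), the upper tail
pt(x, df, lower.tail = FALSE)   # the same upper tail, computed directly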
53,286
Strictly positive random variables
A truncated normal distribution might fit the bill. (Or a better statistics book.) The truncated normal is obtained, in your situation, by discarding whatever is below zero. The pdf, cdf and the moments are fully described in the linked Wikipedia article.
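A minimal sketch of that truncation-by-rejection idea in base R (the mean and sd here are arbitrary illustrative values):

mu <- 5; sigma <- 1
x <- rnorm(1e5, mu, sigma)
x <- x[x > 0]            # discard whatever fell below zero
mean(x); sd(x)           # barely changed when P(X < 0) is tiny
min(x)                   # strictly positive by construction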
53,287
Strictly positive random variables
Depending on how "normal-like" you want your variable, you might consider a log-normal distribution, in which the logarithm of the variable has a normal distribution. The variable itself is thus always positive, and for the type of distribution you specify (large mean, small variance) the variable might look close to normal itself. For measurements on items that are necessarily non-negative, a log-normal distribution can be more appropriate than a normal distribution if the measurement error is proportional to the value measured. If you try to model such measurements with a normal distribution, you can get into trouble because the variance isn't constant over the range of measurements. On the log scale, the variance would tend to be independent of the measured values.
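A small sketch (with arbitrary illustrative parameters) of how such a log-normal can look almost normal while staying strictly positive:

meanlog <- log(100); sdlog <- 0.05           # large mean, small relative spread
x <- rlnorm(1e5, meanlog, sdlog)
c(mean = mean(x), sd = sd(x), min = min(x))  # min stays above zero
hist(x, breaks = 100)                        # roughly symmetric, bell-shaped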
53,288
What is the expected value of $\frac{X}{X+Y}$?
If $(X,Y)$ is binormal, then so is $(X,Z) = (X,X+Y)$. The ratio $X/Z$ is the slope of the line through the origin and the point $(Z,X)$.

When $X$ and $Z$ are uncorrelated with zero means, it is well known (and easy to compute) that $X/Z$ has a Cauchy distribution. Cauchy distributions have no expectations. This should lead us to suspect $X/Z$ might not have a mean in general, either. Let's see whether it does or not.

For any angle $0 \lt \theta \lt \pi/2$, consider the event

$$E_\theta = \{(Z,X)\,|\, X \ge Z\cot(\theta)\}.$$

This is of interest because its probability is the chance that $X/Z$ exceeds $\cot(\theta)$: the survival function of $X/Z$. It carries all the information of the distribution function of $X/Z$. $E_\theta$ is a (closed) cone in the plane consisting of all points on all lines making an angle of $\theta$ or less to the right of the vertical ($X$) axis.

Let's underestimate the probability of $E_\theta$. To do so, we will work in polar coordinates. Consider any possible radius $\rho$. Among all points of this radius within the set $E_\theta$, the density $f$ of $(Z,X)$ will achieve a minimum value $f_\theta(\rho)$. This minimum must be nonzero provided the density does not degenerate. (More about this possibility later.) Use this to bound the probability:

$$\eqalign{ \Pr(E_\theta) &= \int_{\pi/2-\theta}^{\pi/2}\int_0^\infty f(\phi,\rho)\, \rho\, d\rho\, d\phi \\ &\ge \int_{\pi/2-\theta}^{\pi/2}\int_0^\infty \rho\, f_\theta(\rho)\, d\rho\, d\phi \\ &=\theta \int_0^\infty \rho\, f_\theta(\rho)\, d\rho \\ &= C(\theta)\, \theta }$$

where I have written $C(\theta)$ for the integral, which is some positive number depending on $\theta$. Moreover, for $0\lt\theta\lt\pi/2$, $C(\theta)$ has a nonzero lower bound $C \gt 0$.

By definition, the expectation of $X/Z$ is the sum of two parts: one integral for the positive part when $X/Z \ge 0$ and another for the negative part when $X/Z \lt 0$. Let's tackle the positive part. For any positive random variable $W$ with distribution function $F$, integration by parts shows its expectation equals the integral of its survival function $1-F$, since

$$\mathbb{E}(W) = \int_0^\infty w\, dF(w) = \left[-w(1-F(w))\right]_0^\infty + \int_0^\infty (1-F(w))\, dw = \int_0^\infty (1-F(w))\, dw.$$

Applying this to $W = X/Z$ and substituting $w=\cot(\phi)$ gives for the positive part of the integral:

$$\eqalign{ \int_0^\infty (1 - F(w))\, dw &= \int_0^{\pi/2} (1 - F(\cot(\phi))) \csc^2(\phi)\, d\phi \\ &= \int_0^{\pi/2} \Pr(E_\phi) \csc^2(\phi)\, d\phi \\ &\ge C \int_0^\theta \phi \csc^2(\phi)\, d\phi \\ &\gt C \int_0^\theta \frac{d\phi}{\phi}. }$$

(The final inequality is a simple consequence of the well-known inequalities $0 \lt \sin(\phi) \lt \phi$ for $0 \lt \phi \lt \pi$, which upon taking the $-2$ power give $\csc^2(\phi) \gt 1/\phi^2$.)

For any $\theta \gt 0$, the last term is a divergent integral, because for $0\lt \epsilon$,

$$\int_0^\theta \frac{d\phi}{\phi} \gt \int_\epsilon^\theta \frac{d\phi}{\phi} = \log(\theta) - \log(\epsilon) \to \infty$$

as $\epsilon \to 0^{+}$. Consequently, the positive part of the expectation does not exist. It is immediate that the expectation of $X/Z$ does not exist, either.

We left behind one exception to consider: when $(Z,X)$ is supported on a line passing through the origin, this argument breaks down (because then the density can equal zero--and in fact is zero for almost all $\theta$). In this degenerate case, $X/Z$ reduces to a constant--equal to the slope of that line--and obviously that constant is its expectation. This is the only situation in which $X/Z$ has an expectation.
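A simulation sketch in R (independent standard normals as an illustrative special case) shows the symptom: the running mean of $X/(X+Y)$ never settles down, which is exactly what nonexistence of the expectation looks like empirically:

set.seed(1)
n <- 1e6
x <- rnorm(n); y <- rnorm(n)
r <- x / (x + y)
running_mean <- cumsum(r) / seq_len(n)
plot(running_mean, type = "l", log = "x",
     xlab = "n", ylab = "running mean of X/(X+Y)")  # wanders, no convergence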
53,289
What is the expected value of $\frac{X}{X+Y}$?
This is a follow-up to whuber's answer, and posted as a separate answer because it is too long for a comment.

Lest people think that it is the bivariate normality of $X$ and $Y$ that is causing the problem, it is worth emphasizing that if $W$ is a continuous random variable whose density is nonzero on an open interval containing the origin, then $E\left[\frac 1W\right]$ does not exist. Since $\frac 1w$ diverges to $\pm\infty$ as $w$ approaches $0$, the integral for $E\left[\frac 1W\right]$, which is of the form

$$E\left[\frac 1W\right]=\int_{-\infty}^0 \frac 1w f_W(w)\,\mathrm dw + \int_0^{\infty} \frac 1w f_W(w)\,\mathrm dw\tag{1}$$

is undefined because both integrals on the right side of $(1)$ diverge and the right side of $(1)$ is of the form $\infty-\infty$ (which is undefined).
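The same running-mean diagnostic applies here (a minimal sketch with $W$ standard normal, so its density is nonzero around the origin):

set.seed(42)
w <- rnorm(1e6)
plot(cumsum(1/w) / seq_along(w), type = "l", log = "x",
     xlab = "n", ylab = "running mean of 1/W")  # occasional huge jumps, no limit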
53,290
Estimates of the variance of the variance component of a mixed effects model
Here is the analysis with R-package VCA V1.2:

> library(VCA)
> data(sleepstudy)
> fit <- anovaMM(Reaction~Days*(Subject), sleepstudy)
> inf <- VCAinference(fit, VarVC=TRUE)
> print(inf, what="VCA")

Inference from Mixed Model Fit
------------------------------

> VCA Result:
-------------

[Fixed Effects]
     int     Days 
251.4051  10.4673 

[Variance Components]

  Name         DF     SS          MS         VC        %Total  SD      CV[%]  Var(VC)
1 total        11.21                         1388.5416 100     37.2631 12.4831
2 Subject      17     250618.1083 14742.2417 698.5289  50.3067 26.4297 8.8539 94751.0064
3 Days:Subject 17     60322.0013  3548.353   35.0717   2.5258  5.9221  1.9839 204.4845
4 error        144    94311.5079  654.941    654.941   47.1675 25.5918 8.5732 5914.196

Mean: 298.5079 (N = 180)

Experimental Design: unbalanced

Fixed effects are equal, and the variance components of the ANOVA Type-1 estimators are also equal to the REML estimates, except for Subject, which is a bit larger (conservatively estimated). Column "Var(VC)" contains the variances of the variance components according to Giesbrecht and Burns (1985). The complete covariance matrix of the variance components can also be extracted:

> vcovVC(fit)
                 Subject Days:Subject       error
Subject      94751.006     -128.55799 -1523.85985
Days:Subject  -128.558      204.48451   -47.53872
error        -1523.860      -47.53872  5914.19600
attr(,"method")
[1] "gb"
53,291
Estimates of the variance of the variance component of a mixed effects model
In package VCA V1.3 it is possible to use REML-estimation of linear mixed models besides ANOVA-type estimation.

> library(VCA)
> data(sleepstudy)
> fit <- remlMM(Reaction~Days+(Subject)+Days:(Subject), sleepstudy, cov=TRUE)
> fit

REML-Estimation of Mixed Model:
-------------------------------

[Fixed Effects]
      int      Days 
251.40510  10.46729 

[Variance Components]

  Name         DF         VC          %Total    SD        CV[%]     Var(VC)
1 total        41.025787  1302.10245  100       36.084657 12.088343 82653.906666
2 Subject      9.357189   612.089747  47.007802 24.740448 8.288038  80078.294606
3 Days:Subject 11.714078  35.071663   2.693464  5.922133  1.983912  210.007398
4 error        145.181043 654.941041  50.298733 25.591816 8.573246  5909.142918

Mean: 298.5079 (N = 180)

Experimental Design: unbalanced  |  Method: REML

You find the variance of the variance components in column "Var(VC)". The VCA-package uses the lme4-package for REML-estimation, so the fitted model is identical to one using lmer(). Here, the variance of variance components is approximated via the method given in Giesbrecht & Burns (1985).

> vcovVC(fit)
                 Subject Days:Subject       error
Subject      80078.29461    -62.72396 -1657.13070
Days:Subject   -62.72396    210.00740   -51.91447
error        -1657.13070    -51.91447  5909.14292
attr(,"method")
[1] "gb"
53,292
Estimates of the variance of the variance component of a mixed effects model
(Leaving my previous answer to the wrong question intact for posterity; hopefully this time I'm answering the question actually being asked...)

A question about the variance of the variance estimates was recently posted on R-SIG-MIXED-MODELS. Ben Bolker, one of the lme4 authors, has already worked out how to do this for ML estimates; for REML the problem is apparently a bit harder due to the internal parameterization (links below). The full answer is a bit long, but the basic idea is to use confidence intervals, as I suggested in my comment. Modern lme4 provides only profile and bootstrap confidence intervals for the random-effect components, which aren't as straightforwardly related to the variance/standard error of those estimates as Wald confidence intervals are, but perhaps provide the better measure of the estimate's variability.

If you do want to go the Wald confidence-interval route, from which you can rapidly compute the standard error and hence the variance of those estimates, then check out Ben Bolker's longer explication (with code). There is also an older version that is not completely identical in methodology and focus (much in the same way that nlme differs from lme4) that might be worth taking a look at.
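For the profile and bootstrap intervals mentioned above, a minimal sketch (reusing the sleepstudy model that appears elsewhere on this page):

library(lme4)
m <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
confint(m, method = "profile")            # .sig01 etc. are the random-effect SDs
confint(m, method = "boot", nsim = 200)   # parametric bootstrap alternative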
53,293
Estimates of the variance of the variance component of a mixed effects model
If you are willing to fit the mixed model using ANOVA Type-1 estimation, you can use the R-package VCA, which has two approaches to estimating the variance of variance components implemented: one following Searle et al. (1992), "Variance Components", and alternatively an approximation from Giesbrecht and Burns (1985), "Two-Stage Analysis Based on a Mixed Model: Large-Sample Asymptotic Theory and Small-Sample Simulation Results", Biometrics 41, p. 477-486.
53,294
Estimates of the variance of the variance component of a mixed effects model
To find standard errors of random effects for lmer(), use library(merDeriv); sqrt(diag(vcov(lmer(), full = TRUE))). Another option mentioned at https://stackoverflow.com/questions/31694812 is library(arm); se.ranef(lmer()). If you use nlme::lme() instead, see the answer in https://stackoverflow.com/a/76025033/20653759 for standard errors of variances of random effects using the Fisher information matrix from the package lmeInfo.

According to the comment Estimates of the variance of the variance component of a mixed effects model, your question seems to be instead whether the variability differs between groups of states. Then reporting standard errors of the random effects' standard deviations or variances may not help. Instead, consider a likelihood ratio test between models estimated by REML. The state effects on variability can be captured by either (1) the residual structure or (2) the random effects of another grouping level UNDER each state. This should be done in nlme::lme(), as lmer() does not allow such specifications.

If the initial model is lme(sbp ~ age * sex, random = ~ 1 | state), following approach (1) leads to lme(sbp ~ age * sex, random = ~ 1 | state, weights = varIdent(form = ~ 1 | state)) so that the residual standard error (sigma) is allowed to differ by a ratio to a reference state's. Then compare these two models using anova() to test H0: the error variance is the same among states.

Approach (2) requires an additional level of hierarchy, such as repeated measurements on the same patients, leading to lme(sbp ~ age * sex, random = list(patient = pdDiag(~ 0 + state))) or simply lme(sbp ~ age * sex, random = ~ 0 + state | patient), where the standard deviation of random intercepts by patient is allowed to vary by state. Although "0 +" in the formula appears to omit intercepts, random intercepts are fully contained within each state factor level. Comparing it by anova() with a restrictive model lme(sbp ~ age * sex, random = list(patient = pdIdent(~ 1))), or simply lme(sbp ~ age * sex, random = ~ 1 | patient), where the standard deviation of random intercepts by patient is homogeneous among states, tests H0: the random effect variance of patient-specific intercepts is the same among states.

Note that Approaches (1) and (2) address different questions. It appears that the clarification in Alexis's comment points to Approach (1).
53,295
Estimates of the variance of the variance component of a mixed effects model
The lmer function in lme4 does provide estimates of the variance of the varying slopes/intercepts, both on the variance and the standard deviation scales.

> library(lme4)
Loading required package: Matrix
Loading required package: Rcpp
> m <- lmer(Reaction ~ Days + (Days|Subject), sleepstudy)
> m
Linear mixed model fit by REML ['lmerMod']
Formula: Reaction ~ Days + (Days | Subject)
   Data: sleepstudy
REML criterion at convergence: 1743.628
Random effects:
 Groups   Name        Std.Dev. Corr
 Subject  (Intercept) 24.740
          Days         5.922   0.07
 Residual             25.592
Number of obs: 180, groups:  Subject, 18
Fixed Effects:
(Intercept)         Days
     251.41        10.47
> summary(m)
Linear mixed model fit by REML ['lmerMod']
Formula: Reaction ~ Days + (Days | Subject)
   Data: sleepstudy

REML criterion at convergence: 1743.6

Scaled residuals:
    Min      1Q  Median      3Q     Max
-3.9536 -0.4634  0.0231  0.4634  5.1793

Random effects:
 Groups   Name        Variance Std.Dev. Corr
 Subject  (Intercept) 612.09   24.740
          Days         35.07    5.922   0.07
 Residual             654.94   25.592
Number of obs: 180, groups:  Subject, 18

Fixed effects:
            Estimate Std. Error t value
(Intercept)  251.405      6.825   36.84
Days          10.467      1.546    6.77

Correlation of Fixed Effects:
     (Intr)
Days -0.138

As part of the REML or ML calculations, the BLUPs (more generally the conditional modes) are also computed. You can extract them with ranef():

> ranef(m)
$Subject
    (Intercept)        Days
308   2.2585637   9.1989722
309 -40.3985802  -8.6197026
310 -38.9602496  -5.4488792
330  23.6905025  -4.8143320
331  22.2602062  -3.0698952
332   9.0395271  -0.2721709
333  16.8404333  -0.2236248
334  -7.2325803   1.0745763
335  -0.3336936 -10.7521594
337  34.8903534   8.6282835
349 -25.2101138   1.1734148
350 -13.0699598   6.6142055
351   4.5778364  -3.0152574
352  20.8635944   3.5360130
369   3.2754532   0.8722166
370 -25.6128737   4.8224653
371   0.8070401  -0.9881551
372  12.3145406   1.2840295
53,296
Adjustable sample size in clinical trial
Ideally that's the point of a Phase II trial. Results from these studies, often single-arm in design, are used for power calculations. Sometimes they experiment with dosing and eligibility criteria; the more moving parts in a Phase II study, the more of a gamble a Phase III study will be.

If a compound is showing promise, a Data Monitoring Committee (DMC) might recommend increasing enrollment, or decreasing it as appropriate. Sometimes it's about the risk of harm. If a study is underpowered because the effect is not as strong as it was hoped to be, the DMC may end the study, since the study subjects, by virtue of participating in the study, are exposing themselves to risk. Studies cannot go on perpetually as a matter of ethics.

Indeed there is a whole field of sequential adaptive trials that allows researchers to seamlessly transition from Phase II to Phase III studies. The statistical software package SeqTrial in S+ from Scott Emerson allows you to perform sample size calculations for a variety of alpha spending rules and effect sizes. The FDA, with its reliance on "traditional" statistics, is rather against this, as it can affect the integrity of findings. That is actually a good principle in this case, and Tom Fleming has argued the point in his paper "Discerning Hype From Substance." Basically, collating Phase II and Phase III study findings is rarely if ever appropriate, even when the protocols are similar (or identical) between II and III. This is because the Phase III study only happened because Phase II looked promising, so selection bias will affect the validity of the aggregated findings.
Adjustable sample size in clinical trial
Ideally that's the point of a Phase II trial. Results from these studies, often single-arm in design, are used for power calculations. Sometimes they experiment with dosing and eligibility criteria; t
Adjustable sample size in clinical trial
Ideally that's the point of a Phase II trial. Results from these studies, often single-arm in design, are used for power calculations. Sometimes they experiment with dosing and eligibility criteria; the more moving parts in a Phase II study, the more of a gamble a Phase III study will be. If a compound is showing promise, a Data Monitoring Committee might recommend increasing or decreasing enrollment as appropriate. Sometimes it's about the risk of harm: if a study is underpowered because the effect is not as strong as it was hoped to be, the DMC may end the study, since the study subjects, by virtue of participating, are exposing themselves to risk. Studies cannot go on perpetually, as a matter of ethics.

Indeed, there is a whole field of sequential adaptive trials that allows researchers to seamlessly transition from Phase II to Phase III studies. The statistical software package SeqTrial in S+ from Scott Emerson allows you to perform sample size calculations for a variety of alpha spending rules and effect sizes. The FDA, with its reliance on "traditional" statistics, is pretty firmly against pooling results across phases, as that can compromise the integrity of findings. That's actually a good principle in this case, and Tom Fleming has railed against the alternative in his paper "Discerning Hype From Substance." Basically, collating Phase II and Phase III study findings is rarely if ever appropriate, even when the protocols are similar (or even identical) between Phase II and Phase III. This is because the Phase III study only happened because the Phase II results looked promising, so selection bias will affect the validity of any aggregated findings.
Adjustable sample size in clinical trial Ideally that's the point of a Phase II trial. Results from these studies, often single-arm in design, are used for power calculations. Sometimes they experiment with dosing and eligibility criteria; t
53,297
Adjustable sample size in clinical trial
I think AdamO's answer is great, but I think it's also worth pointing out that this adaptive sample size design is how many (maybe even most? I've done theoretical work during internships at pharma companies, but can't say I've ever planned a real study...) clinical trials are run. That is to say, if a sequential design is used, initial patients are recruited and treated, and part way through the study the data collected so far are analyzed. Three possible actions can occur at this point:

1. the data may show a statistically significant result, and the study will be stopped because efficacy has been demonstrated;
2. the data may statistically significantly show that there is no strong effect (for example, the upper end of the confidence interval is below some clinically significant threshold), and the study is stopped due to futility; or
3. the data are not yet conclusive (i.e., both a clinically significant effect and a clinically insignificant effect are contained in the confidence interval), in which case more data will be collected.

So you can see that in this case, the sample size is not fixed. An important note about this: you can't just run a standard test each time you "check" your data; otherwise you are doing multiple comparisons! Because the test statistics at different times should be positively correlated, this is not as big an issue as in the standard multiple comparisons setting, but it still must be addressed for proper inference. Clinical trials, being regulated by the FDA, must state a plan for how they will address this (as @AdamO points out, SeqTrial provides software for this). However, oftentimes academic researchers, not being regulated by the FDA, will continue to collect data until they find significance, without adjusting for the fact that they are doing several comparisons. It's not the biggest abuse of statistical practice in research, but it still is an abuse.
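To see how much unadjusted "peeking" matters, here is a small R simulation under the null; the look schedule and sample sizes are made up purely for illustration:

# Under the null (zero true effect, known sd = 1), run an unadjusted
# two-sided z-test at each interim look and reject if any look crosses
# the nominal 0.05 threshold.
set.seed(2)
n_max <- 500
looks <- seq(100, n_max, by = 100)     # analyze after every 100 observations
nsim <- 2e4
reject <- replicate(nsim, {
  x <- rnorm(n_max)
  z <- cumsum(x)[looks] / sqrt(looks)  # z-statistic at each look
  any(abs(z) >= qnorm(0.975))          # nominal 0.05 test at every look
})
mean(reject)                           # roughly 0.14 with 5 looks, not 0.05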
Adjustable sample size in clinical trial
I think AdamO's answer is great, but I think it's also worth pointing out that this adaptive sample size design is how many (maybe even most? I've done theoretical work during internships at pharm
Adjustable sample size in clinical trial
I think AdamO's answer is great, but I think it's also worth pointing out that this adaptive sample size design is how many (maybe even most? I've done theoretical work during internships at pharma companies, but can't say I've ever planned a real study...) clinical trials are run. That is to say, if a sequential design is used, initial patients are recruited and treated, and part way through the study the data collected so far are analyzed. Three possible actions can occur at this point:

1. the data may show a statistically significant result, and the study will be stopped because efficacy has been demonstrated;
2. the data may statistically significantly show that there is no strong effect (for example, the upper end of the confidence interval is below some clinically significant threshold), and the study is stopped due to futility; or
3. the data are not yet conclusive (i.e., both a clinically significant effect and a clinically insignificant effect are contained in the confidence interval), in which case more data will be collected.

So you can see that in this case, the sample size is not fixed. An important note about this: you can't just run a standard test each time you "check" your data; otherwise you are doing multiple comparisons! Because the test statistics at different times should be positively correlated, this is not as big an issue as in the standard multiple comparisons setting, but it still must be addressed for proper inference. Clinical trials, being regulated by the FDA, must state a plan for how they will address this (as @AdamO points out, SeqTrial provides software for this). However, oftentimes academic researchers, not being regulated by the FDA, will continue to collect data until they find significance, without adjusting for the fact that they are doing several comparisons. It's not the biggest abuse of statistical practice in research, but it still is an abuse.
Adjustable sample size in clinical trial I think AdamO's answer is great, but I think it's also worth pointing out that this adaptive sample size design is how many (maybe even most? I've done theoretical work during internships at pharm
53,298
Covariance of Categorical variables
Use your crayons!

That's all you need to know. The rest of this answer elaborates on it, for those who have not read the link, and then it supplies a formal demonstration of the claim in that link: coloring rectangles in a scatterplot really does give the correct covariance in all cases.

The figure shows two indicator variables $X$ and $Y$: $(X,Y)=(1,0)$ with probability $p_1$, $(X,Y)=(0,1)$ with probability $p_2$, and otherwise $(X,Y)=(0,0)$. The probabilities are indicated by sets of points, where a proportion $p_1$ of them all are located close to $(1,0)$ (but spread about so you can see each of them), $p_2$ are located close to $(0,1)$, and the remaining fraction $1-p_1-p_2$ around $(0,0)$.

All possible rectangles that use some two of these points have been drawn. As explained in the linked post, rectangles are positive (and drawn in red) when the points are at the upper right and lower left, and otherwise they are negative (and drawn in cyan). It is always the case that many of the rectangles cannot be seen because their width, their height, or both are zero. In the present situation, many of the rest are extremely slender because of the slight spreads of the points: they really should be invisible, too. The ones that can be seen all use one point near $(1,0)$ and one point near $(0,1)$. That makes them all negative, explaining the overall cyan cast to the picture.

Solution

A fraction $p_1$ of all rectangles have a corner at $(1,0)$. Independently of that, the proportion of those with another corner at $(0,1)$ is $p_2$. When the locations are not spread out, all such rectangles have unit width $1-0$ and unit height $1-0$ and they are negative. Therefore the covariance is $$\text{Cov}(X,Y) = p_1 p_2 (1-0)(1-0)(-1) = -p_1p_2,$$ QED.

Mathematical Proof

The question asks for a proof. To get started, let's establish the notation. Suppose $X$ is a discrete random variable that takes on the values $x_1,x_2,\ldots,x_k$ with probabilities $p_1,p_2,\ldots,p_k$, respectively. Let $Y_i$ be the indicator of $x_i$; that is, $Y_i = 1$ when $X = x_i$ and otherwise $Y_i=0$.

Let $i\ne j$. The chance that $(Y_i,Y_j)=(1,0)$, which corresponds to $X=x_i$, is $p_i$; and the chance that $(Y_i,Y_j)=(0,1)$, which corresponds to $X=x_j$, is $p_j$. Since it is impossible for $(Y_i,Y_j)=(1,1)$, the chance that $(Y_i,Y_j)=(0,0)$ must be $1-p_i-p_j$, corresponding to $X\ne x_i$ and $X\ne x_j$. (The vector-valued random variable $(Y_1,Y_2,\ldots,Y_k)$ has a Multinomial$(1;p_1,p_2,\ldots,p_k)$ distribution.) The question asks for the covariances of $Y_i$ and $Y_j$ for any indexes $i$ and $j$ in $1,2,\ldots, k$.

The proof uses two ideas. Their demonstrations are simple and easy. Let $(X,Y)$ be any bivariate random variable. Suppose $(X^\prime, Y^\prime)$ is another random variable with the same distribution but is independent of $(X,Y)$. Then $$\text{Cov}(X,Y) = \frac{1}{2}\mathbb{E}((X-X^\prime)(Y-Y^\prime)).$$ To see why this is so, note that the right hand side remains unchanged when $X$ and $X^\prime$ are shifted by the same amount and also when $Y$ and $Y^\prime$ are shifted by some common amount. We may therefore apply suitable shifts to make the expectations all zero.
In this situation $$\eqalign{\text{Cov}(X,Y) &= \mathbb{E}(XY)\\ & = \frac{1}{2}\mathbb{E}(XY + X^\prime Y^\prime)\\ & = \frac{1}{2}\mathbb{E}(XY + X^\prime Y^\prime) - \frac{1}{2}\mathbb{E}(X)\mathbb{E}(Y^\prime) - \frac{1}{2}\mathbb{E}(X^\prime)\mathbb{E}(Y) \\ & = \frac{1}{2}\mathbb{E}(XY + X^\prime Y^\prime) - \frac{1}{2}\mathbb{E}(XY^\prime) - \frac{1}{2}\mathbb{E}(X^\prime Y) \\ &= \frac{1}{2}\mathbb{E}((X-X^\prime)(Y-Y^\prime)). }$$ Those extra terms like $\mathbb{E}(X)\mathbb{E}(Y^\prime)$ could be freely subtracted in the middle step because they are all zero. The equalities of the form $\mathbb{E}(X)\mathbb{E}(Y^\prime) = \mathbb{E}(X Y^\prime)$ in the following step result from the independence of $X$ and $Y^\prime$ and of $X^\prime$ and $Y$.

Where did that factor of $1/2$ go in the crayon calculation? When $(X,Y)$ has a discrete distribution with values $(x_i,y_i)$ and associated probabilities $\pi_{i}$, $$\eqalign{ \frac{1}{2}\mathbb{E}((X-X^\prime)(Y-Y^\prime)) &=\frac{1}{2}\sum_{i,j=1}^k (x_i-x_j^\prime)(y_i-y_j^\prime)\pi_{i}\pi_{j} \\ &= \frac{1}{2}\left(\sum_{i\gt j} + \sum_{i \lt j} + \sum_{i=j}\right)(x_i-x_j^\prime)(y_i-y_j^\prime)\pi_{i}\pi_{j} \\ &= \frac{1}{2}\left(2\sum_{i\gt j} (x_i-x_j^\prime)(y_i-y_j^\prime)\pi_{i}\pi_{j}\right) + \sum_{i=j} 0 \\ &= \sum_{i \gt j} (x_i-x_j^\prime)(y_i-y_j^\prime)\pi_{i}\pi_{j}. }$$ In words: the expectation averages over ordered pairs of indices, causing each non-empty rectangle to be counted twice. That's why the factor of $1/2$ is needed in the formula but does not need to be used in the crayon calculation, which counts each distinct rectangle just once.

Applying these two ideas to the bivariate $(Y_i,Y_j)$ in the question, which takes on only four possible values $(0,0),(1,0),(0,1),(1,1)$ with probabilities $1-p_i-p_j, p_i, p_j$, and $0$, gives a sum that has only one nonzero term, arising from $(1,0)$ and $(0,1)$, equal to $$\text{Cov}(Y_i,Y_j) = p_i p_j (1-0)(0-1) = -p_i p_j,$$ QED.
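As a quick numerical sanity check (not part of the proof; the probabilities are arbitrary), simulating single multinomial trials in R should reproduce $-p_ip_j$ in the off-diagonal entries of the empirical covariance matrix:

# Simulate indicator vectors of a categorical variable with k = 3 levels
set.seed(3)
p <- c(0.2, 0.3, 0.5)
Y <- t(rmultinom(2e5, size = 1, prob = p))  # each row is a one-hot draw
round(cov(Y), 4)                            # empirical covariance matrix
round(diag(p) - outer(p, p), 4)             # theory: -p_i p_j off the diagonal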
Covariance of Categorical variables
Use your crayons! That's all you need to know. The rest of this answer elaborates on it, for those who have not read the link, and then it supplies a formal demonstration of the claim in that link: c
Covariance of Categorical variables
Use your crayons!

That's all you need to know. The rest of this answer elaborates on it, for those who have not read the link, and then it supplies a formal demonstration of the claim in that link: coloring rectangles in a scatterplot really does give the correct covariance in all cases.

The figure shows two indicator variables $X$ and $Y$: $(X,Y)=(1,0)$ with probability $p_1$, $(X,Y)=(0,1)$ with probability $p_2$, and otherwise $(X,Y)=(0,0)$. The probabilities are indicated by sets of points, where a proportion $p_1$ of them all are located close to $(1,0)$ (but spread about so you can see each of them), $p_2$ are located close to $(0,1)$, and the remaining fraction $1-p_1-p_2$ around $(0,0)$.

All possible rectangles that use some two of these points have been drawn. As explained in the linked post, rectangles are positive (and drawn in red) when the points are at the upper right and lower left, and otherwise they are negative (and drawn in cyan). It is always the case that many of the rectangles cannot be seen because their width, their height, or both are zero. In the present situation, many of the rest are extremely slender because of the slight spreads of the points: they really should be invisible, too. The ones that can be seen all use one point near $(1,0)$ and one point near $(0,1)$. That makes them all negative, explaining the overall cyan cast to the picture.

Solution

A fraction $p_1$ of all rectangles have a corner at $(1,0)$. Independently of that, the proportion of those with another corner at $(0,1)$ is $p_2$. When the locations are not spread out, all such rectangles have unit width $1-0$ and unit height $1-0$ and they are negative. Therefore the covariance is $$\text{Cov}(X,Y) = p_1 p_2 (1-0)(1-0)(-1) = -p_1p_2,$$ QED.

Mathematical Proof

The question asks for a proof. To get started, let's establish the notation. Suppose $X$ is a discrete random variable that takes on the values $x_1,x_2,\ldots,x_k$ with probabilities $p_1,p_2,\ldots,p_k$, respectively. Let $Y_i$ be the indicator of $x_i$; that is, $Y_i = 1$ when $X = x_i$ and otherwise $Y_i=0$.

Let $i\ne j$. The chance that $(Y_i,Y_j)=(1,0)$, which corresponds to $X=x_i$, is $p_i$; and the chance that $(Y_i,Y_j)=(0,1)$, which corresponds to $X=x_j$, is $p_j$. Since it is impossible for $(Y_i,Y_j)=(1,1)$, the chance that $(Y_i,Y_j)=(0,0)$ must be $1-p_i-p_j$, corresponding to $X\ne x_i$ and $X\ne x_j$. (The vector-valued random variable $(Y_1,Y_2,\ldots,Y_k)$ has a Multinomial$(1;p_1,p_2,\ldots,p_k)$ distribution.) The question asks for the covariances of $Y_i$ and $Y_j$ for any indexes $i$ and $j$ in $1,2,\ldots, k$.

The proof uses two ideas. Their demonstrations are simple and easy. Let $(X,Y)$ be any bivariate random variable. Suppose $(X^\prime, Y^\prime)$ is another random variable with the same distribution but is independent of $(X,Y)$. Then $$\text{Cov}(X,Y) = \frac{1}{2}\mathbb{E}((X-X^\prime)(Y-Y^\prime)).$$ To see why this is so, note that the right hand side remains unchanged when $X$ and $X^\prime$ are shifted by the same amount and also when $Y$ and $Y^\prime$ are shifted by some common amount. We may therefore apply suitable shifts to make the expectations all zero.

In this situation $$\eqalign{\text{Cov}(X,Y) &= \mathbb{E}(XY)\\ & = \frac{1}{2}\mathbb{E}(XY + X^\prime Y^\prime)\\ & = \frac{1}{2}\mathbb{E}(XY + X^\prime Y^\prime) - \frac{1}{2}\mathbb{E}(X)\mathbb{E}(Y^\prime) - \frac{1}{2}\mathbb{E}(X^\prime)\mathbb{E}(Y) \\ & = \frac{1}{2}\mathbb{E}(XY + X^\prime Y^\prime) - \frac{1}{2}\mathbb{E}(XY^\prime) - \frac{1}{2}\mathbb{E}(X^\prime Y) \\ &= \frac{1}{2}\mathbb{E}((X-X^\prime)(Y-Y^\prime)). }$$ Those extra terms like $\mathbb{E}(X)\mathbb{E}(Y^\prime)$ could be freely subtracted in the middle step because they are all zero. The equalities of the form $\mathbb{E}(X)\mathbb{E}(Y^\prime) = \mathbb{E}(X Y^\prime)$ in the following step result from the independence of $X$ and $Y^\prime$ and of $X^\prime$ and $Y$.

Where did that factor of $1/2$ go in the crayon calculation? When $(X,Y)$ has a discrete distribution with values $(x_i,y_i)$ and associated probabilities $\pi_{i}$, $$\eqalign{ \frac{1}{2}\mathbb{E}((X-X^\prime)(Y-Y^\prime)) &=\frac{1}{2}\sum_{i,j=1}^k (x_i-x_j^\prime)(y_i-y_j^\prime)\pi_{i}\pi_{j} \\ &= \frac{1}{2}\left(\sum_{i\gt j} + \sum_{i \lt j} + \sum_{i=j}\right)(x_i-x_j^\prime)(y_i-y_j^\prime)\pi_{i}\pi_{j} \\ &= \frac{1}{2}\left(2\sum_{i\gt j} (x_i-x_j^\prime)(y_i-y_j^\prime)\pi_{i}\pi_{j}\right) + \sum_{i=j} 0 \\ &= \sum_{i \gt j} (x_i-x_j^\prime)(y_i-y_j^\prime)\pi_{i}\pi_{j}. }$$ In words: the expectation averages over ordered pairs of indices, causing each non-empty rectangle to be counted twice. That's why the factor of $1/2$ is needed in the formula but does not need to be used in the crayon calculation, which counts each distinct rectangle just once.

Applying these two ideas to the bivariate $(Y_i,Y_j)$ in the question, which takes on only four possible values $(0,0),(1,0),(0,1),(1,1)$ with probabilities $1-p_i-p_j, p_i, p_j$, and $0$, gives a sum that has only one nonzero term, arising from $(1,0)$ and $(0,1)$, equal to $$\text{Cov}(Y_i,Y_j) = p_i p_j (1-0)(0-1) = -p_i p_j,$$ QED.
Covariance of Categorical variables Use your crayons! That's all you need to know. The rest of this answer elaborates on it, for those who have not read the link, and then it supplies a formal demonstration of the claim in that link: c
53,299
Covariance of Categorical variables
Consider a single trial from a multinomial, so $n=1$. This gives a random vector $x$ with $k$ components, where $E[x_i] = p_i$. For $i \ne j$, the $(i,j)$ entry of the covariance matrix is given by

$cov(x_i, x_j) = E[(x_i - p_i)(x_j - p_j)]$
$= E[x_i x_j - p_i x_j - p_j x_i + p_i p_j]$
$= E[x_i x_j] - E[x_j]p_i - E[x_i]p_j + p_i p_j$

Since $x_i$ and $x_j$ cannot both equal 1 simultaneously (only one category occurs per trial), $E[x_i x_j]=0$. So

$cov(x_i, x_j) = 0 - p_j p_i - p_i p_j + p_i p_j = -p_i p_j$

This definitely wasn't immediately obvious to me. Thanks @whuber, @Taylor!
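The same result can also be verified exactly, without simulation, by averaging the outer products of the centered one-hot outcomes, weighted by their probabilities (the values of $p$ below are arbitrary):

# Exact covariance matrix of one multinomial trial:
# Sigma = sum_m p_m (e_m - p)(e_m - p)', where e_m is the m-th one-hot vector.
p <- c(0.2, 0.3, 0.5)
k <- length(p)
Sigma <- matrix(0, k, k)
for (m in 1:k) {
  d <- diag(k)[m, ] - p
  Sigma <- Sigma + p[m] * (d %o% d)
}
Sigma                      # off-diagonal entries are exactly -p_i * p_j
diag(p) - outer(p, p)      # closed form diag(p) - p p', identical to Sigma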
Covariance of Categorical variables
Consider a single trial from a multinomial, so $n=1$. This gives a random vector $x$ with $k$ components, where $E[x_i] = p_i$. For $i \ne j$, the $(i,j)$ entry of the covariance matrix is given by $cov(x_i, x_j) = E[
Covariance of Categorical variables
Consider a single trial from a multinomial, so $n=1$. This gives a random vector $x$ with $k$ components, where $E[x_i] = p_i$. For $i \ne j$, the $(i,j)$ entry of the covariance matrix is given by

$cov(x_i, x_j) = E[(x_i - p_i)(x_j - p_j)]$
$= E[x_i x_j - p_i x_j - p_j x_i + p_i p_j]$
$= E[x_i x_j] - E[x_j]p_i - E[x_i]p_j + p_i p_j$

Since $x_i$ and $x_j$ cannot both equal 1 simultaneously (only one category occurs per trial), $E[x_i x_j]=0$. So

$cov(x_i, x_j) = 0 - p_j p_i - p_i p_j + p_i p_j = -p_i p_j$

This definitely wasn't immediately obvious to me. Thanks @whuber, @Taylor!
Covariance of Categorical variables Consider a single trial from a multinomial, so $n=1$. This gives a random vector $x$ with $k$ components, where $E[x_i] = p_i$. For $i \ne j$, the $(i,j)$ entry of the covariance matrix is given by $cov(x_i, x_j) = E[
53,300
Covariance of Categorical variables
Please see https://arxiv.org/abs/1605.05087. This article gives a more detailed derivation of the covariance of categorical variables. The derivation shows that defining the variance of categorical variables yields the Gini index, and defining the covariance of categorical variables yields correspondence analysis. It also shows an application to natural language processing (with a word as a category). word2vec is a well-known tool in natural language processing, and the paper shows that the covariance of categorical variables (correspondence analysis) corresponds to word2vec.
Covariance of Categorical variables
Please see https://arxiv.org/abs/1605.05087. This article gives a more detailed derivation of the covariance of categorical variables. The derivation shows that defining the variance of categorical variable
Covariance of Categorical variables
Please see https://arxiv.org/abs/1605.05087. This article gives a more detailed derivation of the covariance of categorical variables. The derivation shows that defining the variance of categorical variables yields the Gini index, and defining the covariance of categorical variables yields correspondence analysis. It also shows an application to natural language processing (with a word as a category). word2vec is a well-known tool in natural language processing, and the paper shows that the covariance of categorical variables (correspondence analysis) corresponds to word2vec.
Covariance of Categorical variables Please see https://arxiv.org/abs/1605.05087. This article gives a more detailed derivation of the covariance of categorical variables. The derivation shows that defining the variance of categorical variable