53,701 | In linear regression, is there any meaning for the term $X^Ty$?
Suppose we have a linear system of $m$ equations in $\mathrm x \in \mathbb R^n$
$$\mathrm A \mathrm x = \mathrm b$$
where $\mathrm A \in \mathbb R^{m \times n}$ has full column rank, and $\mathrm b \in \mathbb R^m$. Left-multiplying both sides by $\mathrm A^T$, we obtain a linear system of $n \leq m$ equations in $\mathrm x \in \mathbb R^n$
$$\mathrm A^T \mathrm A \mathrm x = \mathrm A^T \mathrm b$$
which is usually known as "normal equations". Since $\mathrm A$ has full column rank, the square matrix $\mathrm A^T \mathrm A$ is invertible. Hence, the latter linear system has the unique solution $(\mathrm A^T \mathrm A)^{-1} \mathrm A^T \mathrm b$, whereas the original linear system, $\mathrm A \mathrm x = \mathrm b$, may not even have a solution. Note that a solution to the "normal equations" is not necessarily a solution to the original linear system.
So, what is the "meaning" of $\mathrm A^T \mathrm b$? It is a scaled projection of $\mathrm b$ onto the column space of $\mathrm A$. The dimension of the right-hand side is reduced from $m \geq n$ to $n$, so that a unique solution can be found. As the columns of $\mathrm A$ are not necessarily normalized, left-multiplication by $(\mathrm A^T \mathrm A)^{-1}$ provides the needed normalization.
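A quick numerical check (a minimal R sketch, not part of the original answer; the data and variable names are illustrative): the normal-equations solution $(\mathrm A^T \mathrm A)^{-1} \mathrm A^T \mathrm b$ coincides with the least-squares fit returned by lm.
# Minimal sketch: solve the normal equations and compare with lm()
set.seed(1)
m <- 50; n <- 3
A <- matrix(rnorm(m * n), m, n)             # full column rank with probability 1
b <- rnorm(m)
x_normal <- solve(t(A) %*% A, t(A) %*% b)   # (A^T A)^{-1} A^T b
x_lm <- coef(lm(b ~ A - 1))                 # least-squares fit, no intercept
all.equal(as.numeric(x_normal), as.numeric(x_lm))   # should be TRUE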

53,702 | What's the convergence rate in the context of convergence in probability?
I would argue that the most widely accepted definition of a convergence rate uses the "big-Oh" and "small-oh" notation.
That is, convergence in probability is written as $z_n-z=o_p(1)$, while a rate of convergence could be indicated via a statement like $z_n-z=O_p(n^{-\alpha})$, which says that $z_n-z$ remains stochastically bounded even when divided by $n^{-\alpha}$, or multiplied by $n^{\alpha}$. A leading case is $\alpha=1/2$. Hence, $z_n-z$ must vanish (converge) at rate $n^{\alpha}$.
Of course, you are right that not all convergence rates need to directly relate to $n$. In nonparametric estimation, it is for example of interest to see what happens to the bias of a density estimator as the underlying bandwidth $h\to0$.
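An illustrative R sketch of the $O_p(n^{-1/2})$ rate for the sample mean (not part of the original answer): the raw deviation $\bar x_n - \mu$ shrinks with $n$, while $\sqrt{n}(\bar x_n - \mu)$ stays stochastically bounded.
set.seed(42)
n_values <- c(100, 1000, 10000)
sapply(n_values, function(n) {
  dev <- replicate(500, mean(rnorm(n)))               # z_n - z over 500 simulated samples
  c(sd_raw = sd(dev), sd_scaled = sd(sqrt(n) * dev))  # raw vs rescaled spread
})
# sd_raw shrinks like 1/sqrt(n); sd_scaled stays near 1 for every n.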

53,703 | Should random forests based on same data but different random seeds be compared?
Random Forest converges with a growing number of trees; see the Breiman 2001 paper. So if you set the number of trees (ntree) to infinity, you would always get the same accuracy (or some other measure like logloss). It only varies a lot because your number of trees is too small (or because your resampling strategy (10-fold CV) is too unstable, which can be reduced with more repetitions).
In normal data situations (especially if the data is big enough) your accuracy should improve as the number of trees grows. So instead of training with 100 different seeds, I would train one randomForest with actual_ntree * 100 or even more.
In some packages you can also see the development of the accuracy with growing number of trees.
For faster evaluation and possibly tuning, you can use out-of-bag estimates, which are usually implemented in standard packages (like randomForest in R). They are normally as good as 10-fold CV (and more stable) if the number of trees is big enough.
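An illustrative R sketch of this point (it assumes the randomForest package and the built-in iris data; it is not part of the original answer): with few trees the OOB error jumps around across seeds, with many trees it stabilizes.
library(randomForest)
oob_error <- function(seed, ntree) {
  set.seed(seed)
  fit <- randomForest(Species ~ ., data = iris, ntree = ntree)
  fit$err.rate[ntree, "OOB"]          # OOB error after the last tree
}
sapply(1:5, oob_error, ntree = 25)    # noticeable seed-to-seed variation
sapply(1:5, oob_error, ntree = 1000)  # nearly identical across seeds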

53,704 | Should random forests based on same data but different random seeds be compared?
Accuracy is just another random variable that depends on your model, the seed, the train/test split, the quality of your current data, etc. Maximizing this random variable does not automatically lead to the best possible generalization of your model.
Besides looking at metrics like accuracy, logloss, ROC AUC, etc., you might also want to look at other learning characteristics, like the learning curves on your train/test data while adding more data, the difference between the train and test error, etc., since ultimately all you are facing is the bias-variance tradeoff that lives in every model.
See https://en.m.wikipedia.org/wiki/Bias–variance_tradeoff
To answer your question: you should not rely only on comparing the random seed.
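A hedged sketch of one such learning-curve check (it assumes the randomForest package and the built-in iris data; the split and training sizes are illustrative, not from the original answer):
library(randomForest)
set.seed(1)
test_idx <- sample(nrow(iris), 50)
train_all <- iris[-test_idx, ]
test <- iris[test_idx, ]
sizes <- c(30, 50, 70, 90)
test_error <- sapply(sizes, function(m) {
  sub <- droplevels(train_all[sample(nrow(train_all), m), ])
  fit <- randomForest(Species ~ ., data = sub, ntree = 500)
  mean(as.character(predict(fit, newdata = test)) != as.character(test$Species))
})
rbind(size = sizes, test_error = test_error)   # test error typically falls as training data grows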

53,705 | how to check missing data is missing at random or not?
Here is one way to test the missingness-at-random assumption.
Suppose the question on participant's income has some missing entries. Run a logistic regression with income as your response and everything else as predictors. Your response would be 1 if it's missing, 0 otherwise. The p-value of the predictors should give you an idea whether this MAR assumption is any good.
Do the same for all other columns with missing data.
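A hedged R sketch of this check (the data frame and variables are simulated for illustration; none of this is from the original answer):
set.seed(1)
n <- 200
df <- data.frame(age = rnorm(n, 40, 10), income = rnorm(n, 50, 10))
# Make income more likely to be missing for older respondents (MAR rather than MCAR):
df$income[runif(n) < plogis((df$age - 40) / 5)] <- NA
df$income_missing <- as.integer(is.na(df$income))
fit <- glm(income_missing ~ age, data = df, family = binomial)
summary(fit)$coefficients   # a clearly significant age coefficient argues against MCAR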
EDIT:
There is a huge literature behind this issue. I'm risking possibly misleading simplification here. See Ch 25 of this,
Gelman, A., & Hill, J. (2006). Data analysis using regression and multilevel/hierarchical models. Cambridge University Press.

53,706 | how to check missing data is missing at random or not?
A little bit of terminology:
Missing completely at random: Missingness does not depend on any observed or unobserved variables.
Missing at random: Missingness does not depend on unobserved variables, but it may depend on observed variables.
Missing not at random: Missingness depends on unobserved variables (it may or may not also depend on observed variables).
The answer given by horaceT shows a way to test whether your data is missing at random, but there is a strong assumption here: you have to assume that your data is not missing not at random (sorry for the double negative!). In other words, your null hypothesis is "missing completely at random", and the alternative hypothesis is "missing at random".
The reason for this is clear: you cannot test if missingness depends on unobserved variables, because, well, you didn't observe/measure them. This subtlety is important, because it affects how you interpret your results.

53,707 | XGBoost - Can we find a "better" objective function than RMSE for regression?
does the same logic hold true for gradient boosted trees?
Yes, by all means. Gradient boosting can be used to minimize any sensible loss function, and it is very effective in doing it.
It is worth saying that generalised linear models are generally picked considering not the loss/utility function (which answers the question: how well is my model doing / how bad are its errors), but the kind of random variable you want to model. Then, for instance, if you have a target variable which is the number of some kind of events registered in some time window, it makes sense to use a Poisson model. In case you have a rich, complex dataset, Xgboost can model a Poisson response much better than a GLM.
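For instance, a hedged sketch of a Poisson objective with the xgboost R package (simulated data; the parameter values are illustrative and not from the original answer):
library(xgboost)
set.seed(1)
n <- 1000
X <- matrix(rnorm(n * 5), n, 5)
y <- rpois(n, lambda = exp(0.5 * X[, 1] - 0.3 * X[, 2]))   # count-valued target
dtrain <- xgb.DMatrix(data = X, label = y)
fit <- xgb.train(params = list(objective = "count:poisson", max_depth = 3, eta = 0.1),
                 data = dtrain, nrounds = 200, verbose = 0)
head(predict(fit, dtrain))   # predictions on the expected-count scale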
You can define a custom objective if you wish, but does it matter?
Of course it does, but I'd like to point out that trees are perfectly non-linear (there is no constraint on the functional form) and thus a model trained with MSE loss can often do quite well even if judged with quite different score functions, even for classification tasks! However, MSE is always symmetric and when circumstances require weighting one tail more than the other (like for gamma regression, or for binary regression, when close to extremes) MSE is not optimal and does not perform as well as the most fitting loss function.
But what is it?
This depends on your goal. For ordinary regression MSE is such an appreciated choice because it models the conditional mean of the target variable, which is often the objective; it benefits from the conceptual link with the Gaussian distribution and the central limit theorem; it is fast; and it is actually quite robust. This of course doesn't mean you have to use it, it's just a good standard, but every problem is different and many times you don't want to predict the conditional mean. For instance you could need to predict the order of magnitude of some measure, in which case MSE should be applied on the logarithm of that variable, or you could have a situation where outliers are common and shouldn't affect the predictions more than other residuals, in which case MAE is a better loss. You can't list them all because there are infinitely many!

53,708 | XGBoost - Can we find a "better" objective function than RMSE for regression?
To answer your question you need to define what is "better". If your goal is to reach a smaller distance to a gamma-distributed variable measured in squared distance, then you should build your objective function in that way.
A very common problem is to find a solution that minimizes Mean Absolute Error. That cost function is different from RMSE and using a different objective function can help in minimizing the MAE.
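A tiny base-R illustration of why the two objectives differ (not from the original answer): under squared error the best constant prediction is the mean, under absolute error it is the median.
y <- c(1, 2, 3, 4, 100)                                      # one large outlier
grid <- seq(0, 100, by = 0.1)
grid[which.min(sapply(grid, function(c) mean((y - c)^2)))]   # 22, the mean of y
grid[which.min(sapply(grid, function(c) mean(abs(y - c))))]  # 3, the median of y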

53,709 | XGBoost - Can we find a "better" objective function than RMSE for regression?
Well, in the original paper of Gradient Boosting, Friedman proposes some alternatives to RMSE (or L2 loss) for the loss function in the implementation of Boosted Trees. These are Least Absolute Deviation (equivalent to MAE, as expressed in the previous answers), and M-Regression, which uses instead a Huber Loss, which might for example be better in case of outliers.
In general, you can define gradient boosted trees with any loss you want, as long as it respects some continuity constraints and you are able to compute its derivative.
It is also quite important to notice that a loss function is also used within the tree-building process, not only in the boosting step, so changes there also affect the way trees are built, and it would make sense to use the same loss for both steps.
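A hedged sketch of such a custom loss with the xgboost R package (a pseudo-Huber objective supplied through its gradient and Hessian via the obj argument of xgb.train; the data and settings are illustrative and not from the original answer):
library(xgboost)
set.seed(1)
X <- matrix(rnorm(500 * 3), 500, 3)
y <- X[, 1] + rnorm(500) + ifelse(runif(500) < 0.05, 20, 0)   # a few large outliers
dtrain <- xgb.DMatrix(data = X, label = y)
pseudo_huber <- function(preds, dtrain) {
  r <- preds - getinfo(dtrain, "label")    # residuals
  s <- sqrt(1 + r^2)                       # pseudo-Huber with delta = 1
  list(grad = r / s, hess = 1 / s^3)       # first and second derivatives of the loss
}
fit <- xgb.train(params = list(max_depth = 3, eta = 0.1), data = dtrain,
                 nrounds = 100, obj = pseudo_huber, verbose = 0)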

53,710 | Is the Jackknife estimation better than Maximum Likelihood Estimator?
You are not doing anything wrong, they are supposed to give the same number (in this simple setting):
# the true mean of y is 0
y <- rnorm(1000)
# The MLE estimator is simply the sample mean:
MLE <- mean(y)
# The Jackknife estimator is:
require(bootstrap)
JKE <- mean(jackknife(y, mean)$jack.values)
# Both estimators result in:
# 0.009662955
# Exactly the same number.
# However the estimated standard error is different:
JKE_se <- jackknife(y, mean)$jack.se
# 0.0307548
MLE_se <- sqrt(var(y) * (length(y)-1) * 1/length(y))
# 0.9725519
Oops! The estimated sd for the jackknife is way off. There is a general lesson here: provided you are willing to assume the distributional assumptions imposed by MLE, you cannot do any better. The nice thing about the jackknife, and other resampling methods, is that they do not impose such rigorous assumptions.
Of course today, with modern computers, there isn't really a reason to rely on the jackknife; instead you would almost always be better off using a simple bootstrap, because you learn something about the sampling distribution.
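A minimal sketch of that bootstrap alternative (not from the original answer):
set.seed(1)
y <- rnorm(1000)
boot_means <- replicate(2000, mean(sample(y, replace = TRUE)))
sd(boot_means)            # bootstrap SE of the mean
sd(y) / sqrt(length(y))   # the usual analytic standard error, about 1/sqrt(1000)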

53,711 | Is the Jackknife estimation better than Maximum Likelihood Estimator?
Comparing maximum likelihood estimation (MLE) and the jackknife is like comparing apples and oranges. These techniques accomplish different things.
Maximum likelihood estimation is a method for estimating parameters and fitting models. Since it's based on maximizing the likelihood function, it's a model-based algorithm.
The jackknife is a method for estimating the bias and variance of estimators. It doesn't make distributional assumptions about the data generating process, so it's nonparametric.
Let's highlight the difference between estimating a parameter and estimating the variance of that estimator. I'll use @Repmat's example.
Let $x_1,\ldots,x_n$ be independent draws from a population with mean $\mu$. The sample average $\bar{x} = \sum_{i=1}^nx_i/n$ is an estimator of the mean $\mu$.
How accurately does $\bar{x}$ estimate $\mu$? In other words, what is the variance of $\bar{x}$?
The jackknife's answer is:
$$
\begin{aligned}
\widehat{\operatorname{var}}_{\text{jack}} &=
\frac{n-1}{n}\sum_{i=1}^n\left(\bar{x}_{(i)} - \bar{x}_{(.)}\right)^2
\end{aligned}
$$
where the $\bar{x}_{(i)}$ are the jackknife values:
$$
\begin{aligned}
\bar{x}_{(i)} = \sum_{j\neq i}\frac{x_j}{n-1}
\end{aligned}
$$
and $\bar{x}_{(.)}$ is the average of the $\bar{x}_{(i)}$s.
Now let's assume further that the $x_i$s are independent draws from a Normal distribution with mean $\mu$ and variance $\sigma^2$. Since we've made a distributional assumption, we can use maximum likelihood to estimate $\mu$ and $\sigma^2$.
$$
\begin{aligned}
\hat{\mu}_{\text{MLE}} = \bar{x}, \quad \hat{\sigma}^2_{\text{MLE}} = \sum_{i=1}^n\frac{(x_i - \bar{x})^2}{n}
\end{aligned}
$$
In the Normal model, how accurate is $\bar{x}$ as an estimate of $\mu$? We start by computing the variance of $\bar{x}$ in terms of $\sigma^2$.
\begin{align}
\operatorname{var}(\bar{x})
= \sum_{i=1}^n\frac{\operatorname{var}(x_i)}{n^2}
= \frac{\sigma^2}{n}
\end{align}
And since we don't actually know $\sigma^2$, we use the plug-in principle and substitute $\hat{\sigma}^2$ for $\sigma^2$.
To sum up, we have an estimator $\bar{x}$ of the mean $\mu$ and two estimates of the variance of that estimator, one derived with the jackknife and the other with maximum likelihood. Let's calculate them on some data.
set.seed(1234)
# We simulate some normal data but the jackknife makes no distributional assumptions.
n <- 1000
x <- rnorm(n)
# The MLE estimate of the population mean is the sample mean:
xbar <- mean(x)
# The MLE estimate of the population variance:
s2 <- sum((x - xbar)^2) / n
# The plug-in estimate of the variance of the sample mean:
s2 / n
#> [1] 0.0009936879
Now use the jackknife implementation in the bootstrap package.
library("bootstrap")
# The jackknife estimate of the variance of the sample mean:
(jackknife(x, mean)$jack.se)^2
#> [1] 0.0009946825
Okay. So the jackknife has a slightly bigger estimate for the variance of $\bar{x}$. Does it mean that it's better? The MLE estimate of $\sigma^2$ is biased by a factor of $n/(n-1)$: $\widehat{\sigma}^2_{\text{MLE}}$ is a little too small on average, so when we plug it in for $\sigma^2$ we underestimate the variance of $\bar{x}$. The jackknife estimate is corrected for that bias, hence it's better in a certain sense. For large samples the difference is negligible.
PS: With some algebra you can show that the difference between the MLE and the jackknife estimates of the variance of $\bar{x}$ is a factor of $n/(n-1)$.
PPS: What @Repmat calls MLE_se is the MLE estimate $\hat{\sigma}$ of the population standard deviation $\sigma$, not an estimate of the standard error $\widehat{\operatorname{se}}$ of the sample mean $\bar{x}$.

53,712 | What does it mean to say that "the population mean is a single fixed number?"
Consider a finite population of values -- let's say you have a billion of them
The mean of that population is a single value -- you can compute it. If you recalculate it again on the same population, you use all the same values in your mean, so it's the same number every time.
By contrast if you take new samples the sample mean is different from one sample to the next, because the samples contain different individuals.
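A small R sketch of the same idea (illustrative, not from the original answer):
set.seed(7)
population <- rnorm(1e6)                      # a fixed, finite population
mean(population)                              # the same single number every time you compute it
replicate(3, mean(sample(population, 100)))   # sample means differ from sample to sample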

53,713 | What does it mean to say that "the population mean is a single fixed number?"
Here's how it finally made sense to me.
When we say something like, "human male height is normally distributed", we really mean it. We actually believe that, whoever is in charge of this thing, has a book, and on a page of that book is written something like:
Human males. Distributed like $N(70, 5)$, in units of inches.
When whoever is in charge needs a new male, they fire up their random number generator
Ok, I need a new male. (Generates sample point), ok, 72 inches.
We don't get to see the book, we just get to see a bunch of males. So the best we can do is use this bunch of males to attempt to infer what is in the book.
What is in the book is the population parameter. What we get by using statistics and data is a parameter estimate.

53,714 | How to generate the transition matrix of Markov Chain needed for Markov Chain Monte Carlo simulation?
There are several levels of confusion exposed by your question:
By reading the code of the sensitivity test procedure, I find the steps in Markov Chain is quite similar to random walk.
It would be most profitable to read further than "the code" about Markov chain Monte Carlo methods. For instance, the Wikipedia entry on the Metropolis-Hastings algorithm is pretty informative. And Charlie Geyer has a very good introduction to MCMC available online. A random walk is a special case of Markov chain with (a) symmetry constraints in the moves and (b) no stationary distribution in most cases. Markov chains produced by MCMC must have a stationary distribution, which is the distribution of interest.
Also, from my understanding of Markov Chain, a transition matrix is generally prescribed for such simulations.
Markov chain Monte Carlo methods are producing Markov chains and are justified by Markov chain theory. In discrete (finite or countable) state spaces, the Markov chains are defined by a transition matrix $(K(x,y))_{(x,y)\in\mathfrak{X}^2}$ while in general spaces the Markov chains are defined by a transition kernel.
So I'm confused whether or not MCMC needs a transition matrix:
To be implemented, MCMC requires a practical solution to the generation from the transition kernel, i.e., to be able to generate $X_{t+1}$ given $X_t$. For instance, the Metropolis-Hastings algorithm relies on an auxiliary transition kernel $Q$ and proceeds in two steps:
Generate $Y_t\sim Q(x_t,y)$ when $X_t=x_t$
Take $$X_{t+1}=\begin{cases} Y_t &\text{with probability }\alpha(x_t,y_t)\\X_t &\text{with probability }1-\alpha(x_t,y_t)\\\end{cases}$$where$$\alpha(x,y)=\frac{\pi(y)Q(y,x)}{\pi(x)Q(x,y)} \wedge 1$$
This is an implementable algorithm even though the associated transition kernel $K(x,y)$ is usually not available in closed form because the rejection probability
$$\beta(x)=\int_\mathfrak{X} \{1-\alpha(x,y)\} Q(x,\text{d}y)$$
most often cannot be derived.
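A minimal R sketch of the two-step update above (illustrative, not from the original answer), with a standard normal target $\pi$ and a symmetric Gaussian random-walk proposal $Q$, so that $Q(y,x)/Q(x,y)$ cancels in $\alpha$:
set.seed(123)
n_iter <- 5000
x <- numeric(n_iter)                        # chain starts at x[1] = 0
for (t in 1:(n_iter - 1)) {
  y <- rnorm(1, mean = x[t], sd = 1)        # step 1: propose Y_t ~ Q(x_t, .)
  alpha <- min(1, dnorm(y) / dnorm(x[t]))   # step 2: acceptance probability
  x[t + 1] <- if (runif(1) < alpha) y else x[t]
}
c(mean(x), sd(x))                           # close to 0 and 1, the moments of the target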
If so, how to generate the transition matrix of Markov Chain needed for MCMC simulation? If not, why can't such a transition matrix be generated for Markov Chain?
It is obviously possible to both be able to generate from the MCMC transition kernel (as otherwise the algorithm could not run) and be unable to produce the transition kernel in closed form.

53,715 | How to generate the transition matrix of Markov Chain needed for Markov Chain Monte Carlo simulation?
A transition matrix determines the movement of a Markov chain when the space over which the chain is defined (the state space) is finite or countable. If the Markov chain is at state $x$, element $(x,y)$ in the transition matrix is the probability of moving to $y$. For example, consider a Markov chain that has only two possible states, $\{0, 1\}$. Then the transition matrix $P(x, y)$,
$$P(x,y) = \left[\begin{array}{cc} 1/2 & 1/2\\ 1/3 & 2/3 \end{array}\right], $$
determines how the Markov chain moves. For example, $P(x=1, y=1) = Pr(y = 1|x = 1) = 2/3$.
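A small base-R sketch of this two-state example (not from the original answer):
P <- matrix(c(1/2, 1/2,
              1/3, 2/3), nrow = 2, byrow = TRUE)   # row x gives Pr(next = y | current = x)
P[2, 2]                          # Pr(y = 1 | x = 1) = 2/3
Pn <- diag(2)
for (k in 1:50) Pn <- Pn %*% P   # repeated multiplication of the transition matrix
Pn[1, ]                          # approx c(0.4, 0.6), the stationary distribution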
I am a little unfamiliar with using transition matrices, but I know you can use the R package markovchain to deal with such Markov chains. Read the vignette here for help.
My explanation below should answer your second question.
Consider the case where the Markov chain can move to infinitely many states. For example when the state space is $\mathbb{R} = (-\infty, \infty)$ or even $(0,1)$. It is then impossible to write down a transition matrix, since the number of possible states is uncountably infinite. In addition, the probability of going from one state to any other particular state is exactly 0, due to the infinite size of the state space.
Such a space for a Markov chain is called general state space. The transition of the Markov chain over such a state space is instead defined with a Markov transition kernel $P(x, A)$, where $x$ is an element, and $A$ is a measurable set in the state space.
You can find some theoretical references for general state space Markov chains here:
http://projecteuclid.org/euclid.ps/1099928648
http://www.stat.umn.edu/geyer/8112/notes/markov.pdf
https://perso.univ-rennes1.fr/dimitri.petritis/ps/markov.pdf
For MCMC, if you are in a general state space, you don't necessarily need to understand exactly what $P(x,A)$ looks like, as long as you have an algorithm that tells you exactly how the Markov chain moves from one step to the next. One of the most common MCMC techniques is the Metropolis-Hastings algorithm.

53,716 | How to generate the transition matrix of Markov Chain needed for Markov Chain Monte Carlo simulation?
Every Markov Chain can be viewed as a random walk.
If you're implementing MCMC, you don't need to explicitly specify or know the transition function (matrix); as Greenparker suggested, the M-H algorithm is a common technique, which allows you to achieve the stationary distribution.
If you know and prescribe the transition matrix beforehand, then you could just keep doing multiplication till convergence; I don't think MCMC would be necessary in this scenario.

53,717 | Objective function of canonical correlation analysis (CCA)
If $X$ is $n\times p$ and $Y$ is $n\times q$, then one can formulate the CCA optimization problem for the first canonical pair as follows:
$$\text{Maximize }\operatorname{corr}(Xa, Yb).$$
The value of the correlation does not depend on the lengths of $a$ and $b$, so they can be arbitrarily fixed. It is convenient to fix them such that the projections have unit variances:
$$\text{Maximize }\operatorname{corr}(Xa, Yb) \text{ subject to } a^\top \Sigma_X a=1 \text{ and } b^\top \Sigma_Yb=1,$$
because then the correlation equals the covariance:
$$\text{Maximize } a^\top \Sigma_{XY}b \text{ subject to } a^\top \Sigma_X a=1 \text{ and } b^\top \Sigma_Yb=1,$$
where $\Sigma_{XY}$ is the cross-covariance matrix given by $X^\top Y/n$.
We can now generalize it to more than one dimension as follows:
$$\text{Maximize }\operatorname{tr}(A^\top \Sigma_{XY}B) \text{ subject to } A^\top \Sigma_X A=I \text{ and } B^\top \Sigma_Y B=I,$$
where the trace forms precisely the sum over successive canonical correlation coefficients, as you hypothesized in your question. You only had the constraints on $A$ and $B$ wrong.
The standard way to solve the CCA problem is to define substitutions $\tilde A = \Sigma_X^{1/2} A$ and $\tilde B = \Sigma_Y^{1/2} B$ (conceptually this is equivalent to whitening both $X$ and $Y$), obtaining
$$\text{Maximize }\operatorname{tr}(\tilde A^\top \Sigma_X^{-1/2} \Sigma_{XY}\Sigma_Y^{-1/2} \tilde B) \text{ subject to } \tilde A^\top \tilde A=I \text{ and } \tilde B^\top \tilde B=I.$$
This is now easy to solve because of the orthogonality constraints; the solution is given by left and right singular vectors of $\Sigma_X^{-1/2} \Sigma_{XY}\Sigma_Y^{-1/2}$ (that can then easily be back-transformed to $A$ and $B$ without tildes).
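An illustrative R sketch of this SVD recipe (not from the original answer; the data are simulated), checked against the built-in cancor():
set.seed(1)
n <- 500
X <- matrix(rnorm(n * 3), n, 3)
Y <- X[, 1:2] + matrix(rnorm(n * 2), n, 2)
Xc <- scale(X, scale = FALSE); Yc <- scale(Y, scale = FALSE)   # center both blocks
Sx <- crossprod(Xc) / n; Sy <- crossprod(Yc) / n; Sxy <- crossprod(Xc, Yc) / n
inv_sqrt <- function(S) {   # symmetric inverse square root via eigendecomposition
  e <- eigen(S)
  e$vectors %*% diag(1 / sqrt(e$values)) %*% t(e$vectors)
}
sv <- svd(inv_sqrt(Sx) %*% Sxy %*% inv_sqrt(Sy))
sv$d               # canonical correlations
cancor(X, Y)$cor   # should agree up to numerical error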
Relationship to reduced-rank regression
CCA can be formulated as a reduced-rank regression problem. Namely, $A$ and $B$ corresponding to the first $k$ canonical pairs minimize the following cost function:
$$\Big\|(Y-XAB^\top)\Sigma_Y^{-1/2}\Big\|^2 =
\Big\|Y\Sigma_Y^{-1/2}-XAB^\top\Sigma_Y^{-1/2}\Big\|^2.$$
See e.g. Torre, 2009, Least Squares Framework for Component Analysis, page 6 (but the text is quite dense and might be a bit hard to follow). This is called reduced-rank regression because the matrix of regression coefficients $AB^\top\Sigma_Y^{-1/2}$ is of low rank $k$.
In contrast, standard OLS regression minimizes
$$\|Y-XV\|^2$$
without any rank constraint on $V$. The solution $V_\mathrm{OLS}$ will generally be full rank, i.e. rank $\min(p,q)$.
Even in the $k=p=q$ situation there still remains one crucial difference: for CCA one needs to whiten dependent variables $Y$ by replacing it with $Y\Sigma_Y^{-1/2}$. This is because regression tries to explain as much variance in $Y$ as possible, whereas CCA does not care about the variance at all, it only cares about correlation. If $Y$ is whitened, then its variance in all directions is the same, and the regression loss function starts maximizing the correlation.
(I think there is no way to obtain $A$ and $B$ from $V_\mathrm{OLS}$.)
53,718 | Why is this reviewer's comment funny: "Unless my statistics is failing me, a less than 1.0 SD is not significant"? | The reviewer is apparently trying to use the standard deviation as some sort of ad-hoc statistical test, but this doesn't work. The reviewer's snarky "unless my statistics is failing me" comment is therefore funny (or maddening, if this gets your paper rejected).
Specifically, standard deviation tells us how spread out the values are around the mean value. However, in most hypothesis-testing situations, we are interested in determining the means of each group and whether these means differ. To do this, we need to determine how accurately we know the mean of each group, and the relevant statistic here is the standard error of the mean, not the standard deviation. (These are related, in that $\textrm{se}_\textrm{mean} = \frac{s}{\sqrt{n}}$, where $s$ is the sample standard deviation and $n$ is the number of samples).
In a $t$-test, the denominator is often the standard error of the mean (or something like it for two-sample t-tests).
It's easy to simulate data showing that it is possible to find significant differences between groups that have a standard deviation of at least one. For example,
a = rnorm(100, 0, 1) # Draws 100 random points from N(0,1)
b = rnorm(100, 1, 1) # Draws 100 random points from N(1,1)
t.test(a, b)
# Welch Two Sample t-test
#
# data: a and b
# t = -8.0116, df = 190.746, p-value = 1.097e-13
# alternative hypothesis: true difference in means is not equal to 0
# 95 percent confidence interval:
# -1.4188010 -0.8581969
# sample estimates:
# mean of x mean of y
# -0.1154137 1.0230852
Similarly, you can imagine a scenario where each group's standard error of the mean is much larger than one, yet a $t$-test finds a significant difference. For example, suppose the means were -100 and 100: with enough data, we'd be able to tell them apart even if the standard deviation were large (e.g., $\sigma=30$ for $n=100$).
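As a quick check of that scenario (my addition, not part of the original answer):
set.seed(2)
g1 <- rnorm(100, -100, 30)    # group 1: mean -100, sd 30
g2 <- rnorm(100, 100, 30)     # group 2: mean +100, sd 30
t.test(g1, g2)                # enormous t statistic, vanishingly small p-value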
In summary, the reviewer has totally failed at statistics, which makes his/her snarky comment funny. | Why is this reviewer's comment funny: "Unless my statistics is failing me, a less than 1.0 SD is not | The reviewer is apparently trying to use the standard deviation as some sort of ad-hoc statistical test, but this doesn't work. The reviewer's snarky "unless my statistics is failing me" comment is th | Why is this reviewer's comment funny: "Unless my statistics is failing me, a less than 1.0 SD is not significant"?
The reviewer is apparently trying to use the standard deviation as some sort of ad-hoc statistical test, but this doesn't work. The reviewer's snarky "unless my statistics is failing me" comment is therefore funny (or maddening, if this gets your paper rejected).
Specifically, standard deviation tells us how spread out the values are around the mean value. However, in most hypothesis-testing situations, we are interested in determining the means of each group and whether these means differ. To do this, we need to determine how accurately we know the mean of each group, and the relevant statistic here is the standard error of the mean, not the standard deviation. (These are related, in that $\textrm{se}_\textrm{mean} = \frac{s}{\sqrt{n}}$, where $s$ is the sample standard deviation and $n$ is the number of samples).
In a $t$-test, the denominator is often the standard error of the mean (or something like it for two-sample t-tests).
It's easy to simulate data showing that it is possible to find significant differences between groups that have a standard deviation of at least one. For example,
a = rnorm(100, 0, 1) # Draws 100 random points from N(0,1)
b = rnorm(100, 1, 1) # Draws 100 random points from N(1,1)
t.test(a, b)
# Welch Two Sample t-test
#
# data: a and b
# t = -8.0116, df = 190.746, p-value = 1.097e-13
# alternative hypothesis: true difference in means is not equal to 0
# 95 percent confidence interval:
# -1.4188010 -0.8581969
# sample estimates:
# mean of x mean of y
# -0.1154137 1.0230852
Similarly, you can imagine a scenario where each group's standard error of the mean is much larger than one, yet a $t$-test finds a significant difference. For example, suppose the means were -100 and 100: with enough data, we'd be able to tell them apart even if the standard deviation were large (e.g., $\sigma=30$ for $n=100$).
In summary, the reviewer has totally failed at statistics, which makes his/her snarky comment funny. | Why is this reviewer's comment funny: "Unless my statistics is failing me, a less than 1.0 SD is not
The reviewer is apparently trying to use the standard deviation as some sort of ad-hoc statistical test, but this doesn't work. The reviewer's snarky "unless my statistics is failing me" comment is th |
53,719 | Why is this reviewer's comment funny: "Unless my statistics is failing me, a less than 1.0 SD is not significant"? | It's unclear without more information. Here's an example where the standard deviation is less than 1.0, but the test is significant:
# draw 100 samples from two random Normal distributions with small standard deviations,
# with different means:
x <- rnorm(100, 1, .5)
x2 <- rnorm(100, 3, .5)
# note the standard deviation of the difference in means that will be tested:
sd(x-x2)
# here's the significant result:
t.test(x, x2)
Does that help? | Why is this reviewer's comment funny: "Unless my statistics is failing me, a less than 1.0 SD is not | It's unclear without more information. Here's an example where the standard deviation is less than 1.0, but the test is significant:
# draw 100 samples from two random Normal distributions with small | Why is this reviewer's comment funny: "Unless my statistics is failing me, a less than 1.0 SD is not significant"?
It's unclear without more information. Here's an example where the standard deviation is less than 1.0, but the test is significant:
# draw 100 samples from two random Normal distributions with small standard deviations,
# with different means:
x <- rnorm(100, 1, .5)
x2 <- rnorm(100, 3, .5)
# note the standard deviation of the difference in means that will be tested:
sd(x-x2)
# here's the significant result:
t.test(x, x2)
Does that help? | Why is this reviewer's comment funny: "Unless my statistics is failing me, a less than 1.0 SD is not
It's unclear without more information. Here's an example where the standard deviation is less than 1.0, but the test is significant:
# draw 100 samples from two random Normal distributions with small |
53,720 | Why is this reviewer's comment funny: "Unless my statistics is failing me, a less than 1.0 SD is not significant"? | Most likely the reviewer means that your parameter is closer than one standard deviation to the null hypothesis value. For instance, your regression slope coefficient is 0.5 while its standard deviation is 1. I emphasize that I assume the reviewer is using the correct standard deviation, i.e. adjusted to the sample size and so on. In this case, if you're testing whether there is a slope in the regression, you compare 0.5 to 0 and observe that it's closer than the standard deviation.
For instance, consider a gamma distribution with parameters $\alpha=0.1$ and $\beta=10000$; its mean is 1000 and $\sigma=3162$. If you test the distance between the mean and 1e-10 then the one-tailed test would give you 5% significance. It's a very skewed distribution.
Can you construct an example where the p-value is significant while the distance is less than the standard deviation? Yes, of course. However, in most regressions this simple heuristic based on the standard deviations works fine. Moreover, to determine significance, you usually have to assume a probability distribution, while a simple comparison to the standard deviation does not require any distributional assumptions. Therefore, I'd side with your reviewer in this case, unless you have reasons to believe that she can't calculate the appropriate standard deviation of the parameter, which is unlikely (why would a reviewer be incompetent?).
Also, in applied work folks (like myself) speak in terms of standard deviations all the time. For instance, when talking about the accuracy of an instrument or economic significance of the coefficient. It's a very good measure of the dispersion. | Why is this reviewer's comment funny: "Unless my statistics is failing me, a less than 1.0 SD is not | Most likely the reviewer means that your parameter is closer than one standard deviation to the null hypothesis value. For instance, your regression slope coefficient is 0.5 while its standard deviati | Why is this reviewer's comment funny: "Unless my statistics is failing me, a less than 1.0 SD is not significant"?
Most likely the reviewer means that your parameter is closer than one standard deviation to the null hypothesis value. For instance, your regression slope coefficient is 0.5 while its standard deviation is 1. I emphasize that I assume the reviewer is using the correct standard deviation, i.e. adjusted to the sample size and so on. In this case, if you're testing whether there is a slope in the regression, you compare 0.5 to 0 and observe that it's closer than the standard deviation.
For instance, consider a gamma distribution with parameters $\alpha=0.1$ and $\beta=10000$; its mean is 1000 and $\sigma=3162$. If you test the distance between the mean and 1e-10 then the one-tailed test would give you 5% significance. It's a very skewed distribution.
Can you construct an example where the p-value is significant while the distance is less than the standard deviation? Yes, of course. However, in most regressions this simple heuristic based on the standard deviations works fine. Moreover, to determine significance, you usually have to assume a probability distribution, while a simple comparison to the standard deviation does not require any distributional assumptions. Therefore, I'd side with your reviewer in this case, unless you have reasons to believe that she can't calculate the appropriate standard deviation of the parameter, which is unlikely (why would a reviewer be incompetent?).
Also, in applied work folks (like myself) speak in terms of standard deviations all the time. For instance, when talking about the accuracy of an instrument or economic significance of the coefficient. It's a very good measure of the dispersion. | Why is this reviewer's comment funny: "Unless my statistics is failing me, a less than 1.0 SD is not
Most likely the reviewer means that your parameter is closer than one standard deviation to the null hypothesis value. For instance, your regression slope coefficient is 0.5 while its standard deviati |
53,721 | How to test H0: "this sample is drawn from a gamma distribution" against HA: "this sample is drawn from two different gamma distributions" | Sorry it took me a while to post this answer.
To re-phrase the question again; we are given the sample $X_1,...,X_n$ where $X_i \sim p \Gamma(\alpha_1, \beta_1) + (1-p) \Gamma(\alpha_2, \beta_2)$. The hypothesis test of interest is
$$
H_0:\; p=1\;\;\;\;\;H_1:\; p \in (0,1)
$$
A possible issue with using a likelihood ratio test in this setting is that the null hypothesis, $p=1$, lies on the boundary of the parameter space $p \in (0,1]$*, causing the asymptotic Chi-squared distribution to no longer hold. This is a well-researched topic and there are many papers discussing this issue and proposing new, more robust, hypothesis tests. See here, here, here, and here for a few examples (google "likelihood ratio test boundary of parameter space" and many more results will pop up). The literature on this topic is very abstruse, but it still may be worth your time if the problem is important to you.
*In fact, the hypotheses are technically not even nested, since the true parameter space for a two-state mixture model is $p \in (0,1)$ (if $p$ were allowed to be exactly 1, then $\alpha_2$ and $\beta_2$ could change value without affecting the likelihood, causing an identification problem).
The purpose of my answer is to propose 2 alternative testing techniques which, although maybe not exactly what you had in mind, are easier to understand and circumvent the boundary issue.
1. Bayesian Model Comparison
This is not hypothesis testing in the way most people think about it. However, it can be used to accomplish the same objectives and many practitioners actually prefer this technique over traditional hypothesis testing, especially for problems related to the one you're asking about.
Instead of obtaining a p-value, which tells you the probability of obtaining data that is "as extreme or more extreme" than that observed conditional on the null hypothesis, Bayesian model comparison uses the posterior probability of the hypothesis/model, $P(H_i|X)$.
The nice thing about Bayesian model comparison is that $P(H_i|X)$ is much easier to interpret than a p-value, it is simply the probability that the hypothesis/model $H_i$ is true given the data. The bad thing about this approach is that estimating the exact value of $P(H_i|X)$ can be extremely computationally burdensome and in some cases virtually impossible.
Fortunately, you can approximate $P(H_i|X)$ with the Bayesian Information Criterion (BIC). In the case of just two models, $H_0$ and $H_1$, the posterior probabilities can be approximated as
$$
P(H_0|X) \approx \bigg[1+\exp\bigg(\frac{B_0 -B_1}{2}\bigg)\bigg]^{-1}\;\;\;\;\;P(H_1|X)=1-P(H_0|X)
$$
where $B_0$ is the BIC of the model estimated under $H_0$ (a single gamma) and $B_1$ is the BIC of the model estimated under $H_1$ (the 2 gamma mixture).
The BIC approximation is an asymptotic one, meaning that it gets better as the amount of data increases and may be very unreliable when the number of observations is close to the number of parameters. See this question and answer and the references cited therein for more information on this technique and its assumptions.
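As a small aside (my own sketch, not part of the original answer), the approximation above is easy to code up once you have the two maximised log-likelihoods; k0 and k1 are the parameter counts (2 for the single gamma, 5 for the mixture), n is the sample size, and the example call at the end uses made-up log-likelihood values purely for illustration.
bic_posterior <- function(loglik0, loglik1, k0, k1, n) {
  B0 <- -2 * loglik0 + k0 * log(n)       # BIC of the single-gamma model
  B1 <- -2 * loglik1 + k1 * log(n)       # BIC of the two-gamma mixture
  pH0 <- 1 / (1 + exp((B0 - B1) / 2))
  c(P_H0 = pH0, P_H1 = 1 - pH0)
}
bic_posterior(loglik0 = -520.3, loglik1 = -515.8, k0 = 2, k1 = 5, n = 200)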
2. Simulate The Null Distribution
Where $\Theta$ is the parameter space, $\Theta_0 \subset \Theta$ is the parameter space under the null hypothesis, and $\hat \theta$ are the parameter estimates; the distribution under the null hypothesis is $f(\hat \theta | \theta \in \Theta_0)$.
The basic idea here is that if we can simulate draws from $f(\hat \theta | \theta \in \Theta_0)$ we can empirically estimate a p-value.
For this to work here, you have to make a more restrictive null hypothesis than the one given above, specifically
$$
H_0:\; p=1,\;\; \alpha_1=\alpha_0,\;\; \beta_1=\beta_0
$$
You would proceed with the simulation as follows
draw a sample of size $n$ from $\Gamma(\alpha_0,\beta_0)$
fit the sample with the mixture model $p \Gamma(\alpha_1, \beta_1) + (1-p) \Gamma(\alpha_2, \beta_2)$ and collect the point estimate $\hat p$
Repeat the above 2 steps until you have collected a large sample of estimates $\hat p_1,...,\hat p_G$
Once you fit the mixture model to your original data $X_1,...,X_n$ and collect the resulting point estimate $\hat p_{*}$, the p-value can be approximated as:
$$
Pr(\hat p_{*}|p=1)=\int_0^{\hat p_{*}} f(t|p=1)dt \approx \frac{1}{G}\sum_{g=1}^G I(\hat p_{*}>\hat p_g)
$$
The term on the rightmost side is called the Empirical Distribution Function. $I()$ is an indicator function that is equal to 1 when the statement inside it is true and zero otherwise.
Notice that if you were to replace $\hat p_{*}$ with $\hat p_{g}$ there is a $(1-\alpha)$% chance of falsely rejecting the null hypothesis for $\hat p_{g}$, reflecting a proper type one error rate.
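A rough R sketch of this simulation (my own, not from the original answer) is below. The mixture is fit by direct maximisation with optim(), which is crude and sensitive to starting values, but keeps the sketch self-contained; x is assumed to hold your data, and alpha0/beta0 are placeholders for the values fixed by the restricted null.
fit_p_hat <- function(z) {                       # crude mixture fit, returns p-hat
  nll <- function(par) {
    p  <- plogis(par[1])
    a1 <- exp(par[2]); b1 <- exp(par[3])
    a2 <- exp(par[4]); b2 <- exp(par[5])
    -sum(log(p * dgamma(z, a1, b1) + (1 - p) * dgamma(z, a2, b2)))
  }
  plogis(optim(c(0, 0, 0, 1, 1), nll)$par[1])
}
G <- 500; n <- length(x)
alpha0 <- 2; beta0 <- 1                          # placeholders for the H0 values
p_null <- replicate(G, fit_p_hat(rgamma(n, alpha0, beta0)))
p_star <- fit_p_hat(x)
mean(p_star > p_null)                            # empirical p-value from the EDF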
The obvious downsides of this technique are its computational burden and restriction of the null hypothesis. This method is crude when compared to the techniques proposed in the references above. | How to test H0: "this sample is drawn from a gamma distribution" against HA: "this sample is drawn f | Sorry it took me a while to post this answer.
To re-phrase the question again; we are given the sample $X_1,...,X_n$ where $X_i \sim p \Gamma(\alpha_1, \beta_1) + (1-p) \Gamma(\alpha_2, \beta_2)$. The | How to test H0: "this sample is drawn from a gamma distribution" against HA: "this sample is drawn from two different gamma distributions"
Sorry it took me a while to post this answer.
To re-phrase the question again; we are given the sample $X_1,...,X_n$ where $X_i \sim p \Gamma(\alpha_1, \beta_1) + (1-p) \Gamma(\alpha_2, \beta_2)$. The hypothesis test of interest is
$$
H_0:\; p=1\;\;\;\;\;H_1:\; p \in (0,1)
$$
A possible issue with using a likelihood ratio test in this setting is that the null hypothesis, $p=1$, lies on the boundary of the parameter space $p \in (0,1]$*, causing the asymptotic Chi-squared distribution to no longer hold. This is a well-researched topic and there are many papers discussing this issue and proposing new, more robust, hypothesis tests. See here, here, here, and here for a few examples (google "likelihood ratio test boundary of parameter space" and many more results will pop up). The literature on this topic is very abstruse, but it still may be worth your time if the problem is important to you.
*In fact, the hypotheses are technically not even nested, since the true parameter space for a two-state mixture model is $p \in (0,1)$ (if $p$ were allowed to be exactly 1, then $\alpha_2$ and $\beta_2$ could change value without affecting the likelihood, causing an identification problem).
The purpose of my answer is to propose 2 alternative testing techniques which, although maybe not exactly what you had in mind, are easier to understand and circumvent the boundary issue.
1. Bayesian Model Comparison
This is not hypothesis testing in the way most people think about it. However, it can be used to accomplish the same objectives and many practitioners actually prefer this technique over traditional hypothesis testing, especially for problems related to the one you're asking about.
Instead of obtaining a p-value, which tells you the probability of obtaining data that is "as extreme or more extreme" than that observed conditional on the null hypothesis, Bayesian model comparison uses the posterior probability of the hypothesis/model, $P(H_i|X)$.
The nice thing about Bayesian model comparison is that $P(H_i|X)$ is much easier to interpret than a p-value, it is simply the probability that the hypothesis/model $H_i$ is true given the data. The bad thing about this approach is that estimating the exact value of $P(H_i|X)$ can be extremely computationally burdensome and in some cases virtually impossible.
Fortunately, you can approximate $P(H_i|X)$ with the Bayesian Information Criterion (BIC). In the case of just two models, $H_0$ and $H_1$, the posterior probabilities can be approximated as
$$
P(H_0|X) \approx \bigg[1+\exp\bigg(\frac{B_0 -B_1}{2}\bigg)\bigg]^{-1}\;\;\;\;\;P(H_1|X)=1-P(H_0|X)
$$
where $B_0$ is the BIC of the model estimated under $H_0$ (a single gamma) and $B_1$ is the BIC of the model estimated under $H_1$ (the 2 gamma mixture).
The BIC approximation is an asymptotic one, meaning that it gets better as the amount of data increases and may be very unreliable when the number of observations is close to the number of parameters. See this question and answer and the references cited therein for more information on this technique and its assumptions.
2. Simulate The Null Distribution
Where $\Theta$ is the parameter space, $\Theta_0 \subset \Theta$ is the parameter space under the null hypothesis, and $\hat \theta$ are the parameter estimates; the distribution under the null hypothesis is $f(\hat \theta | \theta \in \Theta_0)$.
The basic idea here is that if we can simulate draws from $f(\hat \theta | \theta \in \Theta_0)$ we can empirically estimate a p-value.
For this to work here, you have to make a more restrictive null hypothesis than the one given above, specifically
$$
H_0:\; p=1,\;\; \alpha_1=\alpha_0,\;\; \beta_1=\beta_0
$$
You would proceed with the simulation as follows
draw a sample of size $n$ from $\Gamma(\alpha_0,\beta_0)$
fit the sample with the mixture model $p \Gamma(\alpha_1, \beta_1) + (1-p) \Gamma(\alpha_2, \beta_2)$ and collect the point estimate $\hat p$
Repeat the above 2 steps until you have collected a large sample of estimates $\hat p_1,...,\hat p_G$
Once you fit the mixture model to your original data $X_1,...,X_n$ and collect the resulting point estimate $\hat p_{*}$, the p-value can be approximated as:
$$
Pr(\hat p_{*}|p=1)=\int_0^{\hat p_{*}} f(t|p=1)dt \approx \frac{1}{G}\sum_{g=1}^G I(\hat p_{*}>\hat p_g)
$$
The term on the rightmost side is called the Empirical Distribution Function. $I()$ is an indicator function that is equal to 1 when the statement inside it is true and zero otherwise.
Notice that if you were to replace $\hat p_{*}$ with $\hat p_{g}$ there is a $(1-\alpha)$% chance of falsely rejecting the null hypothesis for $\hat p_{g}$, reflecting a proper type one error rate.
The obvious downsides of this technique are its computational burden and restriction of the null hypothesis. This method is crude when compared to the techniques proposed in the references above. | How to test H0: "this sample is drawn from a gamma distribution" against HA: "this sample is drawn f
Sorry it took me a while to post this answer.
To re-phrase the question again; we are given the sample $X_1,...,X_n$ where $X_i \sim p \Gamma(\alpha_1, \beta_1) + (1-p) \Gamma(\alpha_2, \beta_2)$. The |
53,722 | How to test H0: "this sample is drawn from a gamma distribution" against HA: "this sample is drawn from two different gamma distributions" | We can phrase this problem as follows, following Zachary's suggestion:
We are given a set of sampled points $X = \{x_1, \dots, x_n\}$.
We assume that $X \sim p \Gamma(\alpha_1, \beta_1) + (1-p) \Gamma(\alpha_2, \beta_2)$, where $p \in (0, 1]$ is the mixture weight of the first component. Denote the parameters of the mixture by $\theta = (p, \alpha_1, \beta_1, \alpha_2, \beta_2)$, which is in the space of possible parameters $\Theta$.
Our null hypothesis is $H_0: \theta \in \Theta_0$, where $\Theta_0$ is the set of parameters with $p = 1$. The alternative is $H_1: \theta \not\in \Theta_0$.
This fits the setup for a composite-hypothesis likelihood ratio test. That is, we want to find the likelihood ratio
$$
\Lambda(X) = \frac{\sup_{\theta \in \Theta_0} L(\theta; X)}{\sup_{\theta \in \Theta} L(\theta; X)}
.$$
Let's call the numerator $\lambda_0$ and the denominator $\lambda$ for convenience.
Note that $\lambda \ge \lambda_0$: any value that $L(\theta; X)$ can take on with $\theta \in \Theta_0$ is also a valid result for $\theta \in \Theta$. So the mixture model will necessarily fit the data better; the question is, how much better?
Well, it turns out that as $n \to \infty$, $-2\log\Lambda$ becomes $\chi^2$-distributed under $H_0$, with degrees of freedom $\dim \Theta - \dim \Theta_0 = 5 - 2 = 3$.1 So we can compute $-2 \log \Lambda$, compare to the (say) 95th percentile of the $\chi^2(3)$ distribution (let's call that threshold $c$), and then reject if $-2 \log \Lambda > c$ (i.e. the mixture model is much more likely than the simple model).
Now, how to compute $\Lambda(X)$?
First, the numerator $\lambda_0$. $\theta \in \Theta_0$ corresponds to a single gamma distribution, so $\sup_{\theta \in \Theta_0} p(X; \theta)$ is the likelihood of the maximum likelihood estimator for the gamma distribution. This is easy enough to find numerically to arbitrary precision.
Next, the denominator $\lambda$. Same thing, but for mixtures of two gammas. This is a harder problem; you should be able to get pretty good results with, say, EM, or there are some variants specifically for mixtures of gammas2, but you can't be sure that you got the best one. Instead, you get some $\hat{\lambda} \le \lambda$, and who knows how big that gap is.
Luckily, though, we don't actually need to find the best $\theta \in \Theta$ to be confident we should reject $H_0$. If we have a $\hat\lambda$ such that $-2 \log \frac{\lambda_0}{\hat \lambda} > c$, then since $\hat\lambda \le \lambda$ we know that $-2 \log \frac{\lambda_0}{\lambda} > c$. Not finding the best mixture reduces the power of our test but doesn't increase the probability of a false rejection.
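Here is a rough R sketch of the whole procedure (my own illustration, not from the answer): the numerator via MASS::fitdistr(), the denominator via a direct optim() maximisation of the mixture likelihood (so treat it as a lower bound, as discussed), and the comparison with the chi-squared(3) critical value; the data are assumed to be in a positive numeric vector x.
library(MASS)                                        # for fitdistr()
fit0    <- fitdistr(x, "gamma")                      # single-gamma MLE (numerator)
loglik0 <- fit0$loglik
negll_mix <- function(par) {                         # two-gamma mixture, 5 parameters
  p  <- plogis(par[1])
  a1 <- exp(par[2]); b1 <- exp(par[3])
  a2 <- exp(par[4]); b2 <- exp(par[5])
  -sum(log(p * dgamma(x, a1, b1) + (1 - p) * dgamma(x, a2, b2)))
}
st <- c(0, log(fit0$estimate["shape"] * 0.8), log(fit0$estimate["rate"]),
           log(fit0$estimate["shape"] * 1.2), log(fit0$estimate["rate"]))
loglik1 <- -optim(st, negll_mix)$value               # at least loglik0 in principle
stat <- -2 * (loglik0 - loglik1)                     # -2 log Lambda (approximate)
c(statistic = stat, critical = qchisq(0.95, df = 3),
  p.value = pchisq(stat, df = 3, lower.tail = FALSE))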
1. As I've written it, $\dim \Theta_0$ is actually 4, but clearly the model under $H_0$ should be two-dimensional. We could alternatively have defined $\Theta$ such that if $p = 1$, necessarily $\alpha_2 = \beta_2 = 0$, or something along those lines.
2. Schwander and Nielsen. Fast learning of Gamma mixture models with k-MLE. SIMBAD 2013. (doi) (author's site) | How to test H0: "this sample is drawn from a gamma distribution" against HA: "this sample is drawn f | We can phrase this problem as follows, following Zachary's suggestion:
We are given a set of sampled points $X = \{x_1, \dots, x_n\}$.
We assume that $X \sim p \Gamma(\alpha_1, \beta_1) + (1-p) \Gamm | How to test H0: "this sample is drawn from a gamma distribution" against HA: "this sample is drawn from two different gamma distributions"
We can phrase this problem as follows, following Zachary's suggestion:
We are given a set of sampled points $X = \{x_1, \dots, x_n\}$.
We assume that $X \sim p \Gamma(\alpha_1, \beta_1) + (1-p) \Gamma(\alpha_2, \beta_2)$, where $p \in (0, 1]$ is the mixture weight of the first component. Denote the parameters of the mixture by $\theta = (p, \alpha_1, \beta_1, \alpha_2, \beta_2)$, which is in the space of possible parameters $\Theta$.
Our null hypothesis is $H_0: \theta \in \Theta_0$, where $\Theta_0$ is the set of parameters with $p = 1$. The alternative is $H_1: \theta \not\in \Theta_0$.
This fits the setup for a composite-hypothesis likelihood ratio test. That is, we want to find the likelihood ratio
$$
\Lambda(X) = \frac{\sup_{\theta \in \Theta_0} L(\theta; X)}{\sup_{\theta \in \Theta} L(\theta; X)}
.$$
Let's call the numerator $\lambda_0$ and the denominator $\lambda$ for convenience.
Note that $\lambda \ge \lambda_0$: any value that $L(\theta; X)$ can take on with $\theta \in \Theta_0$ is also a valid result for $\theta \in \Theta$. So the mixture model will necessarily fit the data better; the question is, how much better?
Well, it turns out that as $n \to \infty$, $-2\log\Lambda$ becomes $\chi^2$-distributed under $H_0$, with degrees of freedom $\dim \Theta - \dim \Theta_0 = 5 - 2 = 3$.1 So we can compute $-2 \log \Lambda$, compare to the (say) 95th percentile of the $\chi^2(3)$ distribution (let's call that threshold $c$), and then reject if $-2 \log \Lambda > c$ (i.e. the mixture model is much more likely than the simple model).
Now, how to compute $\Lambda(X)$?
First, the numerator $\lambda_0$. $\theta \in \Theta_0$ corresponds to a single gamma distribution, so $\sup_{\theta \in \Theta_0} p(X; \theta)$ is the likelihood of the maximum likelihood estimator for the gamma distribution. This is easy enough to find numerically to arbitrary precision.
Next, the denominator $\lambda$. Same thing, but for mixtures of two gammas. This is a harder problem; you should be able to get pretty good results with, say, EM, or there are some variants specifically for mixtures of gammas2, but you can't be sure that you got the best one. Instead, you get some $\hat{\lambda} \le \lambda$, and who knows how big that gap is.
Luckily, though, we don't actually need to find the best $\theta \in \Theta$ to be confident we should reject $H_0$. If we have a $\hat\lambda$ such that $-2 \log \frac{\lambda_0}{\hat \lambda} > c$, then since $\hat\lambda \le \lambda$ we know that $-2 \log \frac{\lambda_0}{\lambda} > c$. Not finding the best mixture reduces the power of our test but doesn't increase the probability of a false rejection.
1. As I've written it, $\dim \Theta_0$ is actually 4, but clearly the model under $H_0$ should be two-dimensional. We could alternatively have defined $\Theta$ such that if $p = 1$, necessarily $\alpha_2 = \beta_2 = 0$, or something along those lines.
2. Schwander and Nielsen. Fast learning of Gamma mixture models with k-MLE. SIMBAD 2013. (doi) (author's site) | How to test H0: "this sample is drawn from a gamma distribution" against HA: "this sample is drawn f
We can phrase this problem as follows, following Zachary's suggestion:
We are given a set of sampled points $X = \{x_1, \dots, x_n\}$.
We assume that $X \sim p \Gamma(\alpha_1, \beta_1) + (1-p) \Gamm |
53,723 | Does the Granger Causality test in the "vars" package make sense? | When I first used the causality function of the vars package I had the same doubt. Here is what I thought.
Imagine a trivariate VAR(1) model:
$$
Y_t = a_0 + a_1 Y_{t-1} + a_2 X_{t-1} + a_3 Z_{t-1} + \epsilon_{y,t} \\
X_t = b_0 + b_1 Y_{t-1} + b_2 X_{t-1} + b_3 Z_{t-1} + \epsilon_{x,t} \\
Z_t = c_0 + c_1 Y_{t-1} + c_2 X_{t-1} + c_3 Z_{t-1} + \epsilon_{z,t}
$$
In order for $X_t$ not to Granger-cause $Y_t$ you need to make sure that $H_0: a_2 = a_3 = 0$ or $H_0: a_2 = c_2 = 0$. You can build an $F$-test for that, but what the causality function seems to be doing is checking if $X_t$ Granger causes all other variables in the model. Why would they do that? Maybe because it is simpler. That would be the same as to just check if $H_0: a_2 = c_2 = 0$. This is a much simpler $F$-test, and it extends easily to more than three variables. So, if I am correct, this is not a bug; they simply took the easiest path. So if the null is not rejected, this means that $X_t$ does not Granger cause $Y_t$ and $Z_t$. But if the null is rejected, the test doesn't say much: $X_t$ may be Granger causing either $Y_t$, $Z_t$, or both.
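If you do want the equation-specific test (whether $X_t$ Granger-causes $Y_t$ alone), one simple way -- my own sketch, not part of the original answer -- is to compare restricted and unrestricted OLS fits of the $Y$ equation of the VAR(1) above, assuming a data frame df with columns Y, X and Z:
n    <- nrow(df)
dat  <- data.frame(Y  = df$Y[-1],                  # Y_t
                   Y1 = df$Y[-n],                  # Y_{t-1}
                   X1 = df$X[-n],                  # X_{t-1}
                   Z1 = df$Z[-n])                  # Z_{t-1}
full  <- lm(Y ~ Y1 + X1 + Z1, data = dat)          # unrestricted Y equation
restr <- lm(Y ~ Y1 + Z1,      data = dat)          # restricted: a_2 = 0
anova(restr, full)                                 # F-test of the single restriction a_2 = 0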
Note that in the help of the causality function they only show a bivariate case, but from that example you can infer that the trivariate case would be as I described. To make sure that this is the case, one can build the corresponding $F$-test and check if the values are equal. Also, if you want to specify the response variable you can use the grangertest function of the lmtest package. | Does the Granger Causality test in the "vars" package make sense? | When I first used the causality function of the vars package I had the same doubt. Here is what I thought.
Imagine a trivariate VAR(1) model:
$$
Y_t = a_0 + a_1 Y_{t-1} + a_2 X_{t-1} + a_3 Z_{t-1} + \ | Does the Granger Causality test in the "vars" package make sense?
When I first used the causality function of the vars package I had the same doubt. Here is what I thought.
Imagine a trivariate VAR(1) model:
$$
Y_t = a_0 + a_1 Y_{t-1} + a_2 X_{t-1} + a_3 Z_{t-1} + \epsilon_{y,t} \\
X_t = b_0 + b_1 Y_{t-1} + b_2 X_{t-1} + b_3 Z_{t-1} + \epsilon_{x,t} \\
Z_t = c_0 + c_1 Y_{t-1} + c_2 X_{t-1} + c_3 Z_{t-1} + \epsilon_{z,t}
$$
In order for $X_t$ not to Granger-cause $Y_t$ you need to make sure that $H_0: a_2 = a_3 = 0$ or $H_0: a_2 = c_2 = 0$. You can build an $F$-test for that, but what the causality function seems to be doing is checking if $X_t$ Granger causes all other variables in the model. Why would they do that? Maybe because it is simpler. That would be the same as to just check if $H_0: a_2 = c_2 = 0$. This is a much simpler $F$-test, and it extends easily to more than three variables. So, if I am correct, this is not a bug; they simply took the easiest path. So if the null is not rejected, this means that $X_t$ does not Granger cause $Y_t$ and $Z_t$. But if the null is rejected, the test doesn't say much: $X_t$ may be Granger causing either $Y_t$, $Z_t$, or both.
Note that in the help of the causality function they only show a bivariate case, but from that example you can infer that the trivariate case would be as I described. To make sure that this is the case, one can build the corresponding $F$-test and check if the values are equal. Also, if you want to specify the response variable you can use the grangertest function of the lmtest package. | Does the Granger Causality test in the "vars" package make sense?
When I first used the causality function of the vars package I had the same doubt. Here is what I thought.
Imagine a trivariate VAR(1) model:
$$
Y_t = a_0 + a_1 Y_{t-1} + a_2 X_{t-1} + a_3 Z_{t-1} + \ |
53,724 | Does the Granger Causality test in the "vars" package make sense? | As Regis suggested, using the function grangertest from the package lmtest is a way to produce the pairwise test result. This should be equivalent to using the causality function from vars on a bivariate VAR model. In the following, the F-statistics are identical, and the p-values are slightly different:
data(ChickEgg)
res1 = grangertest(egg ~ chicken, data = ChickEgg, order = 3)
res2 = causality(VAR(ChickEgg, p = 3), cause = 'chicken')
res1$F[2]
res1[4][2,]
res2$Granger$statistic
res2$Granger$p.value
For multivariate VAR, we can examine each pair of variables separately. | Does the Granger Causality test in the "vars" package make sense? | As Regis suggested, using the function grangertest from the package lmtest is a way to produce the pairwise test result. This should be equivalent to using the causality function from vars on a bivari | Does the Granger Causality test in the "vars" package make sense?
As Regis suggested, using the function grangertest from the package lmtest is a way to produce the pairwise test result. This should be equivalent to using the causality function from vars on a bivariate VAR model. In the following, the F-statistics are identical, and the p-values are slightly different:
data(ChickEgg)
res1 = grangertest(egg ~ chicken, data = ChickEgg, order = 3)
res2 = causality(VAR(ChickEgg, p = 3), cause = 'chicken')
res1$F[2]
res1[4][2,]
res2$Granger$statistic
res2$Granger$p.value
For multivariate VAR, we can examine each pair of variables separately. | Does the Granger Causality test in the "vars" package make sense?
As Regis suggested, using the function grangertest from the package lmtest is a way to produce the pairwise test result. This should be equivalent to using the causality function from vars on a bivari |
53,725 | Does the Granger Causality test in the "vars" package make sense? | Another simple solution is the following:
NAMES = colnames(df)
k = ncol(df)
for (j in 1:k) {
for (i in 1:k) {
if (i != j) {
print(paste(NAMES[j], "->", NAMES[i]))
VARest = vars::VAR(df[,c(j,i)], p=1)
print(causality(VARest, cause=NAMES[j]))
}
}
} | Does the Granger Causality test in the "vars" package make sense? | Another simple solution is the following:
NAMES = colnames(df)
k = ncol(df)
for (j in 1:k) {
for (i in 1:k) {
if (i != j) {
print(paste(NAMES[j], "->", NAMES[i]))
| Does the Granger Causality test in the "vars" package make sense?
Another simple solution is the following:
NAMES = colnames(df)
k = ncol(df)
for (j in 1:k) {
for (i in 1:k) {
if (i != j) {
print(paste(NAMES[j], "->", NAMES[i]))
VARest = vars::VAR(df[,c(j,i)], p=1)
print(causality(VARest, cause=NAMES[j]))
}
}
} | Does the Granger Causality test in the "vars" package make sense?
Another simple solution is the following:
NAMES = colnames(df)
k = ncol(df)
for (j in 1:k) {
for (i in 1:k) {
if (i != j) {
print(paste(NAMES[j], "->", NAMES[i]))
|
53,726 | Does the Granger Causality test in the "vars" package make sense? | A workaround to test cause and response within the vars pkg with the help of the "exogen" settings in the VAR function:
granger_bivariate <- function(varest, causal, dep){
dtmat <- varest$datamat
mat_target <- dtmat[, c(causal, dep)]
other_as_exo <- dtmat[, setdiff(names(dtmat),c(causal, dep,'const',names(dtmat)[grepl(paste0('^',causal),names(dtmat)) | grepl(paste0('^',dep),names(dtmat)) ]))]
var_target <- VAR(mat_target, p = varest$p, exogen = other_as_exo)
gr_target <- causality(var_target, cause = causal)
g1 <- gr_target$Granger
result <- cbind(g1$statistic[1,1], g1$p.value)
return(result)
}
This works since the causality test ignores the variables in the exogen setting. | Does the Granger Causality test in the "vars" package make sense? | A workaround to test cause and response within the vars pkg with the help of the "exogen" settings in the VAR function:
granger_bivariate <- function(varest, causal, dep){
dtmat <- varest$datamat
| Does the Granger Causality test in the "vars" package make sense?
A workaround to test cause and response within the vars pkg with the help of the "exogen" settings in the VAR function:
granger_bivariate <- function(varest, causal, dep){
dtmat <- varest$datamat
mat_target <- dtmat[, c(causal, dep)]
other_as_exo <- dtmat[, setdiff(names(dtmat),c(causal, dep,'const',names(dtmat)[grepl(paste0('^',causal),names(dtmat)) | grepl(paste0('^',dep),names(dtmat)) ]))]
var_target <- VAR(mat_target, p = varest$p, exogen = other_as_exo)
gr_target <- causality(var_target, cause = causal)
g1 <- gr_target$Granger
result <- cbind(g1$statistic[1,1], g1$p.value)
return(result)
}
This works since the causality test ignores the variables in the exogen setting. | Does the Granger Causality test in the "vars" package make sense?
A workaround to test cause and response within the vars pkg with the help of the "exogen" settings in the VAR function:
granger_bivariate <- function(varest, causal, dep){
dtmat <- varest$datamat
|
53,727 | Interpreting Mediation Output when Direct Effect is not Stat. Sig but ACME and Total are | This does suggest a "full" mediation, in which all of the IV's influence is mediated. The ACME being significant shows that the mediating process appears to be present. On the other hand, you don't have evidence that there is an ADE (insignificant result). The reason the total effect is larger than the ACME alone is that the estimate still includes the estimate for the ADE, however uncertain that estimate is for the ADE. The total effect has all the certainty of the ACME and the uncertainty of the ADE, but is still statistically significant because of the apparent strength of the ACME observation.
With that said, I would be careful in discussing these results since this analysis isn't designed to prove that there is no direct effect, but rather show that there is one. Your insignificant estimate for the ADE reflects a lack of evidence, not contrary evidence per se. | Interpreting Mediation Output when Direct Effect is not Stat. Sig but ACME and Total are | This does suggest a "full" mediation, in which all of the IV's influence is mediated. The ACME being significant shows that the mediating process appears to be present. On the other hand, you don't ha | Interpreting Mediation Output when Direct Effect is not Stat. Sig but ACME and Total are
This does suggest a "full" mediation, in which all of the IV's influence is mediated. The ACME being significant shows that the mediating process appears to be present. On the other hand, you don't have evidence that there is an ADE (insignificant result). The reason the total effect is larger than the ACME alone is that the estimate still includes the estimate for the ADE, however uncertain that estimate is for the ADE. The total effect has all the certainty of the ACME and the uncertainty of the ADE, but is still statistically significant because of the apparent strength of the ACME observation.
With that said, I would be careful in discussing these results since this analysis isn't designed to prove that there is no direct effect, but rather show that there is one. Your insignificant estimate for the ADE reflects a lack of evidence, not contrary evidence per se. | Interpreting Mediation Output when Direct Effect is not Stat. Sig but ACME and Total are
This does suggest a "full" mediation, in which all of the IV's influence is mediated. The ACME being significant shows that the mediating process appears to be present. On the other hand, you don't ha |
53,728 | Is this alternative method to Metropolis-Hastings salvageable? What is it called? | The method you suggest is called "importance sampling", and its success depends on finding a good importance distribution $Q$, which should be as similar as possible to $fP$. Note that it does not replace Metropolis-Hastings (MH), since you can still use MH to sample from $Q$. | Is this alternative method to Metropolis-Hastings salvageable? What is it called? | The method you suggest is called "importance sampling", and its success depends on finding a good importance distribution $Q$, which should be as similar as possible to $fP$. Note that it does not rep | Is this alternative method to Metropolis-Hastings salvageable? What is it called?
The method you suggest is called "importance sampling", and its success depends on finding a good importance distribution $Q$, which should be as similar as possible to $fP$. Note that it does not replace Metropolis-Hastings (MH), since you can still use MH to sample from $Q$. | Is this alternative method to Metropolis-Hastings salvageable? What is it called?
The method you suggest is called "importance sampling", and its success depends on finding a good importance distribution $Q$, which should be as similar as possible to $fP$. Note that it does not rep |
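For a concrete picture of what the importance-sampling answer above means in practice, here is a minimal R sketch (my own addition): estimating $E_P[f(X)]$ for $P = N(0,1)$ and $f(x)=x^2$ using draws from a wider proposal $Q = N(0,2)$.
set.seed(1)
f <- function(x) x^2
x <- rnorm(1e5, 0, 2)                     # draws from the proposal Q
w <- dnorm(x, 0, 1) / dnorm(x, 0, 2)      # importance weights p(x)/q(x)
mean(f(x) * w)                            # close to 1, the true E_P[X^2]
sum(f(x) * w) / sum(w)                    # self-normalised version (P known only up to a constant)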
53,729 | Why is a binomial distribution bell-shaped? | The binomial distribution arises as the number of successes in $n$ Bernoulli trials. Each trial is either a success or not, so the number of successes in $n$ trials can be any of the values $0, 1, 2, ..., n$. For example, the number of heads in three tosses of a coin can be 0, 1, 2 or 3.
If one divides by the number of trials to get the proportion of successes in $n$ trials, then the possible values would be $0, \frac{_1}{^n}, \frac{_2}{^n},...,\frac{_{n-1}}{^n},1$. That could be called a scaled binomial.
Which you use depends on what thing you're interested in modelling.
However, this is clearly an average height on a man
That's not correct. That histogram summarizes individual heights, not averages of heights. Also heights are not actually normally distributed. For some purposes it's not too bad an approximation, but the distribution of heights is (plainly) not actually normal. There's zero chance of a negative height, for one thing, but normal distributions all have non-zero chance of a negative value (though the probability of it may possibly be extremely small in some situations).
Perhaps the most confusing part is why the binomial distribution is close to normal, but not actually normal.
Well, the most obvious difference (one of many) is that it's a count -- a discrete distribution; a binomial cumulative distribution function (cdf) is always a step function. Normal distributions are continuous; their cdfs are never step functions.
We can often see that it has what looks like a bell shape for some moderate-sized $n$; that only happens for sure as $n$ grows large enough (though if $p$ is middling, what counts as 'large enough' to look bell-shaped may be pretty small). For small $n$ it's often not very bell shaped, it's just a few spikes (e.g. I wouldn't call $n=2$ bell shaped for any $p$).
If $p$ is very close to $0$ or $1$ it may take a very large $n$ before it starts to look bell-shaped -- here's an example with $n=100$ that doesn't look at all bell-shaped --
but it will start to look more bell shaped eventually as $n$ increases.
(In the limit as $n$ goes to infinity, the central limit theorem tells us that the cdf of a standardized binomial variate will converge to the standard normal cdf.)
As for why that happens at some more-or-less moderate sample size, it's because it's the sum of many independent parts (the individual trials); convolutions of densities (or pmfs in the case of discrete variables) become more bell shaped (under certain conditions, all of which will be satisfied with independent Bernoulli trials) as you add more into the mix.
Consider adding two (independent) such 0-1 variables. The probability that they're both 1 (giving a total of 2) is $p^2$ and that they're both 0 is $(1-p)^2$, but the probability that one is 0 and the other is 1 is $2p(1-p)$ (these all come via elementary probability considerations). If $p$ is between 1/3 and 2/3, that probability will exceed the two end points (that is, extreme sums are harder to get than ones in the middle, because middling results can occur in many more ways), and as you add more terms, the extremes become rarer and the center gets that characteristic "bump".
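A quick R illustration of that convolution argument (mine, not the original answer's): build the pmf of a sum of independent Bernoulli(p) variables one trial at a time, confirm it matches dbinom(), and overlay the normal approximation.
p <- 0.3; n <- 30
pmf <- 1                                                     # pmf of a sum of 0 trials
for (i in 1:n) pmf <- c(pmf * (1 - p), 0) + c(0, pmf * p)    # convolve in one more trial
k <- 0:n
max(abs(pmf - dbinom(k, n, p)))                              # essentially zero
plot(k, pmf, type = "h")                                     # already quite bell-shaped
curve(dnorm(x, n * p, sqrt(n * p * (1 - p))), add = TRUE, col = 2)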
More precisely, the cdf of a standardized binomial will become closer to the cdf of a standard normal as $n$ grows larger. The Berry-Esseen theorem tells us something about how far from normal it might be at some $n$ (but it's a worst-case; the binomial will tend to be closer than that bound suggests). | Why is a binomial distribution bell-shaped? | The binomial distribution arises as the number of successes in $n$ Bernoulli trials. Each trial is either a success or not, so the number of successes in $n$ trials can be any of the values $0, 1, 2, | Why is a binomial distribution bell-shaped?
The binomial distribution arises as the number of successes in $n$ Bernoulli trials. Each trial is either a success or not, so the number of successes in $n$ trials can be any of the values $0, 1, 2, ..., n$. For example, the number of heads in three tosses of a coin can be 0, 1, 2 or 3.
If one divides by the number of trials to get the proportion of successes in $n$ trials, then the possible values would be $0, \frac{_1}{^n}, \frac{_2}{^n},...,\frac{_{n-1}}{^n},1$. That could be called a scaled binomial.
Which you use depends on what thing you're interested in modelling.
However, this is clearly an average height on a man
That's not correct. That histogram summarizes individual heights, not averages of heights. Also heights are not actually normally distributed. For some purposes it's not too bad an approximation, but the distribution of heights is (plainly) not actually normal. There's zero chance of a negative height, for one thing, but normal distributions all have non-zero chance of a negative value (though the probability of it may possibly be extremely small in some situations).
Perhaps the most confusing part is why the binomial distribution is close to normal, but not actually normal.
Well, the most obvious difference (one of many) is that it's a count -- a discrete distribution; a binomial cumulative distribution function (cdf) is always a step function. Normal distributions are continuous; their cdfs are never step functions.
We can often see that it has what looks like a bell shape for some moderate-sized $n$; that only happens for sure as $n$ grows large enough (though if $p$ is middling, what counts as 'large enough' to look bell-shaped may be pretty small). For small $n$ it's often not very bell shaped, it's just a few spikes (e.g. I wouldn't call $n=2$ bell shaped for any $p$).
If $p$ is very close to $0$ or $1$ it may take a very large $n$ before it starts to look bell-shaped -- here's an example with $n=100$ that doesn't look at all bell-shaped --
but it will start to look more bell shaped eventually as $n$ increases.
(In the limit as $n$ goes to infinity, the central limit theorem tells us that the cdf of a standardized binomial variate will converge to the standard normal cdf.)
As for why that happens at some more-or-less moderate sample size, it's because it's the sum of many independent parts (the individual trials); convolutions of densities (or pmfs in the case of discrete variables) become more bell shaped (under certain conditions, all of which will be satisfied with independent Bernoulli trials) as you add more into the mix.
Consider adding two (independent) such 0-1 variables. The probability that they're both 1 (giving a total of 2) is $p^2$ and that they're both 0 is $(1-p)^2$, but the probability that one is 0 and the other is 1 is $2p(1-p)$ (these all come via elementary probability considerations). If $p$ is between 1/3 and 2/3, that probability will exceed the two end points (that is, extreme sums are harder to get than ones in the middle, because middling results can occur in many more ways), and as you add more terms, the extremes become rarer and the center gets that characteristic "bump".
More precisely, the cdf of a standardized binomial will become closer to the cdf of a standard normal as $n$ grows larger. The Berry-Esseen theorem tells us something about how far from normal it might be at some $n$ (but it's a worst-case; the binomial will tend to be closer than that bound suggests). | Why is a binomial distribution bell-shaped?
The binomial distribution arises as the number of successes in $n$ Bernoulli trials. Each trial is either a success or not, so the number of successes in $n$ trials can be any of the values $0, 1, 2, |
53,730 | Trouble in fitting data to a curve (NLS) | I wish I had a dollar for every hour people have wasted trying to do
nonlinear parameter estimation with R.
Here is the solution to your problem, together with the estimated standard deviations calculated
by the delta method; a plot of the solution is shown above.
y0 85.557909 3.0989e-01
a1 125.20943 1.3766e+01
a2 1394.155 4.4952e+03
b1 0.062640298 4.1774e-03
b2 0.43392314 2.9936e-01
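For reference, the fitted curve implied by those estimates can be written out directly in R (my own snippet, just restating the numbers above in the model used throughout this thread):
y0 <- 85.557909; a1 <- 125.20943; a2 <- 1394.155
b1 <- 0.062640298; b2 <- 0.43392314
curve(y0 - a1 * exp(-b1 * x) - a2 * exp(-b2 * x), from = 10, to = 190,
      xlab = "Time", ylab = "Fitted value")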
I used the same AD Model Builder code as I posted there, but modified slightly for
your problem. As I stated there this is a general technique for
attacking these sorts of problems.
How to choose initial values for nonlinear least squares fit
This is the AD Model Builder code for your problem.
DATA_SECTION
init_int n
int mid
!! mid=75;
init_matrix data(1,n,1,3)
vector t(1,n)
vector P(1,n)
!! t=column(data,2);
!! P=column(data,3); //use column 3
number tmax
number Pmax
!! tmax=max(t);
!! Pmax=max(P);
!! t/=tmax;
!! P/=Pmax;
PARAMETER_SECTION
init_number L1(3) //(3) means estimate in phase 3
init_number Lmid(3)
init_number Ln(3)
vector L(1,3)
init_number log_b1 // estimate in phase 1
init_number log_diff // estimate in phase 2
sdreport_number b1
sdreport_number b2
matrix M(1,3,1,3);
objective_function_value f
sdreport_vector v(1,3)
sdreport_number real_y0
sdreport_number real_a1
sdreport_number real_a2
sdreport_number real_b1
sdreport_number real_b2
vector pred(1,n);
PROCEDURE_SECTION
L(1)=L1;
L(2)=Lmid;
L(3)=Ln;
b1=exp(log_b1); // this parameterization ensures that b1<b2
b2=b1+exp(log_diff);
M(1,1)=exp(-b1*t(1));
M(1,2)=exp(-b2*t(1));
M(1,3)=1;
M(2,1)=exp(-b1*t(mid));
M(2,2)=exp(-b2*t(mid));
M(2,3)=1;
M(3,1)=exp(-b1*t(n));
M(3,2)=exp(-b2*t(n));
M(3,3)=1;
v=solve(M,L); // solve for standard parameters
// v is vector corresponding to the parameters b_1 b_2 P_0
pred=v(3)-v(1)*exp(-b1*t)-v(2)*exp(-b2*t);
if (current_phase()<4)
f+=norm2(P-pred);
else // use concentrated likelihood so that Hessian is correct
f+=0.5*n*log(norm2(P-pred)); //concentrated likelihood
real_y0=v(3)*Pmax;
real_a1=v(1)*Pmax;
real_a2=v(2)*Pmax;
real_b1=b1/tmax;
real_b2=b2/tmax;
REPORT_SECTION
dvar_matrix tmp(1,2,1,n);
dvar_vector real_t=t*tmax;
dvar_vector real_pred=real_y0-real_a1*exp(-real_b1*real_t)
-real_a2*exp(-real_b2*real_t);
tmp(1)=real_t;
tmp(2)=real_pred;
ofstream ofs1("pred");
ofs1 << trans(tmp)<< endl;
tmp(2)=P*Pmax;
ofstream ofs2("obs");
ofs2 << trans(tmp)<< endl;
report << "y0 " << setprecision(8) << real_y0 << endl;
report << "a1 " << setprecision(8) << real_a1 << endl;
report << "a2 " << setprecision(8) << real_a2 << endl;
report << "b1 " << setprecision(8) << real_b1 << endl;
report << "b2 " << setprecision(8) << real_b2 << endl; | Trouble in fitting data to a curve (NLS) | I wish I had a dollar for every hour people have wasted trying to do
nonlinear parameter estimation with R.
Here is the solution to your problem together with the estimated std devs calculated
by the | Trouble in fitting data to a curve (NLS)
I wish I had a dollar for every hour people have wasted trying to do
nonlinear parameter estimation with R.
Here is the solution to your problem, together with the estimated standard deviations calculated
by the delta method; a plot of the solution is shown above.
y0 85.557909 3.0989e-01
a1 125.20943 1.3766e+01
a2 1394.155 4.4952e+03
b1 0.062640298 4.1774e-03
b2 0.43392314 2.9936e-01
I used the same AD Model Builder code as I posted there, but modified slightly for
your problem. As I stated there this is a general technique for
attacking these sorts of problems.
How to choose initial values for nonlinear least squares fit
This is the AD Model Builder code for your problem.
DATA_SECTION
init_int n
int mid
!! mid=75;
init_matrix data(1,n,1,3)
vector t(1,n)
vector P(1,n)
!! t=column(data,2);
!! P=column(data,3); //use column 3
number tmax
number Pmax
!! tmax=max(t);
!! Pmax=max(P);
!! t/=tmax;
!! P/=Pmax;
PARAMETER_SECTION
init_number L1(3) //(3) means estimate in phase 3
init_number Lmid(3)
init_number Ln(3)
vector L(1,3)
init_number log_b1 // estimate in phase 1
init_number log_diff // estimate in phase 2
sdreport_number b1
sdreport_number b2
matrix M(1,3,1,3);
objective_function_value f
sdreport_vector v(1,3)
sdreport_number real_y0
sdreport_number real_a1
sdreport_number real_a2
sdreport_number real_b1
sdreport_number real_b2
vector pred(1,n);
PROCEDURE_SECTION
L(1)=L1;
L(2)=Lmid;
L(3)=Ln;
b1=exp(log_b1); // this parameterization ensures that b1<b2
b2=b1+exp(log_diff);
M(1,1)=exp(-b1*t(1));
M(1,2)=exp(-b2*t(1));
M(1,3)=1;
M(2,1)=exp(-b1*t(mid));
M(2,2)=exp(-b2*t(mid));
M(2,3)=1;
M(3,1)=exp(-b1*t(n));
M(3,2)=exp(-b2*t(n));
M(3,3)=1;
v=solve(M,L); // solve for standard parameters
// v is vector corresponding to the parameters b_1 b_2 P_0
pred=v(3)-v(1)*exp(-b1*t)-v(2)*exp(-b2*t);
if (current_phase()<4)
f+=norm2(P-pred);
else // use concentrated likelihood so that Hessian is correct
f+=0.5*n*log(norm2(P-pred)); //concentrated likelihood
real_y0=v(3)*Pmax;
real_a1=v(1)*Pmax;
real_a2=v(2)*Pmax;
real_b1=b1/tmax;
real_b2=b2/tmax;
REPORT_SECTION
dvar_matrix tmp(1,2,1,n);
dvar_vector real_t=t*tmax;
dvar_vector real_pred=real_y0-real_a1*exp(-real_b1*real_t)
-real_a2*exp(-real_b2*real_t);
tmp(1)=real_t;
tmp(2)=real_pred;
ofstream ofs1("pred");
ofs1 << trans(tmp)<< endl;
tmp(2)=P*Pmax;
ofstream ofs2("obs");
ofs2 << trans(tmp)<< endl;
report << "y0 " << setprecision(8) << real_y0 << endl;
report << "a1 " << setprecision(8) << real_a1 << endl;
report << "a2 " << setprecision(8) << real_a2 << endl;
report << "b1 " << setprecision(8) << real_b1 << endl;
report << "b2 " << setprecision(8) << real_b2 << endl; | Trouble in fitting data to a curve (NLS)
I wish I had a dollar for every hour people have wasted trying to do
nonlinear parameter estimation with R.
Here is the solution to your problem together with the estimated std devs calculated
by the |
53,731 | Trouble in fitting data to a curve (NLS) | EDITED TO REMOVE THE ORIGINAL SECOND POINT, WHICH WAS WRONG.
Two comments:
Looking at the data, something strange is happening at about 80 seconds and 140 seconds. If these are experimental artefacts, then it won't make sense to fit a complicated model. If those data show a real trend (not an experimental artefact), then you'll need a sinusoidal model to fit those twists and bends. Fitting the biexponential model assumes all variation is Gaussian, but those trends at 80 and 140 seconds seem too large to just be random variation.
Here are the results from GraphPad Prism, which are very similar to those that Dave got with ADBuilder (Prism file):
a. Look at the confidence intervals for A2 and tau2. They cross into negative values, which are impossible, and are quite wide. Your data really don't define these parameters.
b. The P value from the runs test is tiny. This means that the data systematically deviate from the curve. Whether a point is above or below the curve is not entirely random. This goes along with point 1 above.
c. If you try to extrapolate the curve down to Time=0, the Y values get very negative. Depending on the scientific context, this may suggest the model is not very sensible. Perhaps the model should include a (X - X0) term, where X0 is a constant (maybe 11.33, your first time point)? | Trouble in fitting data to a curve (NLS) | EDITED TO REMOVE THE ORIGINAL SECOND POINT, WHICH WAS WRONG.
Two comments:
Looking at the data, something strange is happening at about 80 seconds and 140 seconds. If these are experimental artefact | Trouble in fitting data to a curve (NLS)
EDITED TO REMOVE THE ORIGINAL SECOND POINT, WHICH WAS WRONG.
Two comments:
Looking at the data, something strange is happening at about 80 seconds and 140 seconds. If these are experimental artefacts, then it won't make sense to fit a complicated model. If those data show a real trend (not an experimental artefact), then you'll need a sinusoidal model to fit those twists and bends. Fitting the biexponential model assumes all variation is Gaussian, but those trends at 80 and 140 seconds seem too large to just be random variation.
Here are the results from GraphPad Prism, which are very similar to those that Dave got with ADBuilder (Prism file):
a. Look at the confidence intervals for A2 and tau2. They cross into negative values, which are impossible, and are quite wide. Your data really don't define these parameters.
b. The P value from the runs test is tiny. This means that the data systematically deviate from the curve. Whether a point is above or below the curve is not entirely random. This goes along with point 1 above.
c. If you try to extrapolate the curve down to Time=0, the Y values get very negative. Depending on the scientific context, this may suggest the model is not very sensible. Perhaps the model should include a (X - X0) term, where X0 is a constant (maybe 11.33, your first time point)? | Trouble in fitting data to a curve (NLS)
EDITED TO REMOVE THE ORIGINAL SECOND POINT, WHICH WAS WRONG.
Two comments:
Looking at the data, something strange is happening at about 80 seconds and 140 seconds. If these are experimental artefact |
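If point c above is worth pursuing, the (X - X0) idea can be tried directly with nls(). This is only a sketch: it borrows the data frame and column names used in the other answers to this question (data1, X1r_1_green, Time), takes X0 = 11.33 as suggested, and uses rough starting values adapted from the unshifted fits.
X0 <- 11.33
fit_shift <- nls(X1r_1_green ~ y0 - a1*exp(-b1*(Time - X0)) - a2*exp(-b2*(Time - X0)),
                 data = data1,
                 start = c(y0 = 88, a1 = 70, b1 = 0.07, a2 = 5, b2 = 0.006))
summary(fit_shift)   # compare the fit and the parameter uncertainty with the unshifted model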
53,732 | Trouble in fitting data to a curve (NLS) | NB in my discussion, I'll ignore the obvious lack of fit of the model and focus on the issues with getting estimates.
A model like this one can sometimes have a few issues (because of a ridge in the parameter space) but there seems to be no such difficulty here.
I did a one-exponential fit first, as you did. Then the very first two-exponential fit I tried worked.
Here's what I did, in order:
First, I tried plotting curves for a few values for y0, a1 and b1 until I found something that worked okay:
Step 0: find some values for a one-exponential model by trial and error
#Leaving out the values I tried that didn't work as well, I got to here:
plot(X1r_1_green~Time,data1)
t=10:189
f1=90-100*exp(-0.05*t)
lines(t,f1,col=4)
Step 1: fit a one-exponential model:
fit1exp=nls(X1r_1_green ~ y0-a1*exp(-b1*Time),
data=data1,start=c(y0=90,a1=100,b1=.05))
summary(fit1exp)
Formula: X1r_1_green ~ y0 - a1 * exp(-b1 * Time)
Parameters:
Estimate Std. Error t value Pr(>|t|)
y0 8.542e+01 3.007e-01 284.11 <2e-16 ***
a1 1.447e+02 7.140e+00 20.26 <2e-16 ***
b1 6.804e-02 2.707e-03 25.13 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 3.271 on 172 degrees of freedom
Number of iterations to convergence: 4
Achieved convergence tolerance: 3.197e-06
Step 2: Shrink those a little toward 0, put in rough guesses for the second parameter set (I made them in the ballpark of 15-20% of the first set, but it pays to be imprecise about this):
fit2exp=nls(X1r_1_green~ y0-a1*exp(-b1*Time)-a2*exp(-b2*Time),
data=data1,start=c(y0=80,a1=120,b1=.06,a2=20,b2=.01))
summary(fit2exp)
Formula: X1r_1_green ~ y0 - a1 * exp(-b1 * Time) - a2 * exp(-b2 * Time)
Parameters:
Estimate Std. Error t value Pr(>|t|)
y0 8.771e+01 1.205e+01 7.281 1.18e-11 ***
a1 1.509e+02 9.122e+00 16.539 < 2e-16 ***
b1 7.379e-02 7.828e-03 9.427 < 2e-16 ***
a2 5.132e+00 6.504e+00 0.789 0.431
b2 6.429e-03 3.356e-02 0.192 0.848
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 3.261 on 170 degrees of freedom
Number of iterations to convergence: 22
Achieved convergence tolerance: 4.992e-06
I did not expect that second model to work first try; usually you'd have to fiddle about a bit with a model like that one (or even better, use subject area knowledge to build better guesses -- but I don't have any of that).
Plotting that fit on the same plot as the initial fit:
(Figure: Original by-eye fit of single exponential in blue, final two-exponential nls-fit in red. There's also a single-exponential nls-fit in green, but it's almost entirely overlapped by the red curve -- i.e. the second exponential adds almost nothing to the fit here.)
I spent longer editing your post than I did getting that fit working, and most of that was simply finding the blue curve by eye.
[While this seemed to work just fine, it definitely does pay to try other starting values and see whether you've converged to a local optimum, or perhaps got stuck in a really stretched out ridge and ended up some distance from where you might be. I'd normally try changing the default convergence criteria as well.] | Trouble in fitting data to a curve (NLS) | NB in my discussion, I'll ignore the obvious lack of fit of the model and focus on the issues with getting estimates.
A model like this one can sometimes have a few issues (because of a ridge in the p | Trouble in fitting data to a curve (NLS)
NB in my discussion, I'll ignore the obvious lack of fit of the model and focus on the issues with getting estimates.
A model like this one can sometimes have a few issues (because of a ridge in the parameter space) but there seems to be no such difficulty here.
I did a one-exponential fit first, as you did. Then the very first two-exponential fit I tried worked.
Here's what I did, in order:
First, I tried plotting curves for a few values for y0, a1 and b1 until I found something that worked okay:
Step 0: find some values for a one-exponential model by trial and error
#Leaving out the values I tried that didn't work as well, I got to here:
plot(X1r_1_green~Time,data1)
t=10:189
f1=90-100*exp(-0.05*t)
lines(t,f1,col=4)
Step 1: fit a one-exponential model:
fit1exp=nls(X1r_1_green ~ y0-a1*exp(-b1*Time),
data=data1,start=c(y0=90,a1=100,b1=.05))
summary(fit1exp)
Formula: X1r_1_green ~ y0 - a1 * exp(-b1 * Time)
Parameters:
Estimate Std. Error t value Pr(>|t|)
y0 8.542e+01 3.007e-01 284.11 <2e-16 ***
a1 1.447e+02 7.140e+00 20.26 <2e-16 ***
b1 6.804e-02 2.707e-03 25.13 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 3.271 on 172 degrees of freedom
Number of iterations to convergence: 4
Achieved convergence tolerance: 3.197e-06
Step 2: Shrink those a little toward 0, put in rough guesses for the second parameter set (I made them in the ballpark of 15-20% of the first set, but it pays to be imprecise about this):
fit2exp=nls(X1r_1_green~ y0-a1*exp(-b1*Time)-a2*exp(-b2*Time),
data=data1,start=c(y0=80,a1=120,b1=.06,a2=20,b2=.01))
summary(fit2exp)
Formula: X1r_1_green ~ y0 - a1 * exp(-b1 * Time) - a2 * exp(-b2 * Time)
Parameters:
Estimate Std. Error t value Pr(>|t|)
y0 8.771e+01 1.205e+01 7.281 1.18e-11 ***
a1 1.509e+02 9.122e+00 16.539 < 2e-16 ***
b1 7.379e-02 7.828e-03 9.427 < 2e-16 ***
a2 5.132e+00 6.504e+00 0.789 0.431
b2 6.429e-03 3.356e-02 0.192 0.848
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 3.261 on 170 degrees of freedom
Number of iterations to convergence: 22
Achieved convergence tolerance: 4.992e-06
I did not expect that second model to work first try; usually you'd have to fiddle about a bit with a model like that one (or even better, use subject area knowledge to build better guesses -- but I don't have any of that).
Plotting that fit on the same plot as the initial fit:
(Figure: Original by-eye fit of single exponential in blue, final two-exponential nls-fit in red. There's also a single-exponential nls-fit in green, but it's almost entirely overlapped by the red curve -- i.e. the second exponential adds almost nothing to the fit here.)
I spent longer editing your post than I did getting that fit working, and most of that was simply finding the blue curve by eye.
[While this seemed to work just fine, it definitely does pay to try other starting values and see whether you've converged to a local optimum, or perhaps got stuck in a really stretched out ridge and ended up some distance from where you might be. I'd normally try changing the default convergence criteria as well.] | Trouble in fitting data to a curve (NLS)
NB in my discussion, I'll ignore the obvious lack of fit of the model and focus on the issues with getting estimates.
A model like this one can sometimes have a few issues (because of a ridge in the p |
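A minimal sketch of the check suggested in the bracketed note above, refitting from several jittered starting values and relaxing the iteration limit; it assumes the same data1 object and model as in that answer.
set.seed(1)
starts <- replicate(10, c(y0 = 80, a1 = 120, b1 = 0.06, a2 = 20, b2 = 0.01) * runif(5, 0.8, 1.2),
                    simplify = FALSE)
refits <- lapply(starts, function(s)
  try(nls(X1r_1_green ~ y0 - a1*exp(-b1*Time) - a2*exp(-b2*Time),
          data = data1, start = s,
          control = nls.control(maxiter = 200, warnOnly = TRUE)), silent = TRUE))
ok <- !sapply(refits, inherits, "try-error")
sapply(refits[ok], deviance)   # near-identical deviances suggest the fits reach the same optimum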
53,733 | Trouble in fitting data to a curve (NLS) | If you realize that your model is partial linear, the whole task gets very simple. You only need starting values for the non-linear parameters and since these are related to the half-life, it's easy to make a decent guess from a plot:
tau1_ini <- log(2) / 18
tau2_ini <- log(2) / 180
Then you can fit:
fit <- nls(X1r_1_green ~ cbind(1, exp(-tau1 * Time), exp(-tau2 * Time)),
data = DF, algorithm = "plinear",
start = list(tau1 = tau1_ini, tau2 = tau2_ini))
summary(fit)
#Formula: X1r_1_green ~ cbind(1, exp(-tau1 * Time), exp(-tau2 * Time))
#
#Parameters:
# Estimate Std. Error t value Pr(>|t|)
#tau1 7.379e-02 7.828e-03 9.427 < 2e-16 ***
#tau2 6.429e-03 3.356e-02 0.192 0.848
#.lin1 8.771e+01 1.205e+01 7.280 1.18e-11 ***
#.lin2 -1.509e+02 9.122e+00 -16.539 < 2e-16 ***
#.lin3 -5.132e+00 6.505e+00 -0.789 0.431
#---
#Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#
#Residual standard error: 3.261 on 170 degrees of freedom
#
#Number of iterations to convergence: 19
#Achieved convergence tolerance: 6.29e-06
That's the same result as in @Glen_b's answer.
Note that the parameter estimates are strongly correlated. Your data doesn't really support such a complex model. | Trouble in fitting data to a curve (NLS) | If you realize that your model is partial linear, the whole task gets very simple. You only need starting values for the non-linear parameters and since these are related to the half-life, it's easy t | Trouble in fitting data to a curve (NLS)
If you realize that your model is partial linear, the whole task gets very simple. You only need starting values for the non-linear parameters and since these are related to the half-life, it's easy to make a decent guess from a plot:
tau1_ini <- log(2) / 18
tau2_ini <- log(2) / 180
Then you can fit:
fit <- nls(X1r_1_green ~ cbind(1, exp(-tau1 * Time), exp(-tau2 * Time)),
data = DF, algorithm = "plinear",
start = list(tau1 = tau1_ini, tau2 = tau2_ini))
summary(fit)
#Formula: X1r_1_green ~ cbind(1, exp(-tau1 * Time), exp(-tau2 * Time))
#
#Parameters:
# Estimate Std. Error t value Pr(>|t|)
#tau1 7.379e-02 7.828e-03 9.427 < 2e-16 ***
#tau2 6.429e-03 3.356e-02 0.192 0.848
#.lin1 8.771e+01 1.205e+01 7.280 1.18e-11 ***
#.lin2 -1.509e+02 9.122e+00 -16.539 < 2e-16 ***
#.lin3 -5.132e+00 6.505e+00 -0.789 0.431
#---
#Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#
#Residual standard error: 3.261 on 170 degrees of freedom
#
#Number of iterations to convergence: 19
#Achieved convergence tolerance: 6.29e-06
That's the same result as in @Glen_b's answer.
Note that the parameter estimates are strongly correlated. Your data doesn't really support such a complex model. | Trouble in fitting data to a curve (NLS)
If you realize that your model is partial linear, the whole task gets very simple. You only need starting values for the non-linear parameters and since these are related to the half-life, it's easy t |
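To map the plinear coefficients back onto the y0 - a1*exp(-b1*t) - a2*exp(-b2*t) parameterisation used in the other answers, only a sign flip of the linear parts is needed; a small sketch, assuming the fitted object fit from above.
cf <- coef(fit)
c(y0 = unname(cf[".lin1"]),
  a1 = -unname(cf[".lin2"]), b1 = unname(cf["tau1"]),
  a2 = -unname(cf[".lin3"]), b2 = unname(cf["tau2"]))
# these should reproduce the fit2exp estimates reported in the previous answer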
53,734 | What is the correct definition of the Likelihood function? | Start with general definitions of likelihood. With likelihood you are not really interested in probabilities, but in the likelihood of $\theta$ given your data. It is calculated from the probability of the data under some model with parameters $\theta$, i.e.
$$L(\theta|X) = \prod_i f_\theta(x_i)$$
Now, in your examples two different likelihoods are described. In the first case, you have a regression model of $Y$ conditional on $X$ and in the second case, you have a joint likelihood of $X$ and $Y$ in a bivariate model. This is the same as with conditional and joint probabilities: both are probabilities, so they have the same properties, but they describe different cases. | What is the correct definition of the Likelihood function? | Start with general definitions of likelihood. With likelihood you are not really interested in probabilities, but in likelihood of $\theta$ given your data. It is calculated using probability of data | What is the correct definition of the Likelihood function?
Start with general definitions of likelihood. With likelihood you are not really interested in probabilities, but in the likelihood of $\theta$ given your data. It is calculated from the probability of the data under some model with parameters $\theta$, i.e.
$$L(\theta|X) = \prod_i f_\theta(x_i)$$
Now, in your examples two different likelihoods are described. In the first case, you have a regression model of $Y$ conditional on $X$ and in the second case, you have a joint likelihood of $X$ and $Y$ in a bivariate model. This is the same as with conditional and joint probabilities: both are probabilities, so they have the same properties, but they describe different cases. | What is the correct definition of the Likelihood function?
Start with general definitions of likelihood. With likelihood you are not really interested in probabilities, but in likelihood of $\theta$ given your data. It is calculated using probability of data |
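As a small, self-contained illustration of $L(\theta|X) = \prod_i f_\theta(x_i)$ (the model and numbers below are made up), the log-likelihood of an i.i.d. normal sample can be written as an R function and profiled over the mean:
loglik <- function(theta, x) sum(dnorm(x, mean = theta[1], sd = theta[2], log = TRUE))
set.seed(42)
x <- rnorm(100, mean = 5, sd = 2)
loglik(c(5, 2), x)    # log-likelihood at the true parameters
loglik(c(0, 2), x)    # a poorly fitting theta scores much lower
grid <- seq(3, 7, by = 0.01)
grid[which.max(sapply(grid, function(m) loglik(c(m, 2), x)))]   # maximises near the sample mean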
53,735 | What is the correct definition of the Likelihood function? | The two likelihoods are related by the following equation:
$$P(Y \cap X\,|\,\Theta) = P(Y\,|\,X,\Theta)P(X\,|\,\Theta)$$
So, the joint probability of $Y$ and $X$ has to account for two things:
The probability of generating $Y$ given $X$ and $\Theta$
The probability of generating $X$ given $\Theta$
$P(Y\,|\,X,\Theta)$ only accounts for (1), and would be preferred when you only care about predicting Y when X is known. The joint likelihood looks at the probability of generating both X and Y given model parameter $\Theta$. This could be valuable if you want your model to predict X as well as Y given X. Put another way, $P(X\,|\,\Theta)$ is a way of measuring to what extent your model knows what kinds of $X$ are likely to occur in your dataset. | What is the correct definition of the Likelihood function? | The two likelihoods are related by the following equation:
$$P(Y \cap X\,|\,\Theta) = P(Y\,|\,X,\Theta)P(X\,|\,\Theta)$$
So, the joint probability of $Y$ and $X$ has to account for two things:
The p | What is the correct definition of the Likelihood function?
The two likelihoods are related by the following equation:
$$P(Y \cap X\,|\,\Theta) = P(Y\,|\,X,\Theta)P(X\,|\,\Theta)$$
So, the joint probability of $Y$ and $X$ has to account for two things:
The probability of generating $Y$ given $X$ and $\Theta$
The probability of generating $X$ given $\Theta$
$P(Y\,|\,X,\Theta)$ only accounts for (1), and would be preferred when you only care about predicting Y when X is known. The joint likelihood looks at the probability of generating both X and Y given model parameter $\Theta$. This could be valuable if you want your model to predict X as well as Y given X. Put another way, $P(X\,|\,\Theta)$ is a way of measuring to what extent your model knows what kinds of $X$ are likely to occur in your dataset. | What is the correct definition of the Likelihood function?
The two likelihoods are related by the following equation:
$$P(Y \cap X\,|\,\Theta) = P(Y\,|\,X,\Theta)P(X\,|\,\Theta)$$
So, the joint probability of $Y$ and $X$ has to account for two things:
The p |
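The factorisation $P(Y \cap X\,|\,\Theta) = P(Y\,|\,X,\Theta)P(X\,|\,\Theta)$ can be checked numerically for a bivariate normal model; the sketch below assumes the mvtnorm package for the joint density, and all parameter values are arbitrary.
library(mvtnorm)
muX <- 1; muY <- 2; sdX <- 1.5; sdY <- 0.5; rho <- 0.7
Sigma <- matrix(c(sdX^2, rho*sdX*sdY, rho*sdX*sdY, sdY^2), 2, 2)
x <- 0.3; y <- 2.4
log_joint <- dmvnorm(c(x, y), mean = c(muX, muY), sigma = Sigma, log = TRUE)
log_cond  <- dnorm(y, mean = muY + rho*sdY/sdX*(x - muX), sd = sdY*sqrt(1 - rho^2), log = TRUE)
log_marg  <- dnorm(x, mean = muX, sd = sdX, log = TRUE)
all.equal(log_joint, log_cond + log_marg)   # TRUE: joint = conditional * marginal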
53,736 | What is the correct definition of the Likelihood function? | It just seems that in the second case, $X$ and $Y$ are both modelled jointly in a generative model and you can write the joint likelihood as $P(X, Y | \theta)$
For example, now if you assume X and Y are independent, the joint log likelihood can be written as:
$$
\log L(\theta) = \log P(X |\theta) + \log P(Y|\theta)
$$ | What is the correct definition of the Likelihood function? | It just seems that in the second case, $X$ and $Y$ are both modelled jointly in a generative model and you can write the joint likelihood as $P(X, Y | \theta)$
For example, now if you assume X and Y a | What is the correct definition of the Likelihood function?
It just seems that in the second case, $X$ and $Y$ are both modelled jointly in a generative model and you can write the joint likelihood as $P(X, Y | \theta)$
For example, now if you assume X and Y are independent, the joint log likelihood can be written as:
$$
\log L(\theta) = \log P(X |\theta) + \log P(Y|\theta)
$$ | What is the correct definition of the Likelihood function?
It just seems that in the second case, $X$ and $Y$ are both modelled jointly in a generative model and you can write the joint likelihood as $P(X, Y | \theta)$
For example, now if you assume X and Y a |
53,737 | How to compute intraclass correlation (ICC) for THREE-level negative binomial hierarchical model? | I don't know if you still need the answer for this, but I'll try anyway.
The ICC for a two level negative binomial model (Tseloni and Pease, 2003) can be easily calculated by:
$$
\rho = \frac{\sigma_{j}^2}{\sigma_{j}^2 + \alpha}
$$
where $\sigma_{j}^2$ is the variance of between-group differences (level 2), and $\alpha$ is the variance at level 1, though the parameter reported by lme4 as the overdispersion parameter ($\theta$) is $\alpha^{-1}$.
So, in a three level model, there are two intra-class correlations that can be calculated: individuals within level-2 groups, and level-2 groups within level-3 groups.
Using a hypothetical example, if we had pupils nested in classes ($j$) nested in schools ($k$), the formulas for the ICC are:
$$
\rho_{class} = \frac{\sigma_{j}^2 + \sigma_{k}^2}{\sigma_{j}^2 +\sigma_k^2 + \alpha}
$$
$$
\rho_{school} = \frac{\sigma_{k}^2}{\sigma_{j}^2 +\sigma_k^2 + \alpha}
$$
where $\sigma_j^2$ is the between class variance, $\sigma_k^2$ is the between school variance, and $\alpha$ is the between pupil variance. $\rho_{class}$ would be the correlation between two pupils in the same class, and $\rho_{school}$ the correlation between two classes in the same school.
Now, to calculate these ICCs using lme4 we need to access the specific estimates stored in the glmerMod object generated by glmer.nb.
So assuming you have a three level model named m
class(m) ## "glmerMod"
### store the intercepts variance, which
### frustratingly, is also called theta in lme4 models
### and it's stored as the sd, hence the need to square it
var_k <- as.numeric(getME(m, "theta")[2]^2) # level 3 variance
var_j <- as.numeric(getME(m, "theta")[1]^2) # level 2 variance
### store the alpha value (which lme4 stores as theta = 1/alpha)
alpha <- 1/getME(m, "glmer.nb.theta")
### ICC for level 2
ICC_l2 <- (var_k + var_j)/(var_k + var_j + alpha)
### ICC for level 3
ICC_l3 <- var_k/(var_k + var_j + alpha)
References
Tseloni, A., & Pease, K. (2003). Repeat personal victimization. ‘Boosts’ or ‘flags’? British Journal of Criminology, 43(1), 196-212. doi:10.1093/bjc/43.1.19 | How to compute intraclass correlation (ICC) for THREE-level negative binomial hierarchical model? | I don't know if you still need the answer for this, but I'll try anyway.
The ICC for a two level negative binomial model (Tseloni and Pease, 2003) can be easily calculated by:
$$
\rho = \frac{\sigma_{ | How to compute intraclass correlation (ICC) for THREE-level negative binomial hierarchical model?
I don't know if you still need the answer for this, but I'll try anyway.
The ICC for a two level negative binomial model (Tseloni and Pease, 2003) can be easily calculated by:
$$
\rho = \frac{\sigma_{j}^2}{\sigma_{j}^2 + \alpha}
$$
where $\sigma_{j}^2$ is the variance of between-group differences (level 2), and $\alpha$ is the variance at level 1, though the parameter reported by lme4 as the overdispersion parameter ($\theta$) is $\alpha^{-1}$.
So, in a three level model, there are two intra-class correlations that can be calculated: individuals within level-2 groups, and level-2 groups within level-3 groups.
Using a hypothetical example, if we had pupils nested in classes ($j$) nested in schools ($k$), the formulas for the ICC are:
$$
\rho_{class} = \frac{\sigma_{j}^2 + \sigma_{k}^2}{\sigma_{j}^2 +\sigma_k^2 + \alpha}
$$
$$
\rho_{school} = \frac{\sigma_{k}^2}{\sigma_{j}^2 +\sigma_k^2 + \alpha}
$$
where $\sigma_j^2$ is the between class variance, $\sigma_k^2$ is the between school variance, and $\alpha$ is the between pupil variance. $\rho_{class}$ would be the correlation between two pupils in the same class, and $\rho_{school}$ the correlation between two classes in the same school.
Now, to calculate these ICCs using lme4 we need to access the specific estimates stored in the glmerMod object generated by glmer.nb.
So assuming you have a three level model named m
class(m) ## "glmerMod"
### store the intercepts variance, which
### frustratingly, is also called theta in lme4 models
### and it's stored as the sd, hence the need to square it
var_k <- as.numeric(getME(m, "theta")[2]^2) # level 3 variance
var_j <- as.numeric(getME(m, "theta")[1]^2) # level 2 variance
### store the alpha value (which lme4 stores as theta = 1/alpha)
alpha <- 1/getME(m, "glmer.nb.theta")
### ICC for level 2
ICC_l2 <- (var_k + var_j)/(var_k + var_j + alpha)
### ICC for level 3
ICC_l3 <- var_k/(var_k + var_j + alpha)
References
Tseloni, A., & Pease, K. (2003). Repeat personal victimization. ‘Boosts’ or ‘flags’? British Journal of Criminology, 43(1), 196-212. doi:10.1093/bjc/43.1.19 | How to compute intraclass correlation (ICC) for THREE-level negative binomial hierarchical model?
I don't know if you still need the answer for this, but I'll try anyway.
The ICC for a two level negative binomial model (Tseloni and Pease, 2003) can be easily calculated by:
$$
\rho = \frac{\sigma_{ |
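One practical caveat to add (a suggestion, not part of the original answer): the order of the components of getME(m, "theta") depends on how the random effects were written in the formula, so it is safer to pick them out by name; the grouping-factor names class and school below are just the hypothetical ones from the example.
th <- getME(m, "theta")
names(th)                                           # e.g. "class.(Intercept)", "school.(Intercept)"
var_j <- unname(th[grep("^class",  names(th))])^2   # level-2 (between-class) variance
var_k <- unname(th[grep("^school", names(th))])^2   # level-3 (between-school) variance
VarCorr(m)                                          # cross-check against the printed variances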
53,738 | Is the $t$ test a special case of the generalized linear model? | As @Glen_b notes, the $t$-test does not assume the response is distributed as $t$. It assumes the response is normally distributed. (The normal is, of course, in the exponential family.) However, the motivation for the $t$-test is that the group SDs are not known a-priori, but are estimated from the data instead. That fact induces more uncertainty into your result, and so we need to take that additional uncertainty into account. When you do so, it turns out that the test statistic $(\bar x_1 - \bar x_2)/SE$ is distributed as $t$. That's it.
The $t$-test is, in fact, a special case of linear regression, which, in turn, is a special case of the generalized linear model.
Regarding your specific questions:
If you know the response is normally distributed, and you know the SD a-priori, you should use the $z$-test instead of the $t$-test.
The $t$ distribution is not a member of the exponential family, so no, that wouldn't be a GLM. | Is the $t$ test a special case of the generalized linear model? | As @Glen_b notes, the $t$-test does not assume the response is distributed as $t$. It assumes the response is normally distributed. (The normal is, of course, in the exponential family.) However, t | Is the $t$ test a special case of the generalized linear model?
As @Glen_b notes, the $t$-test does not assume the response is distributed as $t$. It assumes the response is normally distributed. (The normal is, of course, in the exponential family.) However, the motivation for the $t$-test is that the group SDs are not known a-priori, but are estimated from the data instead. That fact induces more uncertainty into your result, and so we need to take that additional uncertainty into account. When you do so, it turns out that the test statistic $(\bar x_1 - \bar x_2)/SE$ is distributed as $t$. That's it.
The $t$-test is, in fact, a special case of linear regression, which, in turn, is a special case of the generalized linear model.
Regarding your specific questions:
If you know the response is normally distributed, and you know the SD a-priori, you should use the $z$-test instead of the $t$-test.
The $t$ distribution is not a member of the exponential family, so no, that wouldn't be a GLM. | Is the $t$ test a special case of the generalized linear model?
As @Glen_b notes, the $t$-test does not assume the response is distributed as $t$. It assumes the response is normally distributed. (The normal is, of course, in the exponential family.) However, t |
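A quick toy illustration (simulated data, no assumptions about your own) of the nesting described above: the equal-variance two-sample t-test, lm(), and a Gaussian glm() give the same estimate and the same t statistic (up to sign convention) for the group effect.
set.seed(1)
g <- gl(2, 30)                          # two groups of 30
y <- rnorm(60, mean = c(0, 0.5)[g])     # group means 0 and 0.5
t.test(y ~ g, var.equal = TRUE)         # classical pooled-variance t-test
summary(lm(y ~ g))$coefficients         # same t statistic for the group coefficient
summary(glm(y ~ g, family = gaussian()))$coefficients   # identity-link Gaussian GLM: identical fit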
53,739 | Why is RBF kernel used in SVM? | RUser4512 gave the correct answer: RBF kernel works well in practice and it is relatively easy to tune. It's the SVM equivalent to "no one's ever been fired for estimating an OLS regression:" it's accepted as a reasonable default method. Clearly OLS isn't perfect in every (or even many) scenarios, but it's a well-studied method, and widely understood. Likewise, the RBF kernel is well-studied and widely understood, and many SVM packages include it as a default method.
But the RBF kernel has a number of other properties. In these types of questions, when someone is asking about "why do we do things this way", I think it's important to also draw contrasts to other methods to develop context.
It is a stationary kernel, which means that it is invariant to translation. Suppose you are computing $K(x,y).$ A stationary kernel will yield the same value $K(x,y)$ for $K(x+c,y+c)$, where $c$ may be vector-valued of dimension to match the inputs. For the RBF, this is accomplished by working on the difference of the two vectors. For contrast, note that the linear kernel does not have the stationarity property.
The single-parameter version of the RBF kernel has the property that it is isotropic, i.e. the scaling by $\gamma$ occurs the same amount in all directions. This can be easily generalized, though, by slightly tweaking the RBF kernel to $K(x,y)=\exp\left(-(x-y)'\Gamma(x-y)\right)$ where $\Gamma$ is a p.s.d. matrix.
Another property of the RBF kernel is that it is infinitely smooth. This is aesthetically pleasing, and somewhat satisfying visually, but perhaps it is not the most important property. Compare the RBF kernel to the Matern kernel and you'll see that some kernels are quite a bit more jagged!
The moral of the story is that kernel-based methods are very rich, and with a little bit of work, it's very practical to develop a kernel suited to your particular needs. But if one is using an RBF kernel as a default, you'll have a reasonable benchmark for comparison. | Why is RBF kernel used in SVM? | RUser4512 gave the correct answer: RBF kernel works well in practice and it is relatively easy to tune. It's the SVM equivalent to "no one's ever been fired for estimating an OLS regression:" it's acc | Why is RBF kernel used in SVM?
RUser4512 gave the correct answer: RBF kernel works well in practice and it is relatively easy to tune. It's the SVM equivalent to "no one's ever been fired for estimating an OLS regression:" it's accepted as a reasonable default method. Clearly OLS isn't perfect in every (or even many) scenarios, but it's a well-studied method, and widely understood. Likewise, the RBF kernel is well-studied and widely understood, and many SVM packages include it as a default method.
But the RBF kernel has a number of other properties. In these types of questions, when someone is asking about "why do we do things this way", I think it's important to also draw contrasts to other methods to develop context.
It is a stationary kernel, which means that it is invariant to translation. Suppose you are computing $K(x,y).$ A stationary kernel will yield the same value $K(x,y)$ for $K(x+c,y+c)$, where $c$ may be vector-valued of dimension to match the inputs. For the RBF, this is accomplished by working on the difference of the two vectors. For contrast, note that the linear kernel does not have the stationarity property.
The single-parameter version of the RBF kernel has the property that it is isotropic, i.e. the scaling by $\gamma$ occurs the same amount in all directions. This can be easily generalized, though, by slightly tweaking the RBF kernel to $K(x,y)=\exp\left(-(x-y)'\Gamma(x-y)\right)$ where $\Gamma$ is a p.s.d. matrix.
Another property of the RBF kernel is that it is infinitely smooth. This is aesthetically pleasing, and somewhat satisfying visually, but perhaps it is not the most important property. Compare the RBF kernel to the Matern kernel and you'll see that some kernels are quite a bit more jagged!
The moral of the story is that kernel-based methods are very rich, and with a little bit of work, it's very practical to develop a kernel suited to your particular needs. But if one is using an RBF kernel as a default, you'll have a reasonable benchmark for comparison. | Why is RBF kernel used in SVM?
RUser4512 gave the correct answer: RBF kernel works well in practice and it is relatively easy to tune. It's the SVM equivalent to "no one's ever been fired for estimating an OLS regression:" it's acc |
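A base-R sketch of the stationarity point above: shifting both inputs by the same vector leaves the RBF kernel value unchanged, while the linear kernel value changes. The vectors and the value of gamma are arbitrary.
rbf <- function(x, y, gamma = 1) exp(-gamma * sum((x - y)^2))
lin <- function(x, y) sum(x * y)
x <- c(1, 2); y <- c(3, -1); shift <- c(10, 10)
rbf(x, y) - rbf(x + shift, y + shift)   # 0: the RBF kernel is translation invariant
lin(x, y) - lin(x + shift, y + shift)   # non-zero: the linear kernel is not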
53,740 | Why is RBF kernel used in SVM? | I think the good reasons to use RBF kernel are that they work well in practice and they are relatively easy to calibrate, as opposed to other kernels.
The polynomial kernel has three parameters (offset, scaling, degree).
The RBF kernel has one parameter and there are good heuristics to find it. See, for example: SVM rbf kernel - heuristic method for estimating gamma
Linear separability in the feature space may not be the reason. Indeed, it is easy, with a Gaussian kernel, to enforce separability and a perfect accuracy on the train set (setting $\gamma$ to a large value). However, these model have a very bad generalization.
Edit.
This short video shows the influence of increasing the bandwidth parameter on the decision boundary. | Why is RBF kernel used in SVM? | I think the good reasons to use RBF kernel are that they work well in practice and they are relatively easy to calibrate, as opposed to other kernels.
The polynomial kernel has three parameter (offset | Why is RBF kernel used in SVM?
I think the good reasons to use RBF kernel are that they work well in practice and they are relatively easy to calibrate, as opposed to other kernels.
The polynomial kernel has three parameters (offset, scaling, degree).
The RBF kernel has one parameter and there are good heuristics to find it. See, for example: SVM rbf kernel - heuristic method for estimating gamma
Linear separability in the feature space may not be the reason. Indeed, it is easy, with a Gaussian kernel, to enforce separability and a perfect accuracy on the train set (setting $\gamma$ to a large value). However, these model have a very bad generalization.
Edit.
This short video shows the influence of increasing the bandwidth parameter on the decision boundary. | Why is RBF kernel used in SVM?
I think the good reasons to use RBF kernel are that they work well in practice and they are relatively easy to calibrate, as opposed to other kernels.
The polynomial kernel has three parameter (offset |
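One common rule of thumb for the single RBF parameter (this may or may not be the heuristic in the linked post, so treat it as an assumption) sets the bandwidth from the median pairwise distance between training points:
set.seed(1)
X <- matrix(rnorm(200 * 5), ncol = 5)     # toy training inputs
d <- as.numeric(dist(X))                  # all pairwise Euclidean distances
sigma <- median(d)                        # "median heuristic" bandwidth
gamma <- 1 / (2 * sigma^2)                # for K(x, y) = exp(-gamma * ||x - y||^2)
gamma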
53,741 | How to calculate impulse responses for a given autoregressive process? | Elaborating on Martin's answer, you will want to compare coefficients in the general AR(p) case.
First, write the AR(p) process in compact lag operator notation, using $\rho(L)=1-\rho_1L-\ldots-\rho_pL^p$.
We have from $\rho(L)Y_t=\epsilon_t$ and the lag operator statement of an $MA(\infty)$ process, $Y_t=\psi(L)\epsilon_{t}$, that
$$\rho(L)Y_t=\rho(L)\psi(L)\epsilon_t=\epsilon_t$$
Hence, $\rho(L)\psi(L)=1$. Now the two polynomials $\rho(L)\psi(L)$ and 1 (the latter of order 0) are identical if and only if the coefficients of each power are identical.
Example for an $AR(2)$:
We obtain
$$
(1-\rho_1L-\rho_2L^2)(\psi_0+\psi_1L+\psi_2L^2+\psi_3L^3+\ldots)=1
$$
Matching powers of $L$ yields
\begin{align*}
\psi_0&=1\\
\psi_1-\rho_1\psi_0&=0\Rightarrow\psi_1=\rho_1\\
-\rho_2\psi_0-\rho_1\psi_1+\psi_2&=0\Rightarrow\psi_2=\rho_2+\rho_1^2\\
\ldots&
\end{align*} | How to calculate impulse responses for a given autoregressive process? | Elaborating on Martin's answer, you will want to compare coefficients in the general AR(p) case.
First, write the AR(p) process in compact lag operator notation, using $\rho(L)=1-\rho_1L-\ldots-\rho_p | How to calculate impulse responses for a given autoregressive process?
Elaborating on Martin's answer, you will want to compare coefficients in the general AR(p) case.
First, write the AR(p) process in compact lag operator notation, using $\rho(L)=1-\rho_1L-\ldots-\rho_pL^p$.
We have from $\rho(L)Y_t=\epsilon_t$ and the lag operator statement of an $MA(\infty)$ process, $Y_t=\psi(L)\epsilon_{t}$, that
$$\rho(L)Y_t=\rho(L)\psi(L)\epsilon_t=\epsilon_t$$
Hence, $\rho(L)\psi(L)=1$. Now the two polynomials $\rho(L)\psi(L)$ and 1 (the latter of order 0) are identical if and only if the coefficients of each power are identical.
Example for an $AR(2)$:
We obtain
$$
(1-\rho_1L-\rho_2L^2)(\psi_0+\psi_1L+\psi_2L^2+\psi_3L^3+\ldots)=1
$$
Matching powers of $L$ yields
\begin{align*}
\psi_0&=1\\
\psi_1-\rho_1\psi_0&=0\Rightarrow\psi_1=\rho_1\\
-\rho_2\psi_0-\rho_1\psi_1+\psi_2&=0\Rightarrow\psi_2=\rho_2+\rho_1^2\\
\ldots&
\end{align*} | How to calculate impulse responses for a given autoregressive process?
Elaborating on Martin's answer, you will want to compare coefficients in the general AR(p) case.
First, write the AR(p) process in compact lag operator notation, using $\rho(L)=1-\rho_1L-\ldots-\rho_p |
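The coefficient matching above can be verified numerically: base R's ARMAtoMA() returns the $\psi_j$ (for $j \ge 1$) of the MA($\infty$) representation. The values of $\rho_1$ and $\rho_2$ below are arbitrary stationary choices.
rho1 <- 0.5; rho2 <- 0.3
psi <- ARMAtoMA(ar = c(rho1, rho2), ma = numeric(0), lag.max = 5)
psi[1:2]
c(rho1, rho2 + rho1^2)   # matches the hand-derived psi_1 and psi_2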
53,742 | How to calculate impulse responses for a given autoregressive process? | Not sure if your equation is correct. I guess you meant
$$
y_t = \rho_1 y_{t-1} + \dots + \rho_p y_{t-p} + \epsilon_t
$$
In case of an AR(1) process you have to cast it into its MA($\infty$) (or 'covariance stationary') representation by reinserting the past observations $y_{t-j}$ where $j=1,\dots,\infty$:
$$
y_t = c + \rho_1 y_{t-1} + \epsilon_t .
$$
With $y_{t-1} = c + \rho_1 y_{t-2} + \epsilon_{t-1}$ it follows that
$$
y_t = c + \rho_1 (c + \rho_1 y_{t-2} + \epsilon_{t-1}) + \epsilon_t.
$$
If you do that infinitely you end up with
$$
y_t = \frac{c}{1-\rho} + \sum_{j=0}^{\infty} \psi_j \epsilon_{t-j} \quad \text{with} \quad \psi_j = \rho^j
$$
where we used that $\sum_{j=0}^{\infty}\rho^j c = \frac{c}{1-\rho}$ and the condition that $\rho < 1$.
The impulse responses can now be calculated by
$$
\frac{\partial y_{t+j}}{\partial \epsilon_t} = \psi_j.
$$
You basically have to do the same for higher order AR(p) processes. There you can compute the $\psi_j$'s by 'comparing coefficients'. An important condition for an $AR(p)$ process to have a covariance stationary representation is that the roots of its lag polynomial all lie outside the unit circle (that's why we assumed that $\rho < 1$).
I hope this rudimentary info helps you... | How to calculate impulse responses for a given autoregressive process? | Not sure if your equation is correct. I guess you meant
$$
y_t = \rho_1 y_{t-1} + \dots + \rho_p y_{t-p} + \epsilon_t
$$
In case of a AR(1) process you have to cast it into its MA($\infty$) (or 'covar | How to calculate impulse responses for a given autoregressive process?
Not sure if your equation is correct. I guess you meant
$$
y_t = \rho_1 y_{t-1} + \dots + \rho_p y_{t-p} + \epsilon_t
$$
In case of a AR(1) process you have to cast it into its MA($\infty$) (or 'covariance stationary') representation by reinserting the past observations $y_{t-j}$ where $j=1,\dots,\infty$:
$$
y_t = c + \rho_1 y_{t-1} + \epsilon_t .
$$
With $y_{t-1} = c + \rho_1 y_{t-2} + \epsilon_{t-1}$ it follows that
$$
y_t = c + \rho_1 (c + \rho_1 y_{t-2} + \epsilon_{t-1}) + \epsilon_t.
$$
If you do that infinitely you end up with
$$
y_t = \frac{c}{1-\rho} + \sum_{j=0}^{\infty} \psi_j \epsilon_{t-j} \quad \text{with} \quad \psi_j = \rho^j
$$
where we used that $\sum_{j=0}^{\infty}\rho^j c = \frac{c}{1-\rho}$ and the condition that $\rho < 1$.
The impulse responses can now be calculated by
$$
\frac{\partial y_{t+j}}{\partial \epsilon_t} = \psi_j.
$$
You basically have to do the same for higher order AR(p) processes. There you can compute the $\psi_j$'s by 'comparing coefficients'. An important condition for an $AR(p)$ process to have a covariance stationary representation is that its roots all lie outside the unit circle (thats why we assumed that $\rho < 1$).
I hope this rudimentary info helps you... | How to calculate impulse responses for a given autoregressive process?
Not sure if your equation is correct. I guess you meant
$$
y_t = \rho_1 y_{t-1} + \dots + \rho_p y_{t-p} + \epsilon_t
$$
In case of a AR(1) process you have to cast it into its MA($\infty$) (or 'covar |
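A one-line numerical check of the AR(1) result $\psi_j = \rho^j$ (illustrative value of $\rho$):
rho <- 0.8
all.equal(ARMAtoMA(ar = rho, ma = numeric(0), lag.max = 10), rho^(1:10))   # TRUE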
53,743 | Fluctuations in hazard function at high (x) values | This kind of wild fluctuation arises from floating point rounding errors in the calculations.
The hazard function of a $\Gamma(a,1)$ distribution, with shape parameter $a$ and scale parameter $1$, equals
$$H(x; a) = \frac{x^{a-1}\exp(-x)}{\int_x^\infty t^{a-1} \exp(-t) dt }.$$
The maximum requested in the question is also the limiting value as $x\to\infty$, because the hazard function in this case is increasing.
Both the numerator and denominator are differentiable functions approaching zero as $x$ increases, so L'Hopital's Rule applies, telling us the limiting value of the ratio is the limit of the ratio of the derivatives:
$$\lim_{x\to\infty} H(x;a) = \lim_{x\to\infty}\frac{\exp(-x)\left((a-1)x^{a-2} - x^{a-1}\right)}{-x^{a-1}\exp(-x)} = \lim_{x\to\infty} 1 - \frac{a-1}{x} = 1.$$
When the scale is changed to $b$, the PDF must be divided by $b$ to compensate (to keep the total area equal to unity), implying the limiting value of the hazard function for a Gamma distribution with scale parameter $b$ is $1/b$.
A better way to compute this hazard function for large $x$ is to use the first few terms of its Taylor expansion around $x=\infty$:
$$H(x;a,b) \approx x^{-a} \left(\frac{(a-1)
\left(\frac{1}{b}\right)^{-a-1}}{x^2}-\frac{(a-1)
\left(\frac{1}{b}\right)^{-a}}{x}+\left(\frac{1}{b}\right)^{1-a}\right
) \left(\frac{x}{b}\right)^a.$$
This will be extremely accurate well before $x$ gets so large that the brute-force computation of the ratio breaks down. It is, of course, not accurate for small $x$. | Fluctuations in hazard function at high (x) values | This kind of wild fluctuation arises from floating point rounding errors in the calculations.
The hazard function of a $\Gamma(a,1)$ distribution, with shape parameter $a$ and scale parameter $1$, equ | Fluctuations in hazard function at high (x) values
This kind of wild fluctuation arises from floating point rounding errors in the calculations.
The hazard function of a $\Gamma(a,1)$ distribution, with shape parameter $a$ and scale parameter $1$, equals
$$H(x; a) = \frac{x^{a-1}\exp(-x)}{\int_x^\infty t^{a-1} \exp(-t) dt }.$$
The maximum requested in the question is also the limiting value as $x\to\infty$, because the hazard function in this case is increasing.
Both the numerator and denominator are differentiable functions approaching zero as $x$ increases, so L'Hopital's Rule applies, telling us the limiting value of the ratio is the limit of the ratio of the derivatives:
$$\lim_{x\to\infty} H(x;a) = \lim_{x\to\infty}\frac{\exp(-x)\left((a-1)x^{a-2} - x^{a-1}\right)}{-x^{a-1}\exp(-x)} = \lim_{x\to\infty} 1 - \frac{a-1}{x} = 1.$$
When the scale is changed to $b$, the PDF must be divided by $b$ to compensate (to keep the total area equal to unity), implying the limiting value of the hazard function for a Gamma distribution with scale parameter $b$ is $1/b$.
A better way to compute this hazard function for large $x$ is to use the first few terms of its Taylor expansion around $x=\infty$:
$$H(x;a,b) \approx x^{-a} \left(\frac{(a-1)
\left(\frac{1}{b}\right)^{-a-1}}{x^2}-\frac{(a-1)
\left(\frac{1}{b}\right)^{-a}}{x}+\left(\frac{1}{b}\right)^{1-a}\right
) \left(\frac{x}{b}\right)^a.$$
This will be extremely accurate well before $x$ gets so large that the brute-force computation of the ratio breaks down. It is, of course, not accurate for small $x$. | Fluctuations in hazard function at high (x) values
This kind of wild fluctuation arises from floating point rounding errors in the calculations.
The hazard function of a $\Gamma(a,1)$ distribution, with shape parameter $a$ and scale parameter $1$, equ |
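The numerical point is easy to reproduce in R (shape and scale below are arbitrary): the naive density/survival ratio collapses to NaN once both terms underflow, while the same ratio computed on the log scale stays stable and tends to the limit $1/b$.
a <- 2; b <- 3
x <- c(50, 200, 800, 3000)
h_naive  <- dgamma(x, shape = a, scale = b) / pgamma(x, shape = a, scale = b, lower.tail = FALSE)
h_stable <- exp(dgamma(x, shape = a, scale = b, log = TRUE) -
                pgamma(x, shape = a, scale = b, lower.tail = FALSE, log.p = TRUE))
cbind(x, h_naive, h_stable)   # naive version fails at the largest x; the stable one approaches 1/b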
53,744 | How many AdaBoost iterations? | The answer seems to be that it depends, based on the problem and on how you interpret the AdaBoost algorithm.
Mease and Wyner (2008) argue that AdaBoost should be run for a long time, until it converges, and that 1,000 iterations should be enough. The main point of the paper is that the intuition gained from the "Statistical view of boosting" due to Friedman, Hastie, and Tibshirani (2000) could be incorrect, and that this applies to their recommendations to regularize AdaBoost with the $\nu$ parameter and early stopping times. Their results suggest that long running times, no regularization, and deep trees as base learners make use of AdaBoost's strengths.
The response to that paper by Bennett demonstrates that AdaBoost converges more quickly and is more resistant to overfitting when deeper trees are used. Bennett concludes that
For slowly converging problems, AdaBoost will frequently be regularized by early stopping
and that
For more rapidly converging problems, AdaBoost will converge and enter an overtraining phase
where "overtraining" does not mean "overfitting" but rather "further improvement of out-of-sample performance after convergence."
Friedman, Hastie, and Tibshirani respond to Mease and Wyner with a demonstration that AdaBoost with decision stumps, shrinkage, and early stopping can perform better than deep trees, no shrinkage and running until convergence, but their argument depended on running the algorithm to convergence anyway. That is, even if 100 or 10 iterations are optimal, it is necessary to run many hundred iterations to find out where the correct stopping point should be. As user Matthew Drury pointed out in the comments, this only requires a single long run of the algorithm.
The paper itself, as well as the rejoinder to the response by Bickel and Ritov, emphasize that in low-dimensional problems AdaBoost does badly overfit and that early stopping is necessary; it seems that ten dimensions is enough for this to no longer be an issue.
The response by Buja and Stuetzle raises the possibility that the first few iterations of AdaBoost reduce bias and that the later iterations reduce variance, and offer that the Mease and Wyner approach of starting with a relatively unbiased but high-variance base learner like an 8-node tree makes sense for that reason. Therefore if low-variance predictions are desired, a longer running time could be useful.
Therefore what I am going to do is cross-validate over the following grid:
$\nu \in \{0.1, 1.0\}$
$\text{tree depth} \in \{1, 2, 3, 4\}$
$\text{iterations} \in \{1, \dots, 1000\}$ (but only ever fitting with 1000 iterations)
Basically, I'm looking into both approaches. This is partly because they both seem valid, and partly because many of my predictors are discrete, in which case the "statistical view" apparently applies more directly (although I wish I had a better understanding of how and why). | How many AdaBoost iterations? | The answer seems to be that it depends, based on the problem and on how you interpret the AdaBoost algorithm.
Mease and Wyner (2008) argue that AdaBoost should be run for a long time, until it converg | How many AdaBoost iterations?
The answer seems to be that it depends, based on the problem and on how you interpret the AdaBoost algorithm.
Mease and Wyner (2008) argue that AdaBoost should be run for a long time, until it converges, and that 1,000 iterations should be enough. The main point of the paper is that the intuition gained from the "Statistical view of boosting" due to Friedman, Hastie, and Tibshirani (2000) could be incorrect, and that this applies to their recommendations to regularize AdaBoost with the $\nu$ parameter and early stopping times. Their results suggest that long running times, no regularization, and deep trees as base learners make use of AdaBoost's strengths.
The response to that paper by Bennett demonstrates that AdaBoost converges more quickly and is more resistant to overfitting when deeper trees are used. Bennett concludes that
For slowly converging problems, AdaBoost will frequently be regularized by early stopping
and that
For more rapidly converging problems, AdaBoost will converge and enter an overtraining phase
where "overtraining" does not mean "overfitting" but rather "further improvement of out-of-sample performance after convergence."
Friedman, Hastie, and Tibshirani respond to Mease and Wyner with a demonstration that AdaBoost with decision stumps, shrinkage, and early stopping can perform better than deep trees, no shrinkage and running until convergence, but their argument depended on running the algorithm to convergence anyway. That is, even if 100 or 10 iterations are optimal, it is necessary to run many hundred iterations to find out where the correct stopping point should be. As user Matthew Drury pointed out in the comments, this only requires a single long run of the algorithm.
The paper itself, as well as the rejoinder to the response by Bickel and Ritov, emphasize that in low-dimensional problems AdaBoost does badly overfit and that early stopping is necessary; it seems that ten dimensions is enough for this to no longer be an issue.
The response by Buja and Stuetzle raises the possibility that the first few iterations of AdaBoost reduce bias and that the later iterations reduce variance, and offer that the Mease and Wyner approach of starting with a relatively unbiased but high-variance base learner like an 8-node tree makes sense for that reason. Therefore if low-variance predictions are desired, a longer running time could be useful.
Therefore what I am going to do is cross-validate over the following grid:
$\nu \in \{0.1, 1.0\}$
$\text{tree depth} \in \{1, 2, 3, 4\}$
$\text{iterations} \in \{1, \dots, 1000\}$ (but only ever fitting with 1000 iterations)
Basically, I'm looking into both approaches. This is partly because they both seem valid, and partly because many of my predictors are discrete, in which case the "statistical view" apparently applies more directly (although I wish I had a better understanding of how and why). | How many AdaBoost iterations?
The answer seems to be that it depends, based on the problem and on how you interpret the AdaBoost algorithm.
Mease and Wyner (2008) argue that AdaBoost should be run for a long time, until it converg |
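A hedged sketch of that grid search using the gbm package (one possible implementation only; gbm's "adaboost" distribution expects a 0/1 response, and df with outcome y is a placeholder for your own data): each setting is fitted once with 1000 trees and the best iteration is read off the cross-validation curve.
library(gbm)
grid <- expand.grid(shrinkage = c(0.1, 1.0), depth = 1:4)
results <- lapply(seq_len(nrow(grid)), function(i) {
  fit <- gbm(y ~ ., data = df, distribution = "adaboost",
             n.trees = 1000, shrinkage = grid$shrinkage[i],
             interaction.depth = grid$depth[i], cv.folds = 5, verbose = FALSE)
  data.frame(grid[i, ],
             best_iter = gbm.perf(fit, method = "cv", plot.it = FALSE),
             cv_error  = min(fit$cv.error))
})
do.call(rbind, results)   # compare settings by their minimum cross-validated error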
53,745 | Metropolis Hastings Algorithm - Prior vs Proposal vs Numerator of Bayes Theorem | As you say, the three elements used in MH are the proposal (jumping) probability, the prior probability, and the likelihood. Say that we want to estimate the posterior distribution of a parameter $\Theta$ after observing some data $\mathbf x$, that is, $p(\Theta|\mathbf{x})$. Assume that we know the prior distribution $p(\Theta)$, that summarizes our beliefs about the value of $\Theta$ before we observe any data.
Now, it is usually impossible to compute the posterior distribution analytically. Instead, an ingenious method is to create an abstract Markov chain whose states are values of $\Theta$, such that that the stationary distribution of such chain is the desired posterior distribution. Metropolis-Hastings (MH) is a schema (not the only one, e.g. there's Gibbs sampling) to construct such a chain, that requires to carefully select a jumping (or proposal) distribution $q(\Theta|\theta)$. In order to go from one value of $\Theta$, denoted as $\theta$, to the next, say $\theta'$, we apply the following procedure:
Sample a candidate (or proposed) $\theta^*$ as the next value, by sampling from $q(\Theta|\theta)$, where $\theta$ is the current value.
Accept the candidate value with a probability given by the MH acceptance ratio, given by the formula:
$$
\alpha(\theta,\theta^*) = \min\left[1,\frac{p(\theta^*|\mathbf{x})\;q(\theta|\theta^*)}{p(\theta|\mathbf{x})\;q(\theta^*|\theta)} \right].
$$
By applying Bayes rule the the posterior probability terms in the formula above, we get:
$$
\alpha(\theta,\theta^*) = \min\left[1,\frac{p(\theta^*)\;p(\mathbf{x}|\theta^*)\;q(\theta|\theta^*)}{p(\theta)\;p(\mathbf{x}|\theta)\;q(\theta^*|\theta)} \right].
$$
After iterating this process "enough" times, we are left with a collection of points that approximates the posterior distribution.
A counterintuitive thing about the formula above is that the proposal probability of the candidate value appears at the denominator, while the "reverse" proposal probability (i.e. going from the proposed to the original value) is at the numerator. This is so that the overall transition distribution resulting from this process ensures a necessary property of the Markov chain called detailed balance. I found this paper quite helpful on this topic.
Now, it is perfectly possible to use the prior distribution itself as the proposal distribution: $q(\Theta|\theta)=p(\Theta)$. Note that in this case the proposal distribution is not conditional on the current value of $\Theta$, but that is not a problem in theory. If we substitute this in the formula for $\alpha$ above, and carry out some simplifications, we obtain:
$$
\alpha(\theta,\theta^*) = \min\left[1,\frac{p(\mathbf{x}|\theta^*)}{p(\mathbf{x}|\theta)} \right].
$$
What is left is just the ratio of the likelihoods. This is a very simple approach and usually not very efficient, but may work for simple problems.
Regarding the likelihood, I think it really depends on what your model is. Regarding the formula you write, I don't really understand what is going on. What are $Data$ and $Model$ in there? | Metropolis Hastings Algorithm - Prior vs Proposal vs Numerator of Bayes Theorem | As you say, the three elements used in MH are the proposal (jumping) probability, the prior probability, and the likelihood. Say that we want to estimate the posterior distribution of a parameter $\Th | Metropolis Hastings Algorithm - Prior vs Proposal vs Numerator of Bayes Theorem
As you say, the three elements used in MH are the proposal (jumping) probability, the prior probability, and the likelihood. Say that we want to estimate the posterior distribution of a parameter $\Theta$ after observing some data $\mathbf x$, that is, $p(\Theta|\mathbf{x})$. Assume that we know the prior distribution $p(\Theta)$, that summarizes our beliefs about the value of $\Theta$ before we observe any data.
Now, it is usually impossible to compute the posterior distribution analytically. Instead, an ingenious method is to create an abstract Markov chain whose states are values of $\Theta$, such that that the stationary distribution of such chain is the desired posterior distribution. Metropolis-Hastings (MH) is a schema (not the only one, e.g. there's Gibbs sampling) to construct such a chain, that requires to carefully select a jumping (or proposal) distribution $q(\Theta|\theta)$. In order to go from one value of $\Theta$, denoted as $\theta$, to the next, say $\theta'$, we apply the following procedure:
Sample a candidate (or proposed) $\theta^*$ as the next value, by sampling from $q(\Theta|\theta)$, where $\theta$ is the current value.
Accept the candidate value with a probability given by the MH acceptance ratio, given by the formula:
$$
\alpha(\theta,\theta^*) = \min\left[1,\frac{p(\theta^*|\mathbf{x})\;q(\theta|\theta^*)}{p(\theta|\mathbf{x})\;q(\theta^*|\theta)} \right].
$$
By applying Bayes rule the the posterior probability terms in the formula above, we get:
$$
\alpha(\theta,\theta^*) = \min\left[1,\frac{p(\theta^*)\;p(\mathbf{x}|\theta^*)\;q(\theta|\theta^*)}{p(\theta)\;p(\mathbf{x}|\theta)\;q(\theta^*|\theta)} \right].
$$
After iterating this process "enough" times, we are left with a collection of points that approximates the posterior distribution.
A counterintuitive thing about the formula above is that the proposal probability of the candidate value appears at the denominator, while the "reverse" proposal probability (i.e. going from the proposed to the original value) is at the numerator. This is so that the overall transition distribution resulting from this process ensures a necessary property of the Markov chain called detailed balance. I found this paper quite helpful on this topic.
Now, it is perfectly possible to use the prior distribution itself as the proposal distribution: $q(\Theta|\theta)=p(\Theta)$. Note that in this case the proposal distribution is not conditional on the current value of $\Theta$, but that is not a problem in theory. If we substitute this in the formula for $\alpha$ above, and carry out some simplifications, we obtain:
$$
\alpha(\theta,\theta^*) = \min\left[1,\frac{p(\mathbf{x}|\theta^*)}{p(\mathbf{x}|\theta)} \right].
$$
What is left is just the ratio of the likelihoods. This is a very simple approach and usually not very efficient, but may work for simple problems.
Regarding the likelihood, I think it really depends on what your model is. Regarding the formula you write, I don't really understand what is going on. What are $Data$ and $Model$ in there? | Metropolis Hastings Algorithm - Prior vs Proposal vs Numerator of Bayes Theorem
As you say, the three elements used in MH are the proposal (jumping) probability, the prior probability, and the likelihood. Say that we want to estimate the posterior distribution of a parameter $\Th |
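A minimal sketch of the last point, an independence sampler that uses the prior as the proposal so that the acceptance ratio reduces to the likelihood ratio; the model (normal data with known sd, N(0, 3^2) prior on the mean) is invented purely for illustration.
set.seed(1)
x <- rnorm(50, mean = 1.5, sd = 1)                  # observed data
loglik <- function(th) sum(dnorm(x, mean = th, sd = 1, log = TRUE))
n_iter <- 5000
theta <- numeric(n_iter)                            # theta[1] = 0 is the starting value
for (i in 2:n_iter) {
  prop <- rnorm(1, mean = 0, sd = 3)                # candidate drawn from the prior
  log_alpha <- loglik(prop) - loglik(theta[i - 1])  # prior terms cancel against the proposal
  theta[i] <- if (log(runif(1)) < log_alpha) prop else theta[i - 1]
}
mean(theta[-(1:1000)])                              # posterior mean after burn-in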
53,746 | Metropolis Hastings Algorithm - Prior vs Proposal vs Numerator of Bayes Theorem | First to explain the MH algorithm, it's used to approximate numerically a target distribution, in this case $p(\theta|D)$. At each stage of the algorithm:
A value $\theta_{proposed}$ is proposed using the jumping or proposal distribution.
An acceptance ratio is calculated, equal to $\frac{p(\theta_{proposed}|D)}{p(\theta_{current}|D)}$. Because we cannot calculate $p(\theta|D)$ directly, we leverage the proportional expression of Bayes rule and calculate this quotient using the products of likelihood and prior corresponding to $\theta_{proposed}, \theta_{current}$. That is, the acceptance ratio is:
$$\frac{p(D|\theta_{proposed})p(\theta_{proposed})}{p(D|\theta_{current})p(\theta_{current})}$$
If this ratio exceeds one—intuitively, if the proposed parameter value is more likely given data and prior—we accept this proposal. If not, we accept it with probability equal to the ratio.
As more and more values are sampled in this way, the trace of $\theta$ values more and more closely approximates the true distribution of $\theta$.
To your specific questions:
Your intuition on the jumping distribution is correct. There are a few more formal requirements, see wiki, but its primary purpose is to propose a candidate parameter value at each step of the algorithm.
The prior expresses an analyst's prior beliefs about the parameter values. It is distinct from the proposal distribution.
You are correct that the application of Bayes rule happens in the acceptance ratio (see step 2 above).
For resources, a very intuitive explanation is given in Kruschke's Doing Bayesian Data Analysis, and that example is loosely summarized here. | Metropolis Hastings Algorithm - Prior vs Proposal vs Numerator of Bayes Theorem | First to explain the MH algorithm, it's used to approximate numerically a target distribution, in this case $p(\theta|D)$. At each stage of the algorithm:
A value $\theta_{proposed}$ is proposed usin | Metropolis Hastings Algorithm - Prior vs Proposal vs Numerator of Bayes Theorem
First to explain the MH algorithm, it's used to approximate numerically a target distribution, in this case $p(\theta|D)$. At each stage of the algorithm:
A value $\theta_{proposed}$ is proposed using the jumping or proposal distribution.
An acceptance ratio is calculated, equal to $\frac{p(\theta_{proposed}|D)}{p(\theta_{current}|D)}$. Because we cannot calculate $p(\theta|D)$ directly, we leverage the proportional expression of Bayes rule and calculate this quotient using the products of likelihood and prior corresponding to $\theta_{proposed}, \theta_{current}$. That is, the acceptance ratio is:
$$\frac{p(D|\theta_{proposed})p(\theta_{proposed})}{p(D|\theta_{current})p(\theta_{current})}$$
If this ratio exceeds one—intuitively, if the proposed parameter value is more likely given data and prior—we accept this proposal. If not, we accept it with probability equal to the ratio.
As more and more values are sampled in this way, the trace of $\theta$ values more and more closely approximates the true distribution of $\theta$.
To your specific questions:
Your intuition on the jumping distribution is correct. There are a few more formal requirements, see wiki, but its primary purpose is to propose a candidate parameter value at each step of the algorithm.
The prior expresses an analyst's prior beliefs about the parameter values. It is distinct from the proposal distribution.
You are correct that the application of Bayes rule happens in the acceptance ratio (see step 2 above).
For resources, a very intuitive explanation is given in Kruschke's Doing Bayesian Data Analysis, and that example is loosely summarized here. | Metropolis Hastings Algorithm - Prior vs Proposal vs Numerator of Bayes Theorem
First to explain the MH algorithm, it's used to approximate numerically a target distribution, in this case $p(\theta|D)$. At each stage of the algorithm:
A value $\theta_{proposed}$ is proposed usin |
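For contrast with the prior-as-proposal sketch above, here are the same two steps with a random-walk proposal, where the acceptance ratio keeps both likelihood and prior (the symmetric proposal cancels); step size, prior and data are again invented.
set.seed(2)
x <- rnorm(50, mean = 1.5, sd = 1)
log_post <- function(th) sum(dnorm(x, mean = th, sd = 1, log = TRUE)) +
  dnorm(th, mean = 0, sd = 3, log = TRUE)           # log likelihood + log prior
n_iter <- 5000
theta <- numeric(n_iter)                            # start at 0
for (i in 2:n_iter) {
  prop <- rnorm(1, mean = theta[i - 1], sd = 0.5)        # step 1: propose near the current value
  log_alpha <- log_post(prop) - log_post(theta[i - 1])   # step 2: acceptance ratio on the log scale
  theta[i] <- if (log(runif(1)) < log_alpha) prop else theta[i - 1]
}
quantile(theta[-(1:1000)], c(0.025, 0.5, 0.975))    # approximate posterior summary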
53,747 | The name of 'Fused' Lasso | The term is clearly explained in the abstract of the paper[1] you mention.
We propose the ‘fused lasso’, a generalization that is designed for problems with features that can be ordered in some meaningful way. The fused lasso penalizes the $L_1$-norm of both the coefficients and their successive differences. Thus it encourages sparsity of the coefficients and also sparsity of their differences—i.e. local constancy of the coefficient profile.
That is, it's called that because adjacent parameters may be set equal -- i.e. "fused" (somewhat akin to aligning broken bones, which ultimately fuse together).
This is the first meaning of fuse here "to join or blend to form a single entity".
Diagrams in the paper further highlight the "fusion" of adjacent parameters.
[1] Tibshirani et al (2005),
"Sparsity and smoothness via the fused lasso,"
Journal of the Royal Statistical Society, Series B, 67, 91–108
53,748 | Best Subset Selection Questions | Question 1
Yes, if we had three variables say (any more would be tedious to write out here) then all models containing 1 predictor would be fitted:
$$y_i = \beta_0 + \beta_1 x_{i1} + \varepsilon_i$$
$$y_i = \beta_0 + \beta_1 x_{i2} + \varepsilon_i$$
$$y_i = \beta_0 + \beta_1 x_{i3} + \varepsilon_i$$
Then all combinations containing two predictors
$$y_i = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \varepsilon_i$$
$$y_i = \beta_0 + \beta_1 x_{i2} + \beta_2 x_{i3} + \varepsilon_i$$
$$y_i = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i3} + \varepsilon_i$$
Then all combinations with three predictors, which in this case is just one, the full model
$$y_i = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \beta_3 x_{i3} + \varepsilon_i$$
Question 2
How do you want to measure "best"? Commonly used measures include AIC, BIC, Mallow's Cp, the adjusted $R^2$, or plain old $R^2$. All of these will provide a ranking of the models in terms of the features of that metric, but in practice they are all telling you which model gives the best predictions (prediction accuracy or related) given the complexity of the model fit.
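As a rough illustration (the data here are simulated, not from the question), all subsets of three predictors can be enumerated and ranked by such criteria in a few lines of R:
set.seed(1)
d <- data.frame(x1 = rnorm(50), x2 = rnorm(50), x3 = rnorm(50))
d$y <- 1 + 2 * d$x1 + rnorm(50)
vars <- c("x1", "x2", "x3")
subsets <- unlist(lapply(1:3, function(k) combn(vars, k, simplify = FALSE)), recursive = FALSE)
fits <- lapply(subsets, function(v) lm(reformulate(v, "y"), data = d))
data.frame(model = sapply(subsets, paste, collapse = " + "),
           AIC   = sapply(fits, AIC),
           adjR2 = sapply(fits, function(m) summary(m)$adj.r.squared))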
Having gone through this process, it would be difficult to then use the $p$ values of tests for terms in the model or an omnibus test for all terms as such tests would know nothing of the selection procedure you'd subjected the data to. It is also not clear how one might "correct" the p-values for the selection procedure with Best-subsets selection (Hastie et al 2009 - sorry I can't quite locate the page for this just now, will add if I find it later).
In addition, because you are using a hard threshold for inclusion (a term is either in the model or not, in which case its coefficient is $\beta_j = 0$) the coefficients may well be biased. If the true coefficient is small, it may not get selected in the above process given the sample of data you have collected. In that case your model would say the estimate for the coefficient is 0, which is biased low compared to the true value. Variables that remain in the model could also have coefficients biased high, because the selection procedure is likely to retain those terms if they have large effects, and throw them out if they don't.
53,749 | Best Subset Selection Questions | 1) Yes.
2) You evaluate by obtaining an estimate of the out of sample error rate for each of your models, and then choosing the model with the optimal estimate of out of sample error. A few common methods for this are cross validation, using a fully held out data set, and some kind of estimate that can be performed on the training data itself that penalizes for the number of parameters. In the first two cases, you have a model fit on training data, and a set of data you held out that the model has not seen. You make predictions on the held out data, and then use a sample estimate of your loss function (often sum of squared residuals in the linear regression setting). The third case subsumes methods like AIC and BIC, where a penalty term is applied to the training error rate.
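A minimal sketch of the held-out-data version in R (simulated data and squared-error loss are assumptions for illustration):
set.seed(1)
d <- data.frame(x = rnorm(100))
d$y <- 1 + 2 * d$x + rnorm(100)
train <- d[1:70, ]; test <- d[71:100, ]
m <- lm(y ~ x, data = train)                     # fit on training data only
mean((test$y - predict(m, newdata = test))^2)    # estimate of out-of-sample error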
All of these concepts are discussed in the book! Keep reading!
53,750 | Finding $b$ such that $e^{5B_t - bt}$ is a martingale | You should apply Ito's lemma:
Let $Y_t = 5 B_t - bt$ then
$$
M_t = \exp(5 B_t -bt) = \exp(Y_t)
$$
thus
$$
d\exp(Y_t) = dM_t = M_t dY_t + M_t \frac{1}{2} 25 dt = M_t (5 dB_t - b dt + 12.5 dt).
$$
It will be a martingale if the drift terms cancel. Thus if $b = 12.5$.
53,751 | Finding $b$ such that $e^{5B_t - bt}$ is a martingale | You are almost there. Write $bt=bs+b(t-s)$ and think how the second summand relates to $25/2(t-s)$.
53,752 | Approximate mean from .25, .5, .75 percentiles | Without imposing some kind of assumption, we can say almost nothing.
It's possible for the population mean to be anything at all - any value on the real line... or possibly infinite, or undefined.
If the underlying distribution is symmetric and unimodal and more or less normalish, your formula would do reasonably well, but symmetry is a pretty strong assumption.
To see that you can get any finite mean at all - even with a sample - consider a sample consisting of five each of the values 1, 2, 3, 4 and 5 and one additional value.
The estimator you propose should always be 3 no matter what the 26th value is. To get any finite mean $m$, choose a value for $m$ and make the additional value $26(m-3) + 3$.
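A quick R illustration of that example (the weights 0.25/0.5/0.25 are the estimator from the question):
trimean <- function(x) sum(quantile(x, c(.25, .5, .75)) * c(.25, .5, .25))
base <- rep(1:5, each = 5)
m <- 1000                                  # pick any target mean
x <- c(base, 26 * (m - 3) + 3)             # the additional 26th value from the text
c(estimate = trimean(x), mean = mean(x))   # the estimate stays at 3, the mean is 1000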
That said, for reasonably symmetric distributions whose mean exists, your suggested estimator (the trimean) often performs quite well as an estimate of the mean; it does well at the normal, and typically better for cases with more of a peak/heavier tail.
If you expect your population distribution to be close to normal, equal weight to the three quartiles is slightly more efficient [1]. If you think it could be substantially more peaked (but still symmetric, like a $t$ or a logistic for example), the trimean will generally be an excellent choice.
[1] Doyle, John R. and Chen, Catherine Huirong, (2009)
"On the efficiency of the Trimean and Q123"
Journal of Statistics and Management Systems,
12:2, pp319-323
53,753 | Approximate mean from .25, .5, .75 percentiles | Without knowing anything about the distribution your percentiles are coming from, I do not think you can tell much about the mean. What you can do is regard the three points you have as your data and calculate the weighted mean, which is what your formula describes.
I would not consider this measure as the mean of your original data though. It might be the best approximation, but the relationship between the mean and the median is heavily dependent on the distribution and the two quartiles don't contribute much to your knowledge.
53,754 | Advantages of taking the logarithm to minimize the likelihood | Numerical stability is by far the most important reason for using the log-likelihood instead of the likelihood. That reason alone is more than enough to choose the log-likelihood over the likelihood. Another reason that jumps to mind is that if there is an analytical solution then it is often much easier to find with the log-likelihood.
The likelihood function is typically a product of likelihood contributions by each observation. Taking the derivative of that will quickly lead to an unmanageable number of cross-product terms due to the product rule. In principle it is possible, but I don't want to be the person to keep track of all those terms.
The log-likelihood transforms that product of individual contributions to a sum of contributions, which is much more manageable due to the sum rule.
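A small R illustration of the numerical-stability point (simulated data and a standard normal model are assumed here):
set.seed(1)
x <- rnorm(1000)
prod(dnorm(x))              # underflows to 0: the raw likelihood is unusable here
sum(dnorm(x, log = TRUE))   # about -1400: the log-likelihood is perfectly well behaved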
53,755 | Advantages of taking the logarithm to minimize the likelihood | Gradient descent works best when the Hessian, i.e. the matrix of second derivatives, is proportional to the identity matrix; in other words, when the error surface is quadratic and the same in all directions. Remember that the step size is not a function of the error surface in gradient descent. If the second derivative is rapidly changing (e.g. for an exponential function) then convergence will be slow: a fixed step size will be too big for regions of high curvature and too small for regions of low curvature. So I would claim that taking the log-likelihood makes your optimisation surface closer to constant curvature for many typical distributions (e.g. the normal).
The way I got to grips with this is by doing a Taylor series expansion and considering the requirements for gradient descent to keep descending the error surface. You end up (from memory) choosing the step size as 1 / (the maximum eigenvalue of the Hessian of the error surface over the optimisation region).
53,756 | Newey-West standard errors when Durbin-Watson test results are fine | First, I would recommend using a software package that reports not only the Durbin-Watson test statistic but also a p-value. That might give you more of an indication of how close or far from 2 the statistic actually is. Furthermore, you may consider other tests for autocorrelation, e.g., Breusch-Godfrey. And if there is no evidence for autocorrelation (or heteroscedasticity) then the OLS standard errors are probably fine.
A pragmatic approach could also be to simply try whether using Newey-West standard errors makes a relevant difference. If the autocorrelations in the residuals are small, then Newey-West should lead to very similar results anyway.
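A sketch of both suggestions in R, using the lmtest and sandwich packages (the data are simulated with AR(1) errors purely for illustration):
library(lmtest); library(sandwich)
set.seed(1)
n <- 200
x <- rnorm(n)
e <- as.numeric(arima.sim(list(ar = 0.5), n))    # autocorrelated errors
y <- 1 + 2 * x + e
fit <- lm(y ~ x)
dwtest(fit)                                      # Durbin-Watson statistic with a p-value
bgtest(fit, order = 4)                           # Breusch-Godfrey test
coeftest(fit)                                    # ordinary OLS standard errors
coeftest(fit, vcov = NeweyWest(fit))             # Newey-West (HAC) standard errors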
53,757 | Variance of slope | For a standard linear regression that meets the normal assumptions, the variances of your parameter estimates can be taken from the variance covariance matrix, $\Sigma$. For example, the variance of the intercept is the first element on the main diagonal, $\Sigma_{11}$. The variance of the slope on $X_1$ is the second element on the main diagonal, $\Sigma_{22}$, and so on.
There are probably many ways to skin a cat, but the standard calculation for the variance covariance matrix uses the residual variance from your model and your design matrix. Then it is:
$$
\rm{VCov(model)} = s^2(X' X)^{-1}
$$
Here's a worked example of the calculations with your data and R:
x = c(1:6); y = c(18, 14, 15, 12, 7, 6); m = lm(y ~ x)
summary(m)
# Coefficients:
# Estimate Std. Error t value Pr(>|t|)
# (Intercept) 20.4000 1.4119 14.45 0.000133 ***
# x -2.4000 0.3625 -6.62 0.002700 **
#
# Residual standard error: 1.517 on 4 degrees of freedom
s = summary(m)$sigma; s # [1] 1.516575
dm = model.matrix(m); dm
# (Intercept) x
# 1 1 1
# 2 1 2
# 3 1 3
# 4 1 4
# 5 1 5
# 6 1 6
s^2*solve(t(dm)%*%dm)
# (Intercept) x
# (Intercept) 1.993333 -0.4600000
# x -0.460000 0.1314286
vcov(m) # you can see that this is the same as the manual calculation above
# (Intercept) x
# (Intercept) 1.993333 -0.4600000
# x -0.460000 0.1314286
sqrt(diag(vcov(m))) # these are the same standard errors as the summary output
# (Intercept) x
# 1.4118546 0.3625308
53,758 | Variance of slope | It's easy to do this for the multiple regression case:
\begin{align}
\text{Var}((X'X)^{-1}X'y) &= (X'X)^{-1}X'\,\text{Var}(y)\,X(X'X)^{-1} \\
&=(X'X)^{-1}X'(\sigma^2 I)X(X'X)^{-1} \\
&=\sigma^2(X'X)^{-1}X'X(X'X)^{-1} \\
&=\sigma^2(X'X)^{-1}
\end{align}
which is usually estimated by $s^2(X'X)^{-1}$
53,759 | Boosted trees and Variable Interactions | From this tutorial. See section 8 in particular.
Look up ?gbm.interactions. First construct your model, named angaus.tc5.lr005 in the tutorial.
angaus.tc5.lr005 <- gbm.step(data=Anguilla_train, gbm.x = 3:13, gbm.y = 2,
    family = "bernoulli", tree.complexity = 5,
    learning.rate = 0.005, bag.fraction = 0.5)
Then you will calculate the interactions in the model:
find.int <- gbm.interactions(angaus.tc5.lr005)
After this, you can access multiple attributes of your interaction, including the strength ($interaction) and the rank ($rank.list).
Have a look through the tutorial, I think it will answer most of your questions. I did not write it.
Edit: This may be out of date as of at least May 2018, please see the comments below.
53,760 | Boosted trees and Variable Interactions | Additionally, you might also look at ?interact.gbm from the gbm package which implements Friedman's (2005) approach for detecting interactions.
J.H. Friedman and B.E. Popescu (2005). “Predictive Learning via Rule Ensembles.” Section 8.1
53,761 | When would you use an I-optimal design over a D-optimal design? | D-optimality is related to the covariance matrix of the parameter estimates, so if you wanted to identify which factors aren't significant (factor screening experiment) it's a natural choice. I-optimality minimizes the average prediction variance of your model over a region of measurement parameters, so it is more naturally applied when you know the form of the model and want "good" prediction over your design space. For this reason, I-optimality finds more use in a response surface optimization context.
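For reference, the standard way to write the two criteria (not taken from the answer itself): with model matrix $X$ and regression function $f(x)$, a D-optimal design maximizes $\det(X'X)$, while an I-optimal design minimizes the average prediction variance over the design region $\chi$,
$$\frac{1}{\operatorname{vol}(\chi)}\int_{\chi} f(x)'(X'X)^{-1}f(x)\,dx.$$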
In reality, the prediction variance and the parameter estimate covariance are related. There's an interesting equivalence theorem (for designs as measures) between minimizing the maximum prediction variance (G-optimality) and D-optimality due to Kiefer and Wolfowitz.
53,762 | Which distribution to use for a probability problem? | The following analysis illustrates one approach to obtaining a solution. At least it might help show how to work with the Poisson distribution.
To answer this question constructively and clearly, let's make a few simplifying assumptions to avoid getting bogged down in details that haven't been described. For instance, you might choose to
assume that a "breakdown" is an event with such a short duration that a machine is back in operation immediately after a breakdown; and
therefore the same machine could break down multiple times during a week (although this might be a rare event).
As apparently intended by the question, we will make some additional stronger assumptions. Some such (modeling) assumptions are needed to make any progress at all with the answer. Their chief purpose would be to give us a point of departure for eliciting additional information from the plant engineers so we could develop improved models and better answers:
All machines independently have the same chances of breaking down and
those chances do not vary over time.
These assumptions imply the number of breakdowns observed among any number $N$ of machines during any period of $x$ weeks has a Poisson$(\lambda N x)$ distribution, where $\lambda$ is a parameter common to all machines at all times. The question tells us about the breakdown rate for $x = 1$ week:
$$\lambda \times N \times (1\text{ week}) = 2.$$
Therefore
$$\lambda = 2 / (N \text{ machine-weeks}).$$
In a random sample of $26$ such machines, the number of breakdowns in a week will have a Poisson distribution with parameter
$$\mu = \lambda\times (26\text{ machines})\times (1\text{ week}) = 26\lambda = 52/N.$$
From the formula for Poisson probabilities, the chance of no breakdowns among these $26$ machines is
$$e^{-\mu}\mu^0/0! = e^{-\mu} = e^{-26\lambda} = e^{-52/N}.$$
Since $N\ge 26$, this value cannot exceed $e^{-52/26}=e^{-2}\approx 0.135$, but as $N$ grows large it could become arbitrarily small.
This is not a final answer. It only shows the implication of four assumptions that were made upon interpreting the question in terms of the chance of no breakdowns in a week. (Other interpretations of the question are possible, due to the contorted syntax used to pose it.) In particular, the dependence upon the unknown total number of machines is clear and explicit. This is about as far as one can go, given the limited information supplied in the question.
A simulation (covering almost 200 years of operation) illustrates the ideas. Its output consists of two histograms: the weekly breakdown counts for all $N$ machines and the counts for the sample of the machines. Here is an example for $N=60$:
On each histogram are drawn two vertical lines: a gray one indicating the location of the actual rate (as given by the preceding solution) and a dashed red one indicating the average rate during the simulation. In each case those lines are visibly coincident, showing that the simulation and the preceding analysis are in agreement.
Studying the R code that produced this simulation may help clarify the ideas.
n <- 60 # Number of machines
sample.size <- 26 # Must be less than or equal to n
weekly.mean <- 2 # Events per week, on average
n.iter <- 1e4 # Size of this simulation in weeks
set.seed(17) # Reproduce these results exactly
#
# Simulate all machines.
#
lambda <- weekly.mean/n # Weekly breakdown rate per machine
x <- matrix(rpois(n.iter*n, lambda), nrow=n) # Breakdowns by machine by week
weekly.breakdowns <- colSums(x) # Total breakdowns each week
sample.breakdowns <- colSums(x[1:sample.size, ]) # Total breakdowns in the sample
#
# Plot the results.
#
par(mfrow=c(1,2))
eps <- 0.99
hist(weekly.breakdowns, breaks=(-1):max(weekly.breakdowns)+eps,
freq=FALSE, cex.main=0.9)
abline(v=lambda * n, lwd=2, col="Gray")
abline(v=mean(weekly.breakdowns), col="Red", lwd=3, lty=3)
mu <- weekly.mean * sample.size / n
hist(sample.breakdowns, breaks=(-1):max(sample.breakdowns)+eps,
freq=FALSE, cex.main=0.9)
abline(v=mu, lwd=2, col="Gray")   # mu is already the expected count for the sampled machines
abline(v=mean(sample.breakdowns), col="Red", lwd=3, lty=3)
53,763 | Which distribution to use for a probability problem? | Both the Poisson and the binomial distribution can model counts, but they do so differently. You assume that each machine can break down or not at most once each week. That suggests a particular distribution is more appropriate.
From this, and your calculations, you believe the right distribution is binom(26, 1/13). You should be able to calculate probabilities directly from that distribution, now that you have settled on one.
The probability you are asked to calculate is somewhat ambiguous. The phrasing in the body of the question ("the probability of no machine breakdowns") is inconsistent with what you report in the comments ("the probability that at least one machine doesn't break down"). These two would be $P(X=0\mid{\rm binom}(26, 1/13))$ and $P(X\le 25\mid{\rm binom}(26, 1/13))$, respectively, which are not equal. Regardless of which one you were asked for, both are straightforward to calculate from the distribution you have decided on.
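Both quantities are one-liners once the distribution is fixed, e.g. in R:
dbinom(0, size = 26, prob = 1/13)     # P(X = 0), no breakdowns at all
pbinom(25, size = 26, prob = 1/13)    # P(X <= 25), at least one machine doesn't break down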
53,764 | How to determine the sign of R underlying R-squared? | In multiple regression, $R$ represents the correlation between $\hat{y}$ and $y$, and as such is always non-negative.
e.g. see Wikipedia on $R^2$
[...] $R^2$ equals the square of the Pearson correlation coefficient between the observed and modeled (predicted) data values of the dependent variable.
In simple regression you could regard $R$ as more than one thing (since $R^2$ can be seen meaningfully as the square of more than one correlation), in which case you need to say which thing you want. If you mean the correlation between $\hat{y}$ and $y$, that's still non-negative. If you mean the correlation between $y$ and $x$, the sign will be the same as the sign of the regression coefficient.
53,765 | How do I generate two correlated Poisson random variables? | Since you do not impose any constraint on the joint distribution, any copula structure gives you a solution. For instance,
take a bivariate normal generation
$$(x_1,x_2)\sim\mathcal{N}_2\left((0,0),\left[\matrix{1 &\rho\\\rho &1}\right]\right);$$
turn $(x_1,x_2)$ into correlated uniforms as $$(u_1,u_2)=(\Phi(x_1),\Phi(x_2)),$$ where $\Phi(\cdot)$ is the standard normal CDF;
derive two Poisson variates $(n_1,n_2)$ with parameters $\lambda_1$ and $\lambda_2$ from $(u_1,u_2)$ by inverting the Poisson CDF.
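A direct R translation of these three steps (the values of $\rho$, $\lambda_1$ and $\lambda_2$ are arbitrary choices for illustration):
set.seed(1)
n <- 1e4; rho <- 0.7; lambda1 <- 3; lambda2 <- 10
z1 <- rnorm(n)
z2 <- rho * z1 + sqrt(1 - rho^2) * rnorm(n)          # bivariate normal with correlation rho
u1 <- pnorm(z1); u2 <- pnorm(z2)                     # correlated uniforms
n1 <- qpois(u1, lambda1); n2 <- qpois(u2, lambda2)   # invert the Poisson CDFs
cor(n1, n2)                                          # strongly positive, though a bit below rho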
53,766 | Difference between ANOVA and permutation test | Randomization techniques might be more powerful than ANOVA in some circumstances, but not necessarily. See "Comparative Power Of The Anova, Randomization Anova, And Kruskal-Wallis Test". Randomization techniques are usually more powerful than non-parametric rank transformation tests like Kruskal-Wallis. See Adams and Anthony, "Using randomization techniques to analyse behavioural data". I think their main strength though is that they don't invoke assumptions related to normality; non-parametric methods in general (bootstrap, jackknife, randomization) can handle data from any distribution. It has been argued that for many study designs randomization tests offer conceptual advantages over classical hypothesis tests because available items or subjects are generally randomized to treatments rather than truly randomly selected from a population and then randomly assigned to treatment; see Ludbrook & Hugh Dudley, "Why Permutation Tests are Superior to t and F Tests in Biomedical Research". There are lower limits to sample size, and I've seen writers lament that people will take the advice that non-parametric methods "work well with small sample sizes" and use them with sample sizes < 5.
You should probably also assess to what extent you are violating ANOVA assumptions; perhaps a generalized linear model with a binomial (logistic regression), poisson or negative binomial distribution would be more appropriate if your response variable is, for example, a count.
53,767 | Difference between ANOVA and permutation test | I have read that permutation test is preferred to use when sample size is relatively small and when ANOVA fails to capture the difference due to this fact.
I see several difficulties with this statement, of which I'll mention a few:
how can you tell if it's "due to this fact" rather than due to something else?
if sample sizes are very small (particularly with ties), permutation tests only have a small number of possible significance levels, and the smallest possible significance level may be substantially larger than the desired level; this problem isn't shared by ANOVA.
if the ANOVA assumptions are suitable, a permutation test won't necessarily give you any additional power.
Permutation tests may be particularly useful when the actual distribution is non-normal** - if it's heavy-tailed for example - better power might sometimes be had by a rank-based permutation test. This benefit might be quite noticeable when sample sizes are moderately small (but not too small, perhaps).
** but not too heavily non-normal, because then the test statistic is not robust. It's still a valid test (i.e. it's level-robust), but if the distribution is so non-normal that the sample mean is a very inefficient estimate of the population mean, then the test will be low-power. If the distribution is fairly close to normal and the sample sizes aren't tiny, then it's a very good choice.
But there are also robust methods, GLMs, bootstrapping to name some alternative possibilities.
53,768 | K-means++ algorithm | Here is my code, in Mathematica:
data = {{7, 1}, {3, 4}, {1, 5}, {5, 8}, {1, 3}, {7, 8}, {8, 2}, {5, 9}, {8, 0}};
centers = {};
RelativeWeights = Table[1/Length[data], {Length[data]}];
Table[
centers =
Union[RandomChoice[RelativeWeights -> data , 1], centers];
data = Complement[data, centers];
RelativeWeights =
Normalize@(EuclideanDistance[#1, Nearest[centers, #1][[1]]]^2 & /@
data);
{centers, data},
{3}] // TableForm
Here is how it works:
The data set is defined
We start with no centers
The RelativeWeights (which govern the probability that a current data point will be selected to be a new, additional center) are initially set to be equal for all the current data points. Incidentally, Length[data] is merely the number of elements in the list called data.
Now, Table is an iterator (like a DO statement), which here runs through the algorithm k times, where I set k = 3
RandomChoice chooses 1 member of the set data according to the RelativeWeights, and adds this chosen center to the list of centers, and removes it from data
The RelativeWeights are updated by taking each element in data in turn, and finding the Nearest element in the current list of centers, then computing its EuclideanDistance (squared) to that point. Your problem stated "distance," but the Wikipedia page for the algorithm stated distance squared. (The weights are then normalized to 1, to make a true discrete distribution, but this step is not needed as Mathematica automatically normalizes in this case)
Then we store the current list of centers and data.
$\left(
\begin{array}{cc}
\left(
\begin{array}{cc}
1 & 5 \\
\end{array}
\right) & \left(
\begin{array}{cc}
1 & 3 \\
3 & 4 \\
5 & 8 \\
5 & 9 \\
7 & 1 \\
7 & 8 \\
8 & 0 \\
8 & 2 \\
\end{array}
\right) \\
\left(
\begin{array}{cc}
1 & 5 \\
7 & 8 \\
\end{array}
\right) & \left(
\begin{array}{cc}
1 & 3 \\
3 & 4 \\
5 & 8 \\
5 & 9 \\
7 & 1 \\
8 & 0 \\
8 & 2 \\
\end{array}
\right) \\
\left(
\begin{array}{cc}
1 & 3 \\
1 & 5 \\
7 & 8 \\
\end{array}
\right) & \left(
\begin{array}{cc}
3 & 4 \\
5 & 8 \\
5 & 9 \\
7 & 1 \\
8 & 0 \\
8 & 2 \\
\end{array}
\right) \\
\end{array}
\right)$
As you can see, the first data point chosen to be a center was (1,5), and the other points remained. Next, the point (7,8) was chosen and added to the list of centers, and so on.
The precise mathematics behind the third step is straightforward: For each point in data, find its distance to the nearest center, $d_i$ (strictly, the squared distance, which is what the code above and the standard algorithm use). If there are $r$ elements currently in data, then you have $r$ distances--one for each point. The overall goal of kmeans++ is to choose new points from data that are FAR from existing centers, so we want to increase the probability of being chosen for points in data that are far from any center.
We do this as follows: We sum up all the $r$ distances to get $s_{tot}$:
$s_{tot} = \sum_{i=1}^r d_i$.
For each point in data, we compute its distance divided by $s_{tot}$ and set that to the probability that point will be chosen as a new, additional, center:
$p_i = d_i/s_{tot}$.
Notice that the sum of all $r$ probabilities will add up to 1.0, as is required for a true probability.
Now, we want to choose a new point in data proportional to its probability $p_i$. Because of how we computed $p_i$, points far from any center (i.e., with large $p_i$) are more likely to be chosen than points with small $p_i$---just as the algorithm wants.
You can implement such a probability proportional selection by dividing up the unit interval ($0 \rightarrow 1$) by segments of length $p_i$ and uniformly choosing a value between 0 and 1 and finding which interval (i.e., which point) the random selected value lands in.
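Since other answers in this document use R, here is a minimal sketch of the same seeding idea in base R (the function name kmeanspp_centers, the use of sample() for the weighted draw, and the final call on the example points are my own illustration, not part of the original Mathematica code):
# k-means++ seeding: weights are squared distances to the nearest current center
kmeanspp_centers <- function(data, k) {
  centers <- data[sample(nrow(data), 1), , drop = FALSE]   # first center chosen uniformly
  while (nrow(centers) < k) {
    # squared Euclidean distance from every point to its nearest current center
    d2 <- apply(data, 1, function(p) min(colSums((t(centers) - p)^2)))
    # sample() normalizes the weights internally, much like RandomChoice above
    new_idx <- sample(nrow(data), 1, prob = d2)
    centers <- rbind(centers, data[new_idx, , drop = FALSE])
  }
  centers
}
pts <- matrix(c(7,1, 3,4, 1,5, 5,8, 1,3, 7,8, 8,2, 5,9, 8,0), ncol = 2, byrow = TRUE)
kmeanspp_centers(pts, 3)
Points already chosen as centers get weight zero, so, like the Complement step in the Mathematica code, they cannot be selected twice.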
This is a full, precise mathematical explanation of kmeans++. I cannot see that there is anything more needed to describe it, especially since the working code is present for all. | K-means++ algorithm | Here is my code, in Mathematica:
data = {{7, 1}, {3, 4}, {1, 5}, {5, 8}, {1, 3}, {7, 8}, {8, 2}, {5, 9}, {8, 0}};
centers = {};
RelativeWeights = Table[1/Length[data], {Length[data]}];
Table[
center | K-means++ algorithm
Here is my code, in Mathematica:
data = {{7, 1}, {3, 4}, {1, 5}, {5, 8}, {1, 3}, {7, 8}, {8, 2}, {5, 9}, {8, 0}};
centers = {};
RelativeWeights = Table[1/Length[data], {Length[data]}];
Table[
centers =
Union[RandomChoice[RelativeWeights -> data , 1], centers];
data = Complement[data, centers];
RelativeWeights =
Normalize@(EuclideanDistance[#1[[1]], Nearest[centers, #1]]^2 & /@
data);
{centers, data},
{3}] // TableForm
Here is how it works:
The data set is defined
We start with no centers
The RelativeWeights (which govern the probability that a current data point will be selected to be a new, additional center) are initially set to be equal for all the current data points. Incidentally, Length[data] is merely the number of elements in the list called data.
Now, Table is an iterator (like a DO statement), which here runs through the algorithm k times, where I set k = 3
RandomChoice chooses 1 member of the set data according to the RelativeWeights, and adds this chosen center to the list of centers, and removes it from data
The RelativeWeights are updated by taking each element in data in turn, and finding the Nearest element in the current list of centers, then computing its EuclideanDistance (squared) to that point. Your problem stated "distance," but the Wikipedia page for the algorithm stated distance squared. (The weights are then normalized to 1, to make a true discrete distribution, but this step is not needed as Mathematica automatically normalizes in this case)
Then we store the current list of centers and data.
$\left(
\begin{array}{cc}
\left(
\begin{array}{cc}
1 & 5 \\
\end{array}
\right) & \left(
\begin{array}{cc}
1 & 3 \\
3 & 4 \\
5 & 8 \\
5 & 9 \\
7 & 1 \\
7 & 8 \\
8 & 0 \\
8 & 2 \\
\end{array}
\right) \\
\left(
\begin{array}{cc}
1 & 5 \\
7 & 8 \\
\end{array}
\right) & \left(
\begin{array}{cc}
1 & 3 \\
3 & 4 \\
5 & 8 \\
5 & 9 \\
7 & 1 \\
8 & 0 \\
8 & 2 \\
\end{array}
\right) \\
\left(
\begin{array}{cc}
1 & 3 \\
1 & 5 \\
7 & 8 \\
\end{array}
\right) & \left(
\begin{array}{cc}
3 & 4 \\
5 & 8 \\
5 & 9 \\
7 & 1 \\
8 & 0 \\
8 & 2 \\
\end{array}
\right) \\
\end{array}
\right)$
As you can see, the first data point chosen to be a center was (1,5), and the other points remained. Next, the point (7,8) was chosen and added to the list of centers, and so on.
The precise mathematics behind the third step is straightforward: For each point in data, find its distance to the nearest center, $d_i$. If there are $r$ elements currently in data, then you have $r$ distances--one for each point. The overall goal of kmeans++ is to choose new points from data that are FAR from existing centers, so we want to increase the probability of being chosen for points in data that are far from any center.
We do this as follows: We sum up all the $r$ distances to get $s_{tot}$:
$s_{tot} = \sum_{i=1}^r d_i$ .
For each point in data, we compute its distance divided by $s_{tot}$ and set that to the probability that point will be chosen as a new, additional, center:
$p_i = d_i/s_{tot}$.
Notice that the sum of all $r$ probabilities will add up to 1.0, as is required for a true probability.
Now, we want to choose a new point in data proportional to its probability $p_i$. Because of how we computed $p_i$, points far from any center (i.e., with large $p_i$) are more likely to be chosen than points with small $p_i$---just as the algorithm wants.
You can implement such a probability proportional selection by dividing up the unit interval ($0 \rightarrow 1$) by segments of length $p_i$ and uniformly choosing a value between 0 and 1 and finding which interval (i.e., which point) the random selected value lands in.
This is a full, precise mathematical explanation of kmeans++. I cannot see that there is anything more needed to describe it, especially since the working code is present for all. | K-means++ algorithm
Here is my code, in Mathematica:
data = {{7, 1}, {3, 4}, {1, 5}, {5, 8}, {1, 3}, {7, 8}, {8, 2}, {5, 9}, {8, 0}};
centers = {};
RelativeWeights = Table[1/Length[data], {Length[data]}];
Table[
center |
53,769 | K-means++ algorithm | For step 3,
Choose one new data point at random as a new center, using a weighted
probability distribution where a point x is chosen with probability
proportional to $D(x)^2$.
Compute all the $D(x)^2$ values and convert them to an array of cumulative sums. That way each item is represented by a range proportional to its value. Then pick a uniform random number in that range and see which item it corresponds to (using a binary search).
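Here is a short R sketch of that lookup (my own illustration; findInterval() performs the binary search, and the numbers match the example that follows):
d2  <- c(2, 41, 74, 73, 58, 65, 4, 90)   # the D(x)^2 values
cum <- cumsum(d2)                        # 2 43 117 190 248 313 317 407
u    <- runif(1, 0, sum(d2))             # uniform draw on [0, 407)
pick <- findInterval(u, cum) + 1         # index of the item whose range contains u
pick                                     # e.g. u = 123.45 gives item 4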
For instance, you have:
D(x)^2 = [2, 41, 74, 73, 58, 65, 4, 90]
cumulative D(x)^2 = [2, 43, 117, 190, 248, 313, 317, 407]
So pick a random number from [0, 407). Say you pick 123.45. It falls in the range [117, 190) which corresponds to the 4th item. | K-means++ algorithm | For step 3,
Choose one new data point at random as a new center, using a weighted
probability distribution where a point x is chosen with probability
proportional to $D(x)^2$.
Compute all the $ | K-means++ algorithm
For step 3,
Choose one new data point at random as a new center, using a weighted
probability distribution where a point x is chosen with probability
proportional to $D(x)^2$.
Compute all the $D(x)^2$ values and convert them to an array of cumulative sums. That way each item is represented by a range proportional to its value. Then pick a uniform random number in that range and see which item it corresponds to (using a binary search).
For instance, you have:
D(x)^2 = [2, 41, 74, 73, 58, 65, 4, 90]
cumulative D(x)^2 = [2, 43, 117, 190, 248, 313, 317, 407]
So pick a random number from [0, 407). Say you pick 123.45. It falls in the range [117, 190) which corresponds to the 4th item. | K-means++ algorithm
For step 3,
Choose one new data point at random as a new center, using a weighted
probability distribution where a point x is chosen with probability
proportional to $D(x)^2$.
Compute all the $ |
53,770 | Kendall's tau for Clayton Copula | As pointed out by @whuber, the expression for the Copula is
$$C(u,v) = \big[\max \{u^{-\theta}+v^{-\theta}-1,\;0\}\big]^{-1/\theta} , \;\;\theta \in [-1,\infty), \theta\neq 0 $$
When $\theta >0$ we have $u^{-\theta}+v^{-\theta}-1>0$ over the whole joint support, and we can "ignore" the $\max$ operator. But when $\theta<0$ the $\max$ operator becomes effective, and this must be taken into account when performing the integration, since on subsets of the joint support the value of the copula is zero.
An alternative approach could be the following:
Treat the Copula itself as a (univariate) random variable (which in turn is a function of two other random variables):
$$T = C(U,V)$$
Then $T$ has a distribution function called "Kendall distribution function", and it is equal to
$$K_C(t) = t- \frac {\varphi (t)}{\varphi'(t)} \tag{1}$$
where $\varphi(t)$ is the copula's generator function, and the prime denotes the first derivative.
The relation between Kendall's tau and a copula is
$$\tau = 4 \cdot E[C(U,V)] - 1 \tag{2}$$
and from $(1)$ we have
$$E[C(U,V)] = E(T) = \int_0^1tdK_C(t)$$
Integrating by parts we have
$$\int_0^1tdK_C(t) = tK_C(t)\Big|_0^1 - \int_0^1K_C(t)dt = 1 -\int_0^1K_C(t)dt$$
So Kendall's tau now is
$$\tau = 4\cdot \left(1 -\int_0^1K_C(t)dt\right) - 1 = 3 - 4\int_0^1K_C(t)dt$$
Inserting the expression for $K_C(t)$ we have
$$\tau = 3 - 4\int_0^1\left[t- \frac {\varphi (t)}{\varphi'(t)}\right]dt = 1 + 4\int_0^1\frac {\varphi (t)}{\varphi'(t)}dt$$
which is a general expression for the calculation of Kendall's tau related to a copula.
For Clayton's Copula, its generator function is (for $\theta \neq 0$)
$$\varphi(t) = \frac1{\theta}\left(t^{-\theta}-1\right)$$
Completing the calculations, one arrives at $\tau = \theta / (\theta + 2)$.
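For completeness, the intermediate algebra is the following. From the generator above, $\varphi'(t) = -t^{-\theta-1}$, so
$$\frac {\varphi (t)}{\varphi'(t)} = \frac{\frac 1{\theta}\left(t^{-\theta}-1\right)}{-t^{-\theta-1}} = \frac{t^{\theta+1}-t}{\theta}, \qquad \int_0^1\frac {\varphi (t)}{\varphi'(t)}dt = \frac 1{\theta}\left(\frac{1}{\theta+2}-\frac 12\right) = -\frac{1}{2(\theta+2)},$$
and therefore
$$\tau = 1 + 4\int_0^1\frac {\varphi (t)}{\varphi'(t)}dt = 1 - \frac{2}{\theta+2} = \frac{\theta}{\theta+2}.$$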
Then , for $\theta = -1/2$ we have indeed that $\tau = -1/3$. | Kendall's tau for Clayton Copula | As pointed out by @whuber, the expression for the Copula is
$$C(u,v) = \big[\max \{u^{-\theta}+v^{-\theta}-1,\;0\}\big]^{-1/\theta} , \;\;\theta \in [-1,\infty), \theta\neq 0 $$
When $\theta >0 \Right | Kendall's tau for Clayton Copula
As pointed out by @whuber, the expression for the Copula is
$$C(u,v) = \big[\max \{u^{-\theta}+v^{-\theta}-1,\;0\}\big]^{-1/\theta} , \;\;\theta \in [-1,\infty), \theta\neq 0 $$
When $\theta >0 \Rightarrow u^{-\theta}+v^{-\theta}-1>0$ for the whole joint support, and we can "ignore" the $\max$ operator. But when $\theta<0$ the $\max$ operator becomes effective, and this must be taken into account when performing the integration, since for subsets of the joint support the value of the copula is zero.
An alternative approach could be the following:
Treat the Copula itself as a (univariate) random variable (which in turn is a function of two other random variables):
$$T = C(U,V)$$
Then $T$ has a distribution function called "Kendall distribution function", and it is equal to
$$K_C(t) = t- \frac {\varphi (t)}{\varphi'(t)} \tag{1}$$
where $\varphi(t)$ is the copula's generator function, and the prime denotes the first derivative.
The relation between Kendall's tau and a copula is
$$\tau = 4 \cdot E[C(U,V)] - 1 \tag{2}$$
and from $(1)$ we have
$$E[C(U,V)] = E(T) = \int_0^1tdK_C(t)$$
Integrating by parts we have
$$\int_0^1tdK_C(t) = tK_C(t)\Big|_0^1 - \int_0^1K_C(t)dt = 1 -\int_0^1K_C(t)dt$$
So Kendall's tau now is
$$\tau = 4\cdot \left(1 -\int_0^1K_C(t)dt\right) - 1 = 3 - 4\int_0^1K_C(t)dt$$
Inserting the expression for $K_C(t)$ we have
$$\tau = 3 - 4\int_0^1\left[t- \frac {\varphi (t)}{\varphi'(t)}\right]dt = 1 + 4\int_0^1\frac {\varphi (t)}{\varphi'(t)}dt$$
which is a general expression for the calculation of Kendall's tau related to a copula.
For Clayton's Copula, its generator function is (for $\theta \neq 0$)
$$\varphi(t) = \frac1{\theta}\left(t^{-\theta}-1\right)$$
Completing the calculations, one arrives at $\tau = \theta / (\theta + 2)$.
Then , for $\theta = -1/2$ we have indeed that $\tau = -1/3$. | Kendall's tau for Clayton Copula
As pointed out by @whuber, the expression for the Copula is
$$C(u,v) = \big[\max \{u^{-\theta}+v^{-\theta}-1,\;0\}\big]^{-1/\theta} , \;\;\theta \in [-1,\infty), \theta\neq 0 $$
When $\theta >0 \Right |
53,771 | Density plot of parameter estimates from linear regression model | You also could use bootstrap estimates.
library(boot)
f <- function(data, d) coef(lm(Sepal.Length ~ Petal.Length+Petal.Width , data=data[d,]))
boot.fit <- boot(iris, f, 1000)
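As an optional follow-up (my own sketch, not part of the original answer), the same boot object can also give percentile confidence intervals for the coefficients via boot.ci() from the boot package:
boot.ci(boot.fit, type = "perc", index = 2)   # Petal.Length coefficient
boot.ci(boot.fit, type = "perc", index = 3)   # Petal.Width coefficient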
Now, estimating the density for the Petal.Length coefficient as an example:
petal.density <- density(boot.fit$t[,2])
plot(petal.density, main = "Petal Length Density") | Density plot of parameter estimates from linear regression model | You also could use bootstrap estimates.
library(boot)
f <- function(data, d) coef(lm(Sepal.Length ~ Petal.Length+Petal.Width , data=data[d,]))
boot.fit <- boot(iris, f, 1000)
Now, estimating the den | Density plot of parameter estimates from linear regression model
You also could use bootstrap estimates.
library(boot)
f <- function(data, d) coef(lm(Sepal.Length ~ Petal.Length+Petal.Width , data=data[d,]))
boot.fit <- boot(iris, f, 1000)
Now, estimating the density for the Petal.Length coefficient as an example:
petal.density <- density(boot.fit$t[,2])
plot(petal.density, main = "Petal Length Density") | Density plot of parameter estimates from linear regression model
You also could use bootstrap estimates.
library(boot)
f <- function(data, d) coef(lm(Sepal.Length ~ Petal.Length+Petal.Width , data=data[d,]))
boot.fit <- boot(iris, f, 1000)
Now, estimating the den |
53,772 | Density plot of parameter estimates from linear regression model | Under usual conditions the parameter estimates end up being asymptotically normal. You can find the proof in any econometrics textbook. Additionally, if your errors are normal, then parameters would be normal even in small samples.
So, assuming that the parameter estimates are normal, you can graph them with any plotting function. Draw a normal distribution with mean and standard error equal to parameter estimate and its standard error. | Density plot of parameter estimates from linear regression model | Under usual conditions the parameter estimates end up being asymptotically normal. You can find the proof in any econometrics textbook. Additionally, if your errors are normal, then parameters would b | Density plot of parameter estimates from linear regression model
Under usual conditions the parameter estimates end up being asymptotically normal. You can find the proof in any econometrics textbook. Additionally, if your errors are normal, then parameters would be normal even in small samples.
So, assuming that the parameter estimates are normal, you can graph them with any plotting function. Draw a normal distribution with mean and standard error equal to parameter estimate and its standard error. | Density plot of parameter estimates from linear regression model
Under usual conditions the parameter estimates end up being asymptotically normal. You can find the proof in any econometrics textbook. Additionally, if your errors are normal, then parameters would b |
53,773 | $R^2$ of linear regression with no variation in the response variable | The following plots are accompanied by their Pearson product-moment correlation coefficients (image credit):
If the points lie exactly on an upwards sloping line then the Pearson correlation is +1, if they lie exactly on a downwards sloping line the correlation is -1. But notice that the horizontal line has an undefined correlation.
At first sight you might expect this to be zero, as a compromise between +1 and -1. You may have thought that since positive correlation means "as one variable increases, the other tends to increase" while negative correlation means "as one variable increases, the other tends to decrease", the fact that $Y$ neither tends to increase nor decrease as $X$ increases means that $r=0$. That idea is correct for the other plots labelled $r=0$, but they all exhibited variation in $Y$. Correlation is symmetric: the correlation between $X$ and $Y$ is the same as that between $Y$ and $X$. Turning things around, in the $r=0$ plots we see that as $Y$ increases, $X$ neither tends to increase nor decrease. But in our case what happens to $X$ as $Y$ changes? We just don't know! We certainly can't claim (as $r=0$ would imply) that $X$ would neither tend to increase nor decrease. We never got a chance to see it, because $Y$ never varied. Intuitively, there's no way we can determine the correlation from the available data.
More technically, consideration of the formula for PMCC should clarify things:
$$r = \frac{\text{Covariance of X and Y}}{\text{SD of X} \times \text{SD of Y}}$$
where "SD" stands for standard deviation. On a completely horizontal line, the standard deviation of $Y$ is zero because that variable does not vary at all. So we have zero in the denominator. Also, since $X$ and $Y$ cannot co-vary, the covariance is zero, and so the numerator is zero too. Hence the fraction is $\frac{0}{0}$, which is an indeterminate form, and so the correlation coefficient is not defined.
In a simple linear regression model (only one response and one predictor variable plus an intercept), the coefficient of determination $R^2$ is simply the square of $r$, the PMCC between $X$ and $Y$. Unsurprisingly, this will not be defined either. This is intuitive if we think about $R^2$ as the proportion of variance explained - here the response variable has no variation, so we can explain 0 out of 0 variance, which as a proportion brings us back to the indeterminate form $\frac{0}{0}$.
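A small illustration in R (hypothetical data; with a constant recorded response you would likely see an NA correlation, with a warning, and an NaN R-squared):
x <- 1:20
y <- rep(0, 20)                 # response recorded as identically zero
cor(x, y)                       # NA: the standard deviation of y is zero
summary(lm(y ~ x))$r.squared    # NaN: the 0/0 discussed above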
This conclusion holds true regardless of whether the recorded data are all identically zero, or identically some other number, so long as it would give a horizontal line in a graph of $Y$ against $X$. Note that there may be a difference between the "true" values of $Y$ and those that have been recorded in the data set to the specified level of accuracy. It's possible in a case such as yours that the correct values of $Y$ all round to 0.0 to one decimal place, but if we had access to them to full accuracy, we may be able to observe very small deviations about 0. If that were the case then the actual PMCC and coefficient of determination would both exist, and (i) be approximately equal to zero if the small deviations were just "noise", (ii) be anything up to and including 1 if the small deviations formed an increasing trend indiscernible at the current level of accuracy, or (iii) be anything up to and including $r = -1$ and $R^2 = 1$ if they formed a currently indiscernible downwards trend.
In this answer I have only considered the case of simple linear regression, where the response depends on one explanatory variable. But the argument also applies to multiple regression, where there are several explanatory variables. I'll assume the model includes an intercept term, since dropping the intercept is rarely a good idea and even with a model without an intercept, it's unlikely you want to calculate $R^2$. So long as the intercept is included in the model, then $R^2$ is just the square of multiple correlation coefficient $R$, which is the PMCC between the observed values of the response $Y$ and the values fitted by the model. If $Y$ shows no variation (at least to the recorded accuracy) then the same considerations prevent you calculating $R$ and hence $R^2$. | $R^2$ of linear regression with no variation in the response variable | The following plots are accompanied by their Pearson product-moment correlation coefficients (image credit):
If the points lie exactly on an upwards sloping line then the Pearson correlation is +1, i | $R^2$ of linear regression with no variation in the response variable
The following plots are accompanied by their Pearson product-moment correlation coefficients (image credit):
If the points lie exactly on an upwards sloping line then the Pearson correlation is +1, if they lie exactly on a downwards sloping line the correlation is -1. But notice that the horizontal line has an undefined correlation.
At first sight you might expect this to be zero, as a compromise between +1 and -1. You may have thought that since positive correlation means "as one variable increases, the other tends to increase" while negative correlation means "as one variable increases, the others tends to decrease", the fact that $Y$ neither tends to increase nor decrease as $X$ increases means that $r=0$. That idea is correct for the other plots labelled $r=0$, but they all exhibited variation in $Y$. Correlation is symmetric: the correlation between $X$ and $Y$ is the same as that between $Y$ and $X$. Turning things around, in the $r=0$ plots we see that as $Y$ increases, $X$ neither tends to increase nor decrease. But in our case what happens to $X$ as $Y$ changes? We just don't know! We certainly can't claim (as $r=0$ would imply) that $X$ would neither tend to increase nor decrease. We never got a chance to see it, because $Y$ never varied. Intuitively, there's no way we can determine the correlation from the available data.
More technically, consideration of the formula for PMCC should clarify things:
$$r = \frac{\text{Covariance of X and Y}}{\text{SD of X} \times \text{SD of Y}}$$
where "SD" stands for standard deviation. On a completely horizontal line, the standard deviation of $Y$ is zero because that variable does not vary at all. So we have zero on the denominator. Also since $X$ and $Y$ can not co-vary, then the covariance is zero, and the numerator is zero also. Hence the fraction is $\frac{0}{0}$ which is an indeterminate form and so the correlation coefficient is not defined.
In a simple linear regression model (only one response and one predictor variable plus an intercept), the coefficient of determination $R^2$ is simply the square of $r$, the PMCC between $X$ and $Y$. Unsurprisingly, this will not be defined either. This is intuitive if we think about $R^2$ as the proportion of variance explained - here the response variable has no variation, so we can explain 0 out of 0 variance, which as a proportion brings us back to the indeterminate form $\frac{0}{0}$.
This conclusion holds true regardless of whether the recorded data are all identically zero, or identically some other number, so long as it would give a horizontal line in a graph of $Y$ against $X$. Note that there may be a difference between the "true" values of $Y$ and those that have been recorded in the data set to the specified level of accuracy. It's possible in a case such as yours that the correct values of $Y$ all round to 0.0 to one decimal place, but if we had access to them to full accuracy, we may be able to observe very small deviations about 0. If that were the case then the actual PMCC and coefficient of determination would both exist, and (i) be approximately equal to zero if the small deviations were just "noise", (ii) be anything up to and including 1 if the small deviations formed an increasing trend indiscernible at the current level of accuracy, or (iii) be anything up to and including $r = -1$ and $R^2 = 1$ if they formed a currently indiscernible downwards trend.
In this answer I have only considered the case of simple linear regression, where the response depends on one explanatory variable. But the argument also applies to multiple regression, where there are several explanatory variables. I'll assume the model includes an intercept term, since dropping the intercept is rarely a good idea and even with a model without an intercept, it's unlikely you want to calculate $R^2$. So long as the intercept is included in the model, then $R^2$ is just the square of multiple correlation coefficient $R$, which is the PMCC between the observed values of the response $Y$ and the values fitted by the model. If $Y$ shows no variation (at least to the recorded accuracy) then the same considerations prevent you calculating $R$ and hence $R^2$. | $R^2$ of linear regression with no variation in the response variable
The following plots are accompanied by their Pearson product-moment correlation coefficients (image credit):
If the points lie exactly on an upwards sloping line then the Pearson correlation is +1, i |
53,774 | $R^2$ of linear regression with no variation in the response variable | As $R^2$ is "variance explained", then as $0 = 0 + 0 \times x$ has variance of 0, the same as variance of $y$, so we could think of it as 100% variance explained, i.e. $R^2 = 1$. On another hand, as you notice $0/0$ is indeterminate and does not make sense, as this model does not either. As goangit mentioned, this kind of model doesn't meet criteria for regression and it is not even a linear function of $x$, but rather a constant function, so by definition this is not a regression problem. | $R^2$ of linear regression with no variation in the response variable | As $R^2$ is "variance explained", then as $0 = 0 + 0 \times x$ has variance of 0, the same as variance of $y$, so we could think of it as 100% variance explained, i.e. $R^2 = 1$. On another hand, as y | $R^2$ of linear regression with no variation in the response variable
Since $R^2$ is "variance explained", and $0 = 0 + 0 \times x$ has variance 0 (the same as the variance of $y$), we could think of it as 100% of the variance explained, i.e. $R^2 = 1$. On the other hand, as you notice, $0/0$ is indeterminate and does not make sense, and neither does this model. As goangit mentioned, this kind of model doesn't meet the criteria for regression and it is not even a linear function of $x$, but rather a constant function, so by definition this is not a regression problem. | $R^2$ of linear regression with no variation in the response variable
As $R^2$ is "variance explained", then as $0 = 0 + 0 \times x$ has variance of 0, the same as variance of $y$, so we could think of it as 100% variance explained, i.e. $R^2 = 1$. On another hand, as y |
53,775 | $R^2$ of linear regression with no variation in the response variable | For the General Linear Model to be suitable the data need to meet certain criteria:
The mean response is a linear function of the predictors.
Model residuals are conditionally independent.
Model residuals are distributed with conditional mean zero.
Model residuals have constant conditional variance.
Model residuals are conditionally normal in distribution.
The response you describe fails to meet (at least) criterion 5, so the General Linear Model does not apply to this example.
As discussed in the comments, this does not prevent the calculation of RSS, which you have already noted is 0/0, an indeterminate form. | $R^2$ of linear regression with no variation in the response variable | For the General Linear Model to be suitable the data need to meet certain criteria:
The mean response is a linear function of the predictors.
Model residuals are conditionally independent.
Model res | $R^2$ of linear regression with no variation in the response variable
For the General Linear Model to be suitable the data need to meet certain criteria:
The mean response is a linear function of the predictors.
Model residuals are conditionally independent.
Model residuals are distributed with conditional mean zero.
Model residuals have constant conditional variance.
Model residuals are conditionally normal in distribution.
The response you describe fails to meet (at least) criterion 5, so the General Linear Model does not apply to this example.
As discussed in the comments, this does not prevent the calculation of RSS, which you have already noted is 0/0, an indeterminate form. | $R^2$ of linear regression with no variation in the response variable
For the General Linear Model to be suitable the data need to meet certain criteria:
The mean response is a linear function of the predictors.
Model residuals are conditionally independent.
Model res |
53,776 | Robust estimates of the covariance matrix in the big data space | First of all, it is important to point out that, most likely, you will be using the FastMCD[0] or FastMVE[1,p199] algorithms which are random approximations to the actual MCD and MVE estimators. The quality (and specifically, the actual robustness to outliers) of these approximations as well as the computational efforts that need to be deployed to get them chiefly depends on a parameter that determines the number of random start used by both algorithms.
Typically this number is set to 500 by default in most implementations but can be changed by the user. In order to guarantee maximal robustness of the algorithms, it should grow as $O(2^p)$ (see [2] for the exact formula), where $p$ is the number of variables in your data set. So, using the prescribed number of subsets for values of $p$ beyond 30 is impractical and one has to accept the resulting loss in robustness (the loss in robustness can be computed using the formula in [2]).
On the other hand, for fixed $p$, the computational costs of obtaining the FastMCD and FastMVE fits grow sub-linearly in $n$ (the number of observations) thanks to a trick called random sub-sampling (see [0], section 3.3 for an explanation). Consequently, the computational costs of obtaining the FastMCD/FastMVE fits are essentially determined by $p$ (through the number of random starts). Now, despite being very similar in principle, the FastMVE algorithm is somewhat simpler than FastMCD and takes about a fourth of the time to compute for an equal number of starting subsets when $n$ is large.
--the exact MCD and MVE are not random but need on the order of ${n\choose p+1}$ starting points, which, except in cases where both $n$ and $p$ are small, is not of much practical use (though most implementations allow you to ask for them; see the linked answer for an example of how to do this in rrcov).
You will find good, open source, C implementations of FastMCD and FastMVE in the R package rrcov. Other, older implementations of FastMCD exist in the Matlab library Libra.
Now, for problems with a number of variables comparable to yours, you might want to look at the OGK. The OGK is a robust estimator of scatter that shuns a key (and computationally expensive!) property of FastMCD and FastMVE: affine equivariance. In return, the OGK fit is much cheaper to compute (between one and two orders of magnitude, depending on $p$ and $n$) and it is deterministic. An open source, C implementation of OGK is also available in the R package rrcov. See [2] for an empirical comparison of all, and some more of, these methods.
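In R the calls might look like the following (a sketch only; the data dimensions are made up, and CovMcd(), CovOgk(), getCov() and getCenter() are the rrcov constructors and accessors I believe correspond to the estimators discussed above):
library(rrcov)
set.seed(1)
X <- matrix(rnorm(5000 * 20), ncol = 20)   # hypothetical data: n = 5000, p = 20
fit_mcd <- CovMcd(X, nsamp = 500)          # FastMCD with 500 random starts
fit_ogk <- CovOgk(X)                       # OGK: deterministic and much cheaper
getCov(fit_mcd)                            # robust covariance estimate
getCenter(fit_ogk)                         # robust location estimate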
I would also note that, if you are willing to use a rank $k,k<p$ approximation to the covariance matrix for a known value of $k$, a possibly appealing alternative would be to use a robust PCA method.
[0] P. J. Rousseeuw and K. van Driessen (1999) A fast algorithm for the minimum covariance determinant estimator. Technometrics 41, 212–223.
[1] R. A. Maronna, D. Martin and V. Yohai (2006). Robust Statistics: Theory and Methods. Wiley, New York.
[2] Hubert, M., Rousseeuw, P.J., Vakili, K. (2014).
Shape bias of robust covariance estimators: an empirical study. Statistical Papers, 55, 15-28.
[3] Maronna, R.A. and Zamar, R.H. (2002) Robust estimates of location and dispersion of high-dimensional datasets; Technometrics 44(4), 307--317. | Robust estimates of the covariance matrix in the big data space | First of all, it is important to point out that, most likely, you will be using the FastMCD[0] or FastMVE[1,p199] algorithms which are random approximations to the actual MCD and MVE estimators. The | Robust estimates of the covariance matrix in the big data space
First of all, it is important to point out that, most likely, you will be using the FastMCD[0] or FastMVE[1,p199] algorithms which are random approximations to the actual MCD and MVE estimators. The quality (and specifically, the actual robustness to outliers) of these approximations as well as the computational efforts that need to be deployed to get them chiefly depends on a parameter that determines the number of random start used by both algorithms.
Typically this number is set to 500 by default in most implantations but can be changed by the user. In order to guarantee maximal robustness of the algorithms, it should grow as $O(2^p)$ (see [2] for the exact formula), where $p$ is the number of variables in your data set. So, using the prescribed number of subset for values of $p$ beyond 30 is impractical and one has to accept the resulting loss in robustness (the loss in robustness can be computed using the formula in [2]).
On the other hand, for fixed $p$, the computational costs of obtaining the FastMCD and FastMVE fits growths sub-linearly in $n$ (the number of observations) thanks to a trick called random sub-sampling (see [0], section 3.3 for an explanation). Consequently, the computational costs of obtaining the FastMCD/FastMVE fits are essentially determined by $p$ (through the number of random starts). Now, despite being very similar in principle, the FastMVE algorithms is somewhat simpler than FastMCD and takes about a fourth of the time to compute for an equal number of starting subsets when $n$ is large.
--the exact MCD and MVE are not random but need on the order of ${n\choose p+1}$ starting points which, except in cases where both $n$ and $p$ are small, is not of much practical use (though, most implementation allow you to ask for them, see the linked answer for an example of how to do this in rrcov).
You will find good, open source, c implementations of FastMCD and FastMVE in the R packages rrcov. Other older implementations of FastMCD exists in the Matlab library Libra.
Now, for the problems with comparable number of variable as yours, you might want to look at the OGK. The OGK is a robust estimator of scatter that shuns a key (and computationally expensive!) properties of FastMCD and FastMVE: affine equivariance. In return, the OGK fit is much cheaper to compute (between one and two order of magnitude depending on $p$ and $n$) and it is deterministic. An open source, c, implementation of OGK is also available in the R package rrcov. See [2] for an empirical comparison of all, and some more of, these methods.
I would also note that, If you are willing to use a rank $k,k<p$ approximation to the covariance matrix for a known value of $k$, a possible appealing alternative would be to use a robust PCA method.
[0] P. J. Rousseeuw and K. van Driessen (1999) A fast algorithm for the minimum covariance determinant estimator. Technometrics 41, 212–223.
[1] R. A. Maronna, D. Martin and V. Yohai (2006). Robust Statistics: Theory and Methods. Wiley, New York.
[2] Hubert, M., Rousseeuw, P.J., Vakili, K. (2014).
Shape bias of robust covariance estimators: an empirical study. Statistical Papers, 55, 15-28.
[3] Maronna, R.A. and Zamar, R.H. (2002) Robust estimates of location and dispersion of high-dimensional datasets; Technometrics 44(4), 307--317. | Robust estimates of the covariance matrix in the big data space
First of all, it is important to point out that, most likely, you will be using the FastMCD[0] or FastMVE[1,p199] algorithms which are random approximations to the actual MCD and MVE estimators. The |
53,777 | Problem obtaining a marginal from the joint distribution | Draw pictures of the regions of integration.
The region where $0 \le x_1 \le 1, 0 \le x_2 \le 1,$ and $x_1 x_2 \le y$ (for $0 \le y \le 1$) looks like the shaded part of
The colors denote the varying values of the density $f(x_1,x_2)$, ranging from blue (low) to red (high).
The integral of $f(x_1,x_2)dx_1 dx_2 = 4 x_1 x_2 dx_1 dx_2$ is readily found by integrating separately over the rectangle to the left of the dashed line and the region to its right, which is bounded above by the curve $x_1 x_2 = y$; it gives
$$\Pr(Y_2 \le y) = y^2-2 y^2 \log (y).$$
Here is a plot of this distribution: it is the marginal CDF for $Y_2$.
Differentiate this to obtain the PDF of $Y_2$.
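Carrying out that differentiation gives, for reference,
$$f_{Y_2}(y) = \frac{d}{dy}\left[y^2-2 y^2 \log (y)\right] = 2y - 4y\log(y) - 2y = -4\,y \log (y), \qquad 0 \lt y \le 1.$$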
The region where $0 \le x_1 \le 1, 0 \le x_2 \le 1,$ and $x_1 / x_2 \le y$ (for $0 \le y \lt \infty$) looks like the shaded part of
The lower curve is a portion of the line $x_2 = x_1 / y.$
When $y \gt 1$ the integral of $4 x_1 x_2 dx_1 dx_2$ can be broken into the two triangles shown; when $y \le 1$ only an upper triangle appears. The marginal CDF of $Y_1$ works out to
$$\Pr(Y_1 \le y) = y^2/2, \ 0 \lt y \le 1; \quad 1 - 1/(2y^2),\ y \ge 1.$$
A partial plot of this marginal CDF for $Y_1$ is
The full plot extends infinitely far to the right. Differentiate this to obtain the PDF of $Y_1$, the first marginal. | Problem obtaining a marginal from the joint distribution | Draw pictures of the regions of integration.
The region where $0 \le x_1 \le 1, 0 \le x_2 \le 1,$ and $x_1 x_2 \le y$ (for $0 \le y \le 1$) looks like the shaded part of
The colors denote the varying | Problem obtaining a marginal from the joint distribution
Draw pictures of the regions of integration.
The region where $0 \le x_1 \le 1, 0 \le x_2 \le 1,$ and $x_1 x_2 \le y$ (for $0 \le y \le 1$) looks like the shaded part of
The colors denote the varying values of the density $f(x_1,x_2)$, ranging from blue (low) to red (high).
The integral of $f(x_1,x_2)dx_1 dx_2 = 4 x_1 x_2 dx_1 dx_2$ is readily found by integrating separately over the rectangle to the left of the dashed line and the region to its right, which is bounded above by the curve $x_1 x_2 = y$; it gives
$$\Pr(Y_2 \le y) = y^2-2 y^2 \log (y).$$
Here is a plot of this distribution: it is the marginal CDF for $Y_2$.
Differentiate this to obtain the PDF of $Y_2$.
The region where $0 \le x_1 \le 1, 0 \le x_2 \le 1,$ and $x_1 / x_2 \le y$ (for $0 \le y \lt \infty$) looks like the shaded part of
The lower curve is a portion of the line $x_2 = x_1 / y.$
When $y \gt 1$ the integral of $4 x_1 x_2 dx_1 dx_2$ can be broken into the two triangles shown; when $y \le 1$ only an upper triangle appears. The marginal CDF of $Y_1$ works out to
$$\Pr(Y_1 \le y) = y^2/2, \ 0 \lt y \le 1; \quad 1 - 1/(2y^2),\ y \ge 1.$$
A partial plot of this marginal CDF for $Y_1$ is
The full plot extends infinitely far to the right. Differentiate this to obtain the PDF of $Y_1$, the first marginal. | Problem obtaining a marginal from the joint distribution
Draw pictures of the regions of integration.
The region where $0 \le x_1 \le 1, 0 \le x_2 \le 1,$ and $x_1 x_2 \le y$ (for $0 \le y \le 1$) looks like the shaded part of
The colors denote the varying |
53,778 | Problem obtaining a marginal from the joint distribution | Just to add visual input, it is easily found that
$$F_X(x_i) = x_i^2$$
and since for $U_i \sim U(0,1)$
$$F^{-1}(U_i) = X_i \Rightarrow X_i = \sqrt{U_i}$$
the $X's$ are the square roots of uniform RV's in $(0,1)$. They are also independent. So simulate two uniforms, take the square root of each, and then take their product. The resulting empirical relative frequency curve is
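A quick way to run that simulation in R (my own sketch; the sample size of $10^5$ is arbitrary, and the overlaid curve is the density derived below):
set.seed(1)
y2 <- sqrt(runif(1e5)) * sqrt(runif(1e5))    # Y2 = X1 * X2 with X = sqrt(U)
hist(y2, breaks = 60, freq = FALSE, main = "Empirical density of Y2 = X1 X2")
curve(-4 * x * log(x), from = 1e-3, to = 1, add = TRUE, lwd = 2)   # -4 y ln y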
Now take the CDF of $Y_2 = X_1X_2$
$$F_{Y_2}(y_2) = y_2^2-2 y_2^2 \ln (y_2) \Rightarrow f_{Y_2}(y_2) =-4y_2\ln y_2$$
Graph this function in (0,1) to get
More mathematically, we have that $Y_2 = \sqrt {U_1U_2}$, i.e. it is the square root of the product of two independent standard uniform RV's. The density of the product of $n$ standard uniform independent RV's can be found here. For $n=2$ it is simply
$$f_{U_1U_2}(u_1u_2) = -\ln(u_1u_2)$$
For $Y_2 =\sqrt{ U_1U_2}$ we immediately obtain by the change of variable formula that
$f_{Y_2}(y_2) =-4y_2\ln y_2$. | Problem obtaining a marginal from the joint distribution | Just to add visual input, it is easily found that
$$F_X(x_i) = x_i^2$$
and since for $U_i \sim U(0,1)$
$$F^{-1}(U_i) = X_i \Rightarrow X_i = \sqrt{U_i}$$
the $X's$ are the square roots of uniform RV's | Problem obtaining a marginal from the joint distribution
Just to add visual input, it is easily found that
$$F_X(x_i) = x_i^2$$
and since for $U_i \sim U(0,1)$
$$F^{-1}(U_i) = X_i \Rightarrow X_i = \sqrt{U_i}$$
the $X's$ are the square roots of uniform RV's in $(0,1)$. They are also independent.So simulate two uniforms, and then take their product. The resulting empirical relative frequency curve is
Now take the CDF of $Y_2 = X_1X_2$
$$F_{Y_2}(y_2) = y^2-2 y^2 \log (y) \Rightarrow f_{Y_2}(y_2) =-4y_2\ln y_2$$
Graph this function in (0,1) to get
More mathematically, we have that $Y_2 = \sqrt {U_1U_2}$ i.e. it is the square root of the product of two independent standard uniform RV's. The density of the product of $n$ standard uniform independent RV's can be found here. . For $n=2$ it is simply
$$f_{U_1U_2}(u_1u_2) = -\ln(u_1u_2)$$
For $Y_2 =\sqrt{ U_1U_2}$ we immediately obtain by the change of variable formula that
$f_{Y_2}(y_2) =-4y_2\ln y_2$. | Problem obtaining a marginal from the joint distribution
Just to add visual input, it is easily found that
$$F_X(x_i) = x_i^2$$
and since for $U_i \sim U(0,1)$
$$F^{-1}(U_i) = X_i \Rightarrow X_i = \sqrt{U_i}$$
the $X's$ are the square roots of uniform RV's |
53,779 | Sample $R^2$ consistent? | As whuber noted, consistency of the $\widehat{R^2}$ should be first examined under the assumption of correct specification, as we usually do with all estimators. It is a separate matter to examine what happens to consistency under misspecification, i.e. under the inclusion of irrelevant variables or the exclusion of relevant variables, or functional misspecification.
The population $R^2$, in a $y = X\beta +u$ framework, can be defined as
$$R^2_{pop} \equiv \frac {\text{Var}(X\beta)}{\text{Var}(y)}=1-\frac{\text{Var}(u)}{\text{Var}(y)} = 1-\frac{\sigma^2}{\text{Var}(y)}$$
By writing the above, we essentially assume that $\text{Var}(y)$ exists and is finite.
The sample estimator can be written
$$\widehat{R^2} = 1- \frac{[1/(n-k)]\sum\hat u_i^2}{[1/(n-k)]\sum(y_i-\bar y)^2}$$
Under the standard assumptions and correct specification, $[1/(n-k)]\sum\hat u_i^2 \xrightarrow{p} \sigma^2$. Also, we have assumed an i.i.d. sample, and that the variance of $y$ exists and is finite. Therefore the sample analogue of this variance will be a consistent estimator of it, $[1/(n-k)]\sum(y_i-\bar y)^2 \xrightarrow{p}\text{Var}(y)$.
So under correct specification and since probability limits can be entered into the expression,
$$\widehat{R^2} \xrightarrow{p} R^2_{pop}$$.
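A small consistency check in R (my own sketch with simulated data; the coefficient and error variance below are chosen so that the population $R^2$ is exactly 0.5):
set.seed(123)
for (n in c(100, 10000, 1000000)) {
  x <- rnorm(n)
  y <- x + rnorm(n)                        # Var(X beta) = 1, sigma^2 = 1
  print(summary(lm(y ~ x))$r.squared)      # approaches R^2_pop = 0.5 as n grows
}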
Inclusion of irrelevant variables (or exclusion of relevant variables), i.e misspecification in the regressor matrix, will affect only the error variance estimator, not the dependent variable variance estimator, since this last one is calculated using only data of $y$, not of $X$. So: whenever a misspecification will cause the error variance estimator to be inconsistent, the $\widehat{R^2}$ estimator will also be inconsistent. | Sample $R^2$ consistent? | As whuber noted, consistency of the $\widehat{R^2}$ should be first examined under the assumption of correct specification, as we usually do with all estimators. It is a separate matter to examine wha | Sample $R^2$ consistent?
As whuber noted, consistency of the $\widehat{R^2}$ should be first examined under the assumption of correct specification, as we usually do with all estimators. It is a separate matter to examine what happens to consistency under misspecification, i.e. under the inclusion of irrelevant variables or the exclusion of relevant variables, or functional misspecification.
The population $R^2$, in a $y = X\beta +u$ framework, can be defined as
$$R^2_{pop} \equiv \frac {\text{Var}(X\beta)}{\text{Var}(y)}=1-\frac{\text{Var}(u)}{\text{Var}(y)} = 1-\frac{\sigma^2}{\text{Var}(y)}$$
By writing the above, we essentially assume that $\text{Var}(y)$ exists and is finite.
The sample estimator can be written
$$\widehat{R^2} = 1- \frac{[1/(n-k)]\sum\hat u_i^2}{[1/(n-k)]\sum(y_i-\bar y_i)^2}$$
Under the standard assumptions and correct specification, $1/(n-k)]\sum\hat u_i^2 \xrightarrow{p} \sigma^2$. Also, we have assumed an i.i.d. sample, and that the variance of $y$ exists and is finite. Therefore the sample analogue of this variance will be a consistent estimator of it, $[1/(n-k)]\sum(y_i-\bar y_i)^2 \xrightarrow{p}\text{Var}(y)$.
So under correct specification and since probability limits can be entered into the expression,
$$\widehat{R^2} \xrightarrow{p} R^2_{pop}$$.
Inclusion of irrelevant variables (or exclusion of relevant variables), i.e misspecification in the regressor matrix, will affect only the error variance estimator, not the dependent variable variance estimator, since this last one is calculated using only data of $y$, not of $X$. So: whenever a misspecification will cause the error variance estimator to be inconsistent, the $\widehat{R^2}$ estimator will also be inconsistent. | Sample $R^2$ consistent?
As whuber noted, consistency of the $\widehat{R^2}$ should be first examined under the assumption of correct specification, as we usually do with all estimators. It is a separate matter to examine wha |
53,780 | Help for Binomial Distribution question | What you are looking for is the probability generating function. This functions allows you to plug in the values given in your exercise and calculate the probabilities needed. A nice derivation of the probability generating function of the binomial distribution can be found under
http://economictheoryblog.com/2012/10/21/binomial-distribution/
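For reference, the probability generating function of a Binomial$(n,p)$ random variable $X$ is
$$G_X(s) = E[s^X] = \sum_{k=0}^n \binom{n}{k}p^k(1-p)^{n-k}s^k = (1-p+ps)^n ,$$
so each probability $P(X=k) = \binom{n}{k}p^k(1-p)^{n-k}$ can be read off as the coefficient of $s^k$; plugging the exercise's $n$, $p$ and $k$ into that expression gives the required probabilities.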
If you are interested it will additionally provide you with the expected value and variance estimation. | Help for Binomial Distribution question | What you are looking for is the probability generating function. This functions allows you to plug in the values given in your exercise and calculate the probabilities needed. A nice derivation of the | Help for Binomial Distribution question
What you are looking for is the probability generating function. This functions allows you to plug in the values given in your exercise and calculate the probabilities needed. A nice derivation of the probability generating function of the binomial distribution can be found under
http://economictheoryblog.com/2012/10/21/binomial-distribution/
If you are interested it will additionally provide you with the expected value and variance estimation. | Help for Binomial Distribution question
What you are looking for is the probability generating function. This functions allows you to plug in the values given in your exercise and calculate the probabilities needed. A nice derivation of the |
53,781 | Help for Binomial Distribution question | A binomial distribution models the number of positive outcomes in a number of independent true or false events (Bernoulli events) that each have the same probability of being true. You should ask yourself: for the four individuals being tested, does each person have the same probability of testing positive?
If you have independent events, the probability of a particular combination of outcomes is the product of the individual probabilities of those outcomes; the probability of a given number of positive outcomes is then that product multiplied by the number of orderings (permutations) in which the combination can occur. Apply this to the case of four events. You should take note that this is exactly the principle that the binomial distribution is built on. So if you only had two people, person A and person B, then
$$\begin{align*}
\mathbb{P}[&\text{exactly one person tests positive}] \\
&= \mathbb{P}[A \text{ tests positive}]\mathbb{P}[B \text{ tests negative}] + \mathbb{P}[A \text{ tests negative}]\mathbb{P}[B \text{ tests positive}]
\end{align*}$$ | Help for Binomial Distribution question | A binomial distribution models the number of positive outcomes in a number of independent true or false events (Bernoulli events) that each have the same probability of being true. You should ask your | Help for Binomial Distribution question
A binomial distribution models the number of positive outcomes in a number of independent true or false events (Bernoulli events) that each have the same probability of being true. You should ask yourself: for the four individuals being tested, does each person have the same probability of testing positive?
If you have two independent events, the probability of any two events occurring is the probability of those two events occurring individually multiplied by each other, then multiplied by the number of permutations that it can occur in. Apply this to the case of four events. You should take note that this is exactly the principle that the binomial distribution is built off of. So if you only had two people, person A and person B, then
$$\begin{align*}
\mathbb{P}[&\text{exactly one person tests positive}] \\
&= \mathbb{P}[A \text{ tests positive}]\mathbb{P}[B \text{ tests negative}] + \mathbb{P}[A \text{ tests negative}]\mathbb{P}[B \text{ tests positive}]
\end{align*}$$ | Help for Binomial Distribution question
A binomial distribution models the number of positive outcomes in a number of independent true or false events (Bernoulli events) that each have the same probability of being true. You should ask your |
53,782 | Obtaining an estimator via Rao-Blackwell theorem | We have
$$F_X(x) = \int_{\theta}^{x}e^{\theta -t} dt = -e^{\theta}e^{-t}\Big|^{x}_{\theta} = 1 - e^{\theta -x} $$
Since $F_{X_{(1)}}(x_{(1)}) = 1 -[1-F_X(x_{(1)})]^{n}$, the density function of the minimum order statistic is
$$f_{X_{(1)}}(x_{(1)}) = nf_X(x_{(1)})[1-F_X(x_{(1)})]^{n-1}I_{(\theta, \infty)}(x_{(1)}) = ne^{\theta -x_{(1)}}[e^{\theta -x_{(1)}}]^{n-1}I_{(\theta, \infty)}(x_{(1)})$$
$$\Rightarrow f_{X_{(1)}}(x_{(1)}) =ne^{n(\theta -x_{(1)})}I_{(\theta, \infty)}(x_{(1)})$$
Then
$$E[X_{(1)}] = \int_\theta^{\infty}x_{(1)}ne^{n(\theta -x_{(1)})}dx_{(1)} =\theta+\frac 1n$$
and so
$$ \hat \theta = X_{(1)} -\frac 1n$$
is an unbiased estimator based on the sufficient statistic. | Obtaining an estimator via Rao-Blackwell theorem | We have
$$F_X(x) = \int_{\theta}^{x}e^{\theta -t} dt = -e^{\theta}e^{-t}\Big|^{x}_{\theta} = 1 - e^{\theta -x} $$
Since $F_{X_{(1)}}(x_{(1)}) = 1 -[1-F_X(x_{(1)})]^{n}$, the density function of the m | Obtaining an estimator via Rao-Blackwell theorem
We have
$$F_X(x) = \int_{\theta}^{x}e^{\theta -t} dt = -e^{\theta}e^{-t}\Big|^{x}_{\theta} = 1 - e^{\theta -x} $$
Since $F_{X_{(1)}}(x_{(1)}) = 1 -[1-F_X(x_{(1)})]^{n}$, the density function of the minimum order statistic is
$$f_{X_{(1)}}(x_{(1)}) = nf_X(x_{(1)})[1-F_X(x_{(1)})]^{n-1}I(x)_{(\theta, \infty)} = ne^{\theta -x_{(1)}}[e^{\theta -x_{(1)}}]^{n-1}I(x)_{(\theta, \infty)}$$
$$\Rightarrow f_{X_{(1)}}(x_{(1)}) =ne^{n(\theta -x_{(1)})}I(x)_{(\theta, \infty)}$$
Then
$$E[X_{(1)}] = \int_\theta^{\infty}x_{(1)}ne^{n(\theta -x_{(1)})}dx_{(1)} =\theta+\frac 1n$$
and so
$$ \hat \theta = X_{(1)} -\frac 1n$$
is an unbiased estimator based on the sufficient statistic. | Obtaining an estimator via Rao-Blackwell theorem
We have
$$F_X(x) = \int_{\theta}^{x}e^{\theta -t} dt = -e^{\theta}e^{-t}\Big|^{x}_{\theta} = 1 - e^{\theta -x} $$
Since $F_{X_{(1)}}(x_{(1)}) = 1 -[1-F_X(x_{(1)})]^{n}$, the density function of the m |
53,783 | Obtaining an estimator via Rao-Blackwell theorem | The fact is that Alecos' answer is the easiest way to handle the problem, but the problem can be solved via Rao-Blackwell as well. Start with the joint density $$f(x_1,..., x_n | \theta) = e^{-\sum x_i + n\theta}\prod{I_{\theta < x_i}(x_i)}. $$
We know that $X_{[1]}$ is a sufficient statistic, so this factors as
$$f(x_1,...,x_n)=e^{-\sum x_i +nx_{[1]}}\times e^{-n(x_{[1]}-\theta)}\cdot I_{\theta < x_1}(x_{[1]}).$$
Now apply the Rao-Blackwell Theorem to this form of the density, and correct for the bias. You could also incorporate the shift ($\bar{X} - 1$) explicitly in the joint density, in which case the bias correction is incorporated directly. | Obtaining an estimator via Rao-Blackwell theorem | The fact is that Alecos' answer is the easiest way to handle the problem, but the problem can be solved via Rao-Blackwell as well. Start with the joint density $$f(x_1,..., x_n | \theta) = e^{-\sum x | Obtaining an estimator via Rao-Blackwell theorem
The fact is that Alecos' answer is the easiest way to handle the problem, but the problem can be solved via Rao-Blackwell as well. Start with the joint density $$f(x_1,..., x_n | \theta) = e^{-\sum x_i + n\theta}\prod{I_{\theta < x_i}(x_i)}. $$
We know that $X_{[1]}$ is a sufficient statistic, so this factors as
$$f(x_1,...,x_n)=e^{-\sum x_i +nx_{[1]}}\times e^{-n(x_{[1]}-\theta)}\cdot I_{\theta < x_1}(x_{[1]}).$$
Now apply the Rao-Blackwell Theorem to this form of the density, and correct for the bias. You could also incorporate the shift ($\bar{X} - 1$) explicitly in the joint density, in which case the bias correction is incorporated directly. | Obtaining an estimator via Rao-Blackwell theorem
The fact is that Alecos' answer is the easiest way to handle the problem, but the problem can be solved via Rao-Blackwell as well. Start with the joint density $$f(x_1,..., x_n | \theta) = e^{-\sum x |
53,784 | Analytic or sample standard deviation with binomial data | "Better" depends on context and purpose. Before addressing this issue, though, let's consider the data.
As a point of departure we might assume--hypothetically, being willing and happy to be proven wrong later in the analysis--that the outcomes of each subject's attempt at the task are independent. This permits us to hold up a simple model for scrutiny, one in which each subject $i$ has a constant chance $p_i$ of success with each attempt. It follows that the raw counts of successes $(x_i,\ i=1, 2, \ldots, n)$ consist of six (or, more generally, $n$) independent realizations of Binomial$(m, p_i)$ variables $X_i$ (with $m=20$ in this case). In this case the raw counts are $(19,16,16,13,18,14)$, obtained by multiplying the reported success rates by $20$.
This is a complicated model because it has as many parameters ($n$ of them) as there are data. To see whether the complication is worthwhile, we ought to compare this model to a simplified version. The simplest is that all the $p_i$ are equal: the subjects have equivalent abilities at the task. Is there a small set of simple, easily understood, summary statistics that might help give us some quick insight into which model would be appropriate?
In the spirit of an Analysis of Variance we might be inclined to compare the variance of the dataset--which will comprise the variances inherent in each of the $X_i$ together with the variance of the $p_i$--to some measure of the variance to be expected when all the $p_i$ are equal. Therefore we compute:
The mean of the $p_i$ is $p = (1/n)\sum_{i=1}^n p_i.$ There are several ways to estimate this, but one of the simplest--as justified by the hypothesis that all the $p_i$ are equal--is the sample mean,
$$\hat{p} = \frac{1}{n}\sum_{i=1}^n \frac{x_i}{m} = \frac{4}{5} = 0.8.$$
The variance of each $X_i$ is $m p_i(1-p_i)$; under the hypothesis of equality, this is $m p(1-p)$, which can be estimated as
$$\hat{\sigma} = m \hat{p} (1 - \hat{p}) = \frac{16}{5} = 3.2.$$
The variance of the data is
$$\text{Var}(x_i) = \frac{1}{n}\sum_{i=1}^n (x_i - m \hat{p})^2 = \frac{13}{3} = 4.\bar{3}.$$
Please notice that, in the spirit of description and exploration, division by $n-1$ in this variance calculation could be considered irrelevant. However, should one feel a need to so change the denominator, the result would be $26/5 = 5.2$.
The statistic (3) can be understood as arising from two components: the variation in subject performances due to chance plus the variation in capabilities between the subjects. That is why the two standard deviations computed in the question (which are the square roots of (2) and (3)) may differ. It becomes clear that they work together to give two separate pieces of information about the data.
It is attractive to take one more step. ANOVA teaches us that the relevant statistic to examine would be the ratio
$$\text{Var}(x_i) / \hat{\sigma}.$$
A value much greater than $1$ would indicate the $p_i$ should be treated as non-constant. In the present case--using the alternative expression for the variance employed in the question--this ratio equals $(26/5)/(16/5) = 13/8 = 1.625.$ This is precisely the square of the ratio of standard deviations reported in the question, $(0.1140/0.08944)^2$.
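The computations above can be reproduced with a few lines of R (a sketch; the counts are the raw successes implied by the question, i.e. the reported rates times $m = 20$):
x <- c(19, 16, 16, 13, 18, 14); m <- 20; n <- length(x)
p_hat   <- mean(x / m)                        # 0.8
sig_hat <- m * p_hat * (1 - p_hat)            # 3.2, binomial variance under equal p_i
var_n   <- sum((x - m * p_hat)^2) / n         # 13/3 = 4.33...
var_n1  <- sum((x - m * p_hat)^2) / (n - 1)   # 26/5 = 5.2
var_n1 / sig_hat                              # 1.625, the F-like ratio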
This analysis has provided a perspective in which the distinction between the two standard deviation calculations in the question can be both understood and used to gain insight into the data-generation mechanism:
The two calculations differ due to possible fluctuations in the subjects' capabilities.
Their ratio (when squared) can be interpreted as an ANOVA F-statistic, permitting its use in evaluating whether the apparent fluctuations may be due to chance or should be accepted as real.
To answer the question, then, one might wish to report both standard deviation calculations together with the F-like statistic given by the square of their ratio. | Analytic or sample standard deviation with binomial data | "Better" depends on context and purpose. Before addressing this issue, though, let's consider the data.
As a point of departure we might assume--hypothetically, being willing and happy to be proven w | Analytic or sample standard deviation with binomial data
"Better" depends on context and purpose. Before addressing this issue, though, let's consider the data.
As a point of departure we might assume--hypothetically, being willing and happy to be proven wrong later in the analysis--that the outcomes of each subject's attempt at the task are independent. This permits us to hold up a simple model for scrutiny, one in which each subject $i$ has a constant chance $p_i$ of success with each attempt. It follows that the raw counts of successes $(x_i,\ i=1, 2, \ldots, n)$ consist of six (or, more generally, $n$) independent realizations of Binomial$(m, p_i)$ variables $X_i$ (with $m=20$ in this case). In this case the raw counts are $(19,16,16,13,18,14)$, obtained by multiplying the reported success rates by $20$.
This is a complicated model because it has as many parameters ($n$ of them) as there are data. To see whether the complication is worthwhile, we ought to compare this model to a simplified version. The simplest is that all the $p_i$ are equal: the subjects have equivalent abilities at the task. Is there a small set of simple, easily understood, summary statistics that might help give us some quick insight into which model would be appropriate?
In the spirit of an Analysis of Variance we might be inclined to compare the variance of the dataset--which will comprise the variances inherent in each of the $X_i$ together with the variance of the $p_i$--to some measure of the variance to be expected when all the $p_i$ are equal. Therefore we compute:
The mean of the $p_i$ is $p = (1/n)\sum_{i=1}^n p_i.$ There are several ways to estimate this, but one of the simplest--as justified by the hypothesis that all the $p_i$ are equal--is the sample mean,
$$\hat{p} = \frac{1}{n}\sum_{i=1}^n \frac{x_i}{m} = \frac{4}{5} = 0.8.$$
The variance of each $X_i$ is $m p_i(1-p_i)$; under the hypothesis of equality, this is $m p(1-p)$, which can be estimated as
$$\hat{\sigma} = m \hat{p} (1 - \hat{p}) = \frac{16}{5} = 3.2.$$
The variance of the data is
$$\text{Var}(x_i) = \frac{1}{n}\sum_{i=1}^n (x_i - m \hat{p})^2 = \frac{13}{3} = 4.\bar{3}.$$
Please notice that, in the spirit of description and exploration, division by $n-1$ in this variance calculation could be considered irrelevant. However, should one feel a need to so change the denominator, the result would be $26/5 = 5.2$.
The last of these statistics--the variance of the data--can be understood as arising from two components: the variation in subject performances due to chance plus the variation in capabilities between the subjects. That is why the two standard deviations computed in the question (essentially the square roots of the hypothesized binomial variance and of the data variance) may differ, and why together they give two separate pieces of information about the data.
It is attractive to take one more step. ANOVA teaches us that the relevant statistic to examine would be the ratio
$$\text{Var}(x_i) / \hat{\sigma}.$$
A value much greater than $1$ would indicate the $p_i$ should be treated as non-constant. In the present case--using the alternative expression for the variance employed in the question--this ratio equals $(26/5)/(16/5) = 13/8 = 1.625.$ This is precisely the square of the ratio of standard deviations reported in the question, $(0.1140/0.08944)^2$.
This analysis has provided a perspective in which the distinction between the two standard deviation calculations in the question can be both understood and used to gain insight into the data-generation mechanism:
The two calculations differ due to possible fluctuations in the subjects' capabilities.
Their ratio (when squared) can be interpreted as an ANOVA F-statistic, permitting its use in evaluating whether the apparent fluctuations may be due to chance or should be accepted as real.
To answer the question, then, one might wish to report both standard deviation calculations together with the F-like statistic given by the square of their ratio. | Analytic or sample standard deviation with binomial data
"Better" depends on context and purpose. Before addressing this issue, though, let's consider the data.
As a point of departure we might assume--hypothetically, being willing and happy to be proven w |
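To make the arithmetic above easy to check, here is a small R sketch (my own illustration, not part of the original answer) that reproduces the pooled estimate, the two variance figures, and the F-like ratio directly from the raw counts:
x <- c(19, 16, 16, 13, 18, 14)                    # successes out of m = 20 per subject
m <- 20; n <- length(x)
p.hat <- mean(x / m)                              # 0.8
binom.var <- m * p.hat * (1 - p.hat)              # 3.2, variance expected if all p_i are equal
data.var <- sum((x - m * p.hat)^2) / n            # 26/6, variance of the observed counts
data.var.n1 <- sum((x - m * p.hat)^2) / (n - 1)   # 26/5 = 5.2 with the n-1 denominator
data.var.n1 / binom.var                           # 1.625, the F-like ratio = (0.1140/0.08944)^2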
53,785 | Analytic or sample standard deviation with binomial data | I've been looking for recommendations on whether it's better to use the sample standard deviation (SD) for a binomial distribution or use the analytic SD (or variance).
If it was an iid sample actually from a binomial distribution ... the MLE would be the usual analytic one; as such, at least in large samples, that's going to be a very good choice.
The problem is, the claim of being binomial isn't always as true as people might hope.
Summing 0-1 variables is not of itself sufficient to establish that something is binomial, since the binomial relies on independence and constant $p$, and frequently one or the other or both don't quite hold.
Do you know for certain that $p$ is constant, for example?
This is a query for descriptive purposes only. I could use a multi-level logistic regression to model but I'm just looking for simple descriptives.
If you're fitting a multilevel logistic model, that would suggest that $p$ isn't constant across all cases, in which case you shouldn't use the binomial to work out the standard deviation, and such a model would also suggest that independence (unconditionally) could be suspect.
If there's any doubt on constant $p$, make sure to quote the sample SD. If there's doubt on independence, it may be that neither measure is very meaningful, though it depends on what you're trying to achieve by quoting it.
Example: Each subject gets 20 tries at the task and there are 6 subjects with accuracy 0.95, 0.80, 0.80, 0.65, 0.90, 0.70. The mean accuracy would be 0.8 and therefore the analytic SD of that accuracy is sqrt(0.8 * 0.2 / 20) = 0.089. However, the SD of those six numbers calculated on the sample is 0.114. Which is the better SD estimate to use?
In these circumstances, I'd suggest you quote the sample SD (within-subject you might quote the analytic one, but beware of the possible effect of dependence within subjects). [The overall variance might be expected to be close to the sum of the within- and between- components.] | Analytic or sample standard deviation with binomial data | I've been looking for recommendations on whether it's better to use the sample standard deviation (SD) for a binomial distribution or use the analytic SD (or variance).
If it was an iid sample actua | Analytic or sample standard deviation with binomial data
I've been looking for recommendations on whether it's better to use the sample standard deviation (SD) for a binomial distribution or use the analytic SD (or variance).
If it was an iid sample actually from a binomial distribution ... the MLE would be the usual analytic one; as such, at least in large samples, that's going to be a very good choice.
The problem is, the claim of being binomial isn't always as true as people might hope.
Summing 0-1 variables is not of itself sufficient to establish that something is binomial, since the binomial relies on independence and constant $p$, and frequently one or the other or both don't quite hold.
Do you know for certain that $p$ is constant, for example?
This is a query for descriptive purposes only. I could use a multi-level logistic regression to model but I'm just looking for simple descriptives.
If you're fitting a multilevel logistic model, that would suggest that $p$ isn't constant across all cases, in which case you shouldn't use the binomial to work out the standard deviation, and such a model would also suggest that independence (unconditionally) could be suspect.
If there's any doubt on constant $p$, make sure to quote the sample SD. If there's doubt on independence, it may be that neither measure is very meaningful, though it depends on what you're trying to achieve by quoting it.
Example: Each subject gets 20 tries at the task and there are 6 subjects with accuracy 0.95, 0.80, 0.80, 0.65, 0.90, 0.70. The mean accuracy would be 0.8 and therefore the analytic SD of that accuracy is sqrt(0.8 * 0.2 / 20) = 0.089. However, the SD of those six numbers calculated on the sample is 0.114. Which is the better SD estimate to use?
In these circumstances, I'd suggest you quote the sample SD (within-subject you might quote the analytic one, but beware of the possible effect of dependence within subjects). [The overall variance might be expected to be close to the sum of the within- and between- components.] | Analytic or sample standard deviation with binomial data
I've been looking for recommendations on whether it's better to use the sample standard deviation (SD) for a binomial distribution or use the analytic SD (or variance).
If it was an iid sample actua |
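For concreteness, a short R sketch (my illustration; the variable names are mine) reproducing the two numbers quoted in the example:
acc <- c(0.95, 0.80, 0.80, 0.65, 0.90, 0.70)          # per-subject accuracy, 20 tries each
m <- 20
analytic.sd <- sqrt(mean(acc) * (1 - mean(acc)) / m)  # ~0.089, assumes a common constant p
sample.sd <- sd(acc)                                  # ~0.114, the spread actually observed
c(analytic = analytic.sd, sample = sample.sd)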
53,786 | Analytic or sample standard deviation with binomial data | reporting the central tendency and variability of a sample is a good thing to do
You report the central tendency with the mean accuracy. The question is, what should you use for the variability?
Note that your "analytical SD" only depends on the mean accuracy (and the number of tests) -- it thus can't give more information than the mean accuracy alone. Indeed, the analytical SD answers this question: "Assuming all subjects perform the same way, how big will the expected spread in the data be?" (There still will be some spread due to randomness.)
I don't think that is what you want to report.
You want to report how much spread there actually is in your data. This is what your "sample SD" calculates, so you should report that.
N.B: You might want to compare your "sample SD" (how big is the actual spread) to your "analytical SD" (how big do we expect the spread if all subjects perform the same), to check whether some subjects performed better than others. But to do this properly, you should perform a significance test. | Analytic or sample standard deviation with binomial data | reporting the central tendency and variability of a sample is a good thing to do
You report the central tendency with the mean accuracy. The question is, what should you use for the variability?
Not | Analytic or sample standard deviation with binomial data
reporting the central tendency and variability of a sample is a good thing to do
You report the central tendency with the mean accuracy. The question is, what should you use for the variability?
Note that your "analytical SD" only depends on the mean accuracy (and the number of tests) -- it thus can't give more information than the mean accuracy alone. Indeed, the analytical SD answers this question: "Assuming all subjects perform the same way, how big will the expected spread in the data be?" (There still will be some spread due to randomness.)
I don't think that is what you want to report.
You want to report how much spread there actually is in your data. This is what your "sample SD" calculates, so you should report that.
N.B: You might want to compare your "sample SD" (how big is the actual spread) to your "analytical SD" (how big do we expect the spread if all subjects perform the same), to check whether some subjects performed better than others. But to do this properly, you should perform a significance test. | Analytic or sample standard deviation with binomial data
reporting the central tendency and variability of a sample is a good thing to do
You report the central tendency with the mean accuracy. The question is, what should you use for the variability?
Not |
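One simple way to carry out such a significance test in R -- an illustration of mine, not something prescribed by the answer -- is a chi-squared test that all six success probabilities are equal:
successes <- c(19, 16, 16, 13, 18, 14)   # the example data: 20 tries per subject
trials <- rep(20, 6)
prop.test(successes, trials)             # H0: all six proportions are equal
# (with counts this small the chi-squared approximation is rough)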
53,787 | What is the intuition for dependence assumption in Benjamini and Hochberg (1995)? | Independence is more like a best-case assumption than a worst-case assumption. Loosely, when data are independent, each datum contains as much information as possible. If data were dependent, because their values can be predicted from other data, each additional datum must have less new information to contribute (the part that could have been predicted you already knew in some sense). The situation can be similar with multiple testing. In terms of simple alpha correction strategies, if the tests are independent, the Dunn-Sidak correction can be used:
$$
\alpha_{\rm DS} = 1 - (1-\alpha)^{1/k}
$$
but if the tests are not independent, the Bonferroni correction must be used:
$$
\alpha_{\rm B} = \frac{\alpha}{k}
$$
As is clear from the formulas, $\alpha_{\rm DS}\ge \alpha_{\rm B}$. | What is the intuition for dependence assumption in Benjamini and Hochberg (1995)? | Independence is more like a best-case assumption than a worst-case assumption. Loosely, when data are independent, each datum contains as much information as possible. If data were dependent, becaus | What is the intuition for dependence assumption in Benjamini and Hochberg (1995)?
Independence is more like a best-case assumption than a worst-case assumption. Loosely, when data are independent, each datum contains as much information as possible. If data were dependent, because their values can be predicted from other data, each additional datum must have less new information to contribute (the part that could have been predicted you already knew in some sense). The situation can be similar with multiple testing. In terms of simple alpha correction strategies, if the tests are independent, the Dunn-Sidak correction can be used:
$$
\alpha_{\rm DS} = 1 - (1-\alpha)^{1/k}
$$
but if the tests are not independent, the Bonferroni correction must be used:
$$
\alpha_{\rm B} = \frac{\alpha}{k}
$$
As is clear from the formulas, $\alpha_{\rm DS}\ge \alpha_{\rm B}$. | What is the intuition for dependence assumption in Benjamini and Hochberg (1995)?
Independence is more like a best-case assumption than a worst-case assumption. Loosely, when data are independent, each datum contains as much information as possible. If data were dependent, becaus |
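A quick R sketch (my own illustration) comparing the two per-test thresholds shows how close they are, with the Dunn-Sidak value always at least as large:
alpha <- 0.05
k <- 1:20                                  # number of tests
alpha.ds <- 1 - (1 - alpha)^(1 / k)        # Dunn-Sidak per-test level
alpha.b  <- alpha / k                      # Bonferroni per-test level
round(cbind(k, dunn.sidak = alpha.ds, bonferroni = alpha.b), 5)
all(alpha.ds >= alpha.b)                   # TRUE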
53,788 | What is the intuition for dependence assumption in Benjamini and Hochberg (1995)? | Gung is incorrect that Dunn-Sidak should be used under independence and Bonferroni must be used under dependence. In fact, Dunn-Sidak controls the FWER not only under independence, but also under positive dependence. And Bonferroni controls the FWER for any dependence structure--including independence.
To answer your question, the reason independence isn't the "worst case" is that in certain situations, you can theoretically have negative dependence. | What is the intuition for dependence assumption in Benjamini and Hochberg (1995)? | Gung is incorrect that Dunn-Sidak should be used under independence and Bonferroni must be used under dependence. In fact, Dunn-Sidak controls the FWER not only under independence, but also under posi | What is the intuition for dependence assumption in Benjamini and Hochberg (1995)?
Gung is incorrect that Dunn-Sidak should be used under independence and Bonferroni must be used under dependence. In fact, Dunn-Sidak controls the FWER not only under independence, but also under positive dependence. And Bonferroni controls the FWER for any dependence structure--including independence.
To answer your question, the reason independence isn't the "worst case" is that in certain situations, you can theoretically have negative dependence. | What is the intuition for dependence assumption in Benjamini and Hochberg (1995)?
Gung is incorrect that Dunn-Sidak should be used under independence and Bonferroni must be used under dependence. In fact, Dunn-Sidak controls the FWER not only under independence, but also under posi |
53,789 | importance of predictor variables in multiple linear regression | If you are using R you can use the caret package which has a built in method to give variable importance. See this link (http://caret.r-forge.r-project.org/varimp.html)
You basically will just have to do
varImp(mod, scale = FALSE) | importance of predictor variables in multiple linear regression | If you are using R you can use the caret package which has a built in method to give variable importance. See this link (http://caret.r-forge.r-project.org/varimp.html)
You basically will just have to | importance of predictor variables in multiple linear regression
If you are using R you can use the caret package which has a built in method to give variable importance. See this link (http://caret.r-forge.r-project.org/varimp.html)
You basically will just have to do
varImp(mod, scale = FALSE) | importance of predictor variables in multiple linear regression
If you are using R you can use the caret package which has a built in method to give variable importance. See this link (http://caret.r-forge.r-project.org/varimp.html)
You basically will just have to |
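Note that mod in the snippet above is assumed to be a model already fitted with caret's train(); for completeness, here is a minimal runnable sketch (my example data, not the original poster's):
library(caret)
mod <- train(mpg ~ ., data = mtcars, method = "lm")   # any train() model would do
varImp(mod, scale = FALSE)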
53,790 | Pull out most important variables from PCA | The "most important" principal component is usually considered to be the one with the largest eigenvalue. If your package works in the usual way this should be the first principal component, PC1. To see how important each component is, divide each eigenvalue by the sum of all the eigenvalues (for a PCA on the correlation matrix this sum equals the number of variables you are decomposing). This tells you the proportion of the variation in the data "explained" by each component. How many components you use is ultimately up to you, though you may want to look at this paper.
EDIT To find the most important variables in terms of their contributions to the principal component, you will indeed have to look at loadings. Loadings are the projections of the principal components onto your variables. A particularly high (or particularly low) loading for a specific variable means that principal component is intimately related to the variable. My experience with PCA is mostly from stock returns, where we think of high loadings as representing some exposure for a company from an underlying risk. In this setting, high loadings mean lots of exposure. Here you could think of them as some common subject matter across papers, and the loading is how much that paper fits into that subject (or depending on how your data is organized how much the keyword fits into that subject).
So it absolutely makes sense to look at those variables which contribute most to your principal component, and to find them in terms of the absolute value of their loadings - as the meaning of a principle component is ultimately unclear. | Pull out most important variables from PCA | The "most important" principal component is usually considered to be the one with the largest eigenvalue. If your package works in the usual way this should be the first principal component, PC1. To s | Pull out most important variables from PCA
The "most important" principal component is usually considered to be the one with the largest eigenvalue. If your package works in the usual way this should be the first principal component, PC1. To see how important each component is, divide the eigenvalues by the number of variables you are decomposing. This tells you the percent of the variation in the data "explained" by each component. How many components you use is ultimately up to you, though you may want to look at this paper.
EDIT To find the most important variables in terms of their contributions to the principal component, you will indeed have to look at loadings. Loadings are the projections of the principal components onto your variables. A particularly high (or particularly low) loading for a specific variable means that principal component is intimately related to the variable. My experience with PCA is mostly from stock returns, where we think of high loadings as representing some exposure for a company from an underlying risk. In this setting, high loadings mean lots of exposure. Here you could think of them as some common subject matter across papers, and the loading is how much that paper fits into that subject (or depending on how your data is organized how much the keyword fits into that subject).
So it absolutely makes sense to look at those variables which contribute most to your principal component, and to find them in terms of the absolute value of their loadings - as the meaning of a principle component is ultimately unclear. | Pull out most important variables from PCA
The "most important" principal component is usually considered to be the one with the largest eigenvalue. If your package works in the usual way this should be the first principal component, PC1. To s |
53,791 | Is it important in R to convert “integer” variables (with 0 or 1 values) to factors? [closed] | R's factor() command is just a shortcut to having to manually create the indicator variables for each (except one) value of a categorical variable. Since you are starting with indicator variables, you don't need to do anything. | Is it important in R to convert “integer” variables (with 0 or 1 values) to factors? [closed] | R's factor() command is just a shortcut to having to manually create the indicator variables for each (except one) value of a categorical variable. Since you are starting with indicator variables, you | Is it important in R to convert “integer” variables (with 0 or 1 values) to factors? [closed]
R's factor() command is just a shortcut to having to manually create the indicator variables for each (except one) value of a categorical variable. Since you are starting with indicator variables, you don't need to do anything. | Is it important in R to convert “integer” variables (with 0 or 1 values) to factors? [closed]
R's factor() command is just a shortcut to having to manually create the indicator variables for each (except one) value of a categorical variable. Since you are starting with indicator variables, you |
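A quick check (my illustration) that a 0/1 predictor gives the same regression fit whether it is left numeric or wrapped in factor():
set.seed(1)
d <- data.frame(y = rnorm(100), x = rbinom(100, 1, 0.5))
coef(lm(y ~ x, data = d))           # numeric 0/1 predictor
coef(lm(y ~ factor(x), data = d))   # same slope, just labelled "factor(x)1"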
53,792 | Is it important in R to convert “integer” variables (with 0 or 1 values) to factors? [closed] | But be careful if the values you mentioned are your output values. If you use for example the caret package, which describes itself as "a set of functions that attempt to streamline the process for creating predictive models." So it is basically a layer on top of many different machine learning packages in R.
If you use the train method and choose for example a SVM algorithm and your output values are integer values, the algorithm will handle it as a regression problem automatically and not as a classification problem. This will probably decrease the performance of the algorithm including accuracy and sensitivity.
So if these values are your output or y values and your dataset belongs to a classification problem, transform the values to factors. If the values are "just" x or feature values I totally agree with Andy and you can leave them as integers.
Regards | Is it important in R to convert “integer” variables (with 0 or 1 values) to factors? [closed] | But be careful if the values you mentioned are your output values. If you use for example the caret package, which describes itself as "a set of functions that attempt to streamline the process for cr | Is it important in R to convert “integer” variables (with 0 or 1 values) to factors? [closed]
But be careful if the values you mentioned are your output values. If you use for example the caret package, which describes itself as "a set of functions that attempt to streamline the process for creating predictive models." So it is basically a layer on top of many different machine learning packages in R.
If you use the train method and choose for example a SVM algorithm and your output values are integer values, the algorithm will handle it as a regression problem automatically and not as a classification problem. This will probably decrease the performance of the algorithm including accuracy and sensitivity.
So if these values are your output or y values and your dataset belongs to a classification problem, transform the values to factors. If the values are "just" x or feature values I totally agree with Andy and you can leave them as integers.
Regards | Is it important in R to convert “integer” variables (with 0 or 1 values) to factors? [closed]
But be careful if the values you mentioned are your output values. If you use for example the caret package, which describes itself as "a set of functions that attempt to streamline the process for cr |
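A minimal sketch of that conversion (my own toy data; a GLM is used instead of an SVM so it runs without extra packages -- the point is only that a factor outcome makes train() treat the task as classification):
library(caret)
set.seed(1)
x <- data.frame(x1 = rnorm(60), x2 = rnorm(60))
y01 <- rbinom(60, 1, plogis(x$x1))                           # integer 0/1 outcome
y <- factor(y01, levels = c(0, 1), labels = c("no", "yes"))  # convert before training
fit <- train(x, y, method = "glm")                           # classification, reports Accuracy/Kappa
fit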
53,793 | Is it important in R to convert “integer” variables (with 0 or 1 values) to factors? [closed] | Take a look at the R ade4 package's dudi.mix and dudi.hillsmith functions. These are natural extensions of PCA for mixed numerical and categorical data. You'll get different (better!) results using these functions if you first convert an integer column containing a categorical variable into a factor. From the documentation: "dudi.hillsmith allow to use various row weights, while dudi.mix deals with ordered variables. The principal components of this analysis are centered and normed vectors maximizing the sum of squared correlation coefficients with quantitative variables [and] correlation ratios with factors." See https://cran.r-project.org/web/packages/ade4/ade4.pdf. | Is it important in R to convert “integer” variables (with 0 or 1 values) to factors? [closed] | Take a look at the R ade4 package's dudi.mix and dudi.hillsmith functions. These are natural extensions of PCA for mixed numerical and categorical data. You'll get different (better!) results using | Is it important in R to convert “integer” variables (with 0 or 1 values) to factors? [closed]
Take a look at the R ade4 package's dudi.mix and dudi.hillsmith functions. These are natural extensions of PCA for mixed numerical and categorical data. You'll get different (better!) results using these functions if you first convert an integer column containing a categorical variable into a factor. From the documentation: "dudi.hillsmith allow to use various row weights, while dudi.mix deals with ordered variables. The principal components of this analysis are centered and normed vectors maximizing the sum of squared correlation coefficients with quantitative variables [and] correlation ratios with factors." See https://cran.r-project.org/web/packages/ade4/ade4.pdf. | Is it important in R to convert “integer” variables (with 0 or 1 values) to factors? [closed]
Take a look at the R ade4 package's dudi.mix and dudi.hillsmith functions. These are natural extensions of PCA for mixed numerical and categorical data. You'll get different (better!) results using |
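A rough sketch of the call (my assumption about the interface based on the documentation quoted above -- check ?dudi.mix for the exact arguments):
library(ade4)
d <- data.frame(num1 = rnorm(30), num2 = rnorm(30),
                grp = factor(sample(c("a", "b"), 30, replace = TRUE)))
mix <- dudi.mix(d, scannf = FALSE, nf = 2)   # scannf = FALSE skips the interactive screeplot
mix$eig                                      # eigenvalues
head(mix$li)                                 # row scores on the kept axes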
53,794 | cubic relationship after linear relationship | While you don't specify, I assume you mean the model to be both continuous and smooth at the join.
Such a model might be called a semi-natural cubic spline (natural on one side, ordinary on the other). You can build one by having a linear predictor and a cubic-spline basis function of the form $(x-k)_+^3$ where $k$ is the knot location, and $(.)_+$ means that when the argument is negative the output is 0, while when it's non-negative, the argument is returned unchanged.
e.g. see section 5.2 of Hastie, Tibshirani and Friedman's "Elements of Statistical Learning" 2nd ed (10th printing is downloadable from at least two of the author's web pages) -- see the link here (scroll down if you don't see it).
Your problem reduces to one of automated knot selection (sometimes called knot placement). Hastie et al do discuss this knot selection issue (and problems with it, along with some alternatives to it), that's probably a good place to start. There are many papers relating to this issue (one example is here, though it may not be a suitable reference for you at present).
More prosaically, one such knot can be placed by trying a variety of points; it's equivalent to selecting between the possible candidate locations based on some criterion (such as AIC). However, this has the same problem that model selection more generally has (for which, again, Hastie et al is probably a good place to start, or you could read say chapter 4 of Frank Harrell's Regression Modeling Strategies).
You might want to investigate approaches like cross-validation. | cubic relationship after linear relationship | While you don't specify, I assume you mean the model to be both continuous and smooth at the join.
Such a model might be called a semi-natural cubic spline (natural on one side, ordinary on the other) | cubic relationship after linear relationship
While you don't specify, I assume you mean the model to be both continuous and smooth at the join.
Such a model might be called a semi-natural cubic spline (natural on one side, ordinary on the other). You can build one by having a linear predictor and a cubic-spline basis function of the form $(x-k)_+^3$ where $k$ is the knot location, and $(.)_+$ means that when the argument is negative the output is 0, while when it's non-negative, the argument is returned unchanged.
e.g. see section 5.2 of Hastie, Tibshirani and Friedman's "Elements of Statistical Learning" 2nd ed (10th printing is downloadable from at least two of the author's web pages) -- see the link here (scroll down if you don't see it).
Your problem reduces to one of automated knot selection (sometimes called knot placement). Hastie et al do discuss this knot selection issue (and problems with it, along with some alternatives to it), that's probably a good place to start. There are many papers relating to this issue (one example is here, though it may not be a suitable reference for you at present).
More prosaically, one such knot can be placed by trying a variety of points; it's equivalent to selecting between the possible candidate locations based on some criterion (such as AIC). However, this has the same problem that model selection more generally has (for which, again, Hastie et al is probably a good place to start, or you could read say chapter 4 of Frank Harrell's Regression Modeling Strategies).
You might want to investigate approaches like cross-validation. | cubic relationship after linear relationship
While you don't specify, I assume you mean the model to be both continuous and smooth at the join.
Such a model might be called a semi-natural cubic spline (natural on one side, ordinary on the other) |
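To make the construction concrete, here is a small R sketch (entirely my own illustration) of a fit that is linear below the knot and cubic above it, continuous and smooth at the join, with the knot chosen from a grid by AIC -- subject to the model-selection caveats mentioned above:
set.seed(1)
x <- runif(200, 0, 10)
y <- ifelse(x < 5, x, x + 0.3 * (x - 5)^3) + rnorm(200)     # true knot at 5
fit.at.knot <- function(k) lm(y ~ x + I(pmax(x - k, 0)^3))  # linear term + truncated cubic basis
knots <- seq(1, 9, by = 0.25)
aics <- sapply(knots, function(k) AIC(fit.at.knot(k)))
best.knot <- knots[which.min(aics)]
best.knot
summary(fit.at.knot(best.knot))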
53,795 | cubic relationship after linear relationship | Of course you could always include the sought point into your optimization problem, which then, however, will become non-linear in general and thus more difficult to solve than the normal linear regression problem.
I think one standard approach is to assign a set of points before the calculation and then use piecewise polynomials in the arising regions (or splines if you need continuity at the boundaries). Often, the nodes are determined by all or a subset of the data points.
Alternatively, also tree schmemes like CART (classification and regression trees) could be used (with a generalized linear model in the leafs). The approach the optimization problem mentioned before by a greedy algorithm. | cubic relationship after linear relationship | Of course you could always include the sought point into your optimization problem, which then, however, will become non-linear in general and thus more difficult to solve than the normal linear regre | cubic relationship after linear relationship
Of course you could always include the sought point into your optimization problem, which then, however, will become non-linear in general and thus more difficult to solve than the normal linear regression problem.
I think one standard approach is to assign a set of points before the calculation and then use piecewise polynomials in the arising regions (or splines if you need continuity at the boundaries). Often, the nodes are determined by all or a subset of the data points.
Alternatively, tree schemes like CART (classification and regression trees) could also be used (with a generalized linear model in the leaves). These approach the optimization problem mentioned before with a greedy algorithm. | cubic relationship after linear relationship
Of course you could always include the sought point into your optimization problem, which then, however, will become non-linear in general and thus more difficult to solve than the normal linear regre |
53,796 | How to code binary (0/1) predictor variables in regression? Numeric versus factor | In linear regression, if they are independent variables and 1 and 0 are the only possible outcomes, then either way is fine.
Modeled as binary, but specified it as if it's continuous (data and syntax are of Stata 12):
. sysuse auto
. reg mpg foreign
Source | SS df MS Number of obs = 74
-------------+------------------------------ F( 1, 72) = 13.18
Model | 378.153515 1 378.153515 Prob > F = 0.0005
Residual | 2065.30594 72 28.6848048 R-squared = 0.1548
-------------+------------------------------ Adj R-squared = 0.1430
Total | 2443.45946 73 33.4720474 Root MSE = 5.3558
------------------------------------------------------------------------------
mpg | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
foreign | 4.945804 1.362162 3.63 0.001 2.230384 7.661225
_cons | 19.82692 .7427186 26.70 0.000 18.34634 21.30751
------------------------------------------------------------------------------
Modeled as factors:
. reg mpg i.foreign
Source | SS df MS Number of obs = 74
-------------+------------------------------ F( 1, 72) = 13.18
Model | 378.153515 1 378.153515 Prob > F = 0.0005
Residual | 2065.30594 72 28.6848048 R-squared = 0.1548
-------------+------------------------------ Adj R-squared = 0.1430
Total | 2443.45946 73 33.4720474 Root MSE = 5.3558
------------------------------------------------------------------------------
mpg | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
1.foreign | 4.945804 1.362162 3.63 0.001 2.230384 7.661225
_cons | 19.82692 .7427186 26.70 0.000 18.34634 21.30751
------------------------------------------------------------------------------
It's worth noticing that if modeled as a continuous variable (though bear in mind it's actually binary), the reference group is always whatever coded as 0. In some statistical software, however, binary variables modeled as factors may have its reference group swapped to whatever = 1. The ANOVA and F statistics will not be affected but the regression coefficients can change (due to reference group being reassigned.) Check the output carefully. | How to code binary (0/1) predictor variables in regression? Numeric versus factor | In linear regression, if they are independent variables and 1 and 0 are the only possible outcomes, then either way is fine.
Modeled as binary, but specified it as if it's continuous (data and syntax | How to code binary (0/1) predictor variables in regression? Numeric versus factor
In linear regression, if they are independent variables and 1 and 0 are the only possible outcomes, then either way is fine.
Modeled as binary, but specified it as if it's continuous (data and syntax are of Stata 12):
. sysuse auto
. reg mpg foreign
Source | SS df MS Number of obs = 74
-------------+------------------------------ F( 1, 72) = 13.18
Model | 378.153515 1 378.153515 Prob > F = 0.0005
Residual | 2065.30594 72 28.6848048 R-squared = 0.1548
-------------+------------------------------ Adj R-squared = 0.1430
Total | 2443.45946 73 33.4720474 Root MSE = 5.3558
------------------------------------------------------------------------------
mpg | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
foreign | 4.945804 1.362162 3.63 0.001 2.230384 7.661225
_cons | 19.82692 .7427186 26.70 0.000 18.34634 21.30751
------------------------------------------------------------------------------
Modeled as factors:
. reg mpg i.foreign
Source | SS df MS Number of obs = 74
-------------+------------------------------ F( 1, 72) = 13.18
Model | 378.153515 1 378.153515 Prob > F = 0.0005
Residual | 2065.30594 72 28.6848048 R-squared = 0.1548
-------------+------------------------------ Adj R-squared = 0.1430
Total | 2443.45946 73 33.4720474 Root MSE = 5.3558
------------------------------------------------------------------------------
mpg | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
1.foreign | 4.945804 1.362162 3.63 0.001 2.230384 7.661225
_cons | 19.82692 .7427186 26.70 0.000 18.34634 21.30751
------------------------------------------------------------------------------
It's worth noticing that if modeled as a continuous variable (though bear in mind it's actually binary), the reference group is always whatever coded as 0. In some statistical software, however, binary variables modeled as factors may have its reference group swapped to whatever = 1. The ANOVA and F statistics will not be affected but the regression coefficients can change (due to reference group being reassigned.) Check the output carefully. | How to code binary (0/1) predictor variables in regression? Numeric versus factor
In linear regression, if they are independent variables and 1 and 0 are the only possible outcomes, then either way is fine.
Modeled as binary, but specified it as if it's continuous (data and syntax |
53,797 | How to code binary (0/1) predictor variables in regression? Numeric versus factor | In R, it doesn't matter if they are factors or numeric variables. But be sure to indicate that you're doing a logistic regression by indicating family=binomial in, for example, a general linear model or mixed effects model.
Without indicating this, the assumed variance of the distribution will differ. In a binomial family, the variance (dispersion parameter) is taken to be 1, unlike in gaussian family. | How to code binary (0/1) predictor variables in regression? Numeric versus factor | In R, it doesn't matter if they are factors or numeric variables. But be sure to indicate that you're doing a logistic regression by indicating family=binomial in, for example, a general linear model | How to code binary (0/1) predictor variables in regression? Numeric versus factor
In R, it doesn't matter if they are factors or numeric variables. But be sure to indicate that you're doing a logistic regression by indicating family=binomial in, for example, a general linear model or mixed effects model.
Without indicating this, the assumed variance of the distribution will differ. In a binomial family, the variance (dispersion parameter) is taken to be 1, unlike in gaussian family. | How to code binary (0/1) predictor variables in regression? Numeric versus factor
In R, it doesn't matter if they are factors or numeric variables. But be sure to indicate that you're doing a logistic regression by indicating family=binomial in, for example, a general linear model |
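A small check (my illustration) of the difference the family argument makes:
set.seed(1)
x <- rnorm(200)
y <- rbinom(200, 1, plogis(x))
fit.logit <- glm(y ~ x, family = binomial)   # logistic regression, dispersion fixed at 1
fit.gauss <- glm(y ~ x)                      # default gaussian family (ordinary least squares)
summary(fit.logit)$dispersion                # 1
summary(fit.gauss)$dispersion                # estimated residual variance instead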
53,798 | Adding interactions to logistic regression leads to high SEs | As @NickStauner and you have surmised, this is due to separation.
It is always worth looking at your data! When your data are binary, this is less obvious, but you can see a lot with table(). For example, another problem that causes SEs to expand is multicollinearity (which we think of with continuous variables, but can happen with binary covariates as well). Here's a quick check to see if A is collinear with B:
summary(my.data)
# X A B
# 0:17 1:26 no :23
# 1:32 2:23 yes:26
with(my.data, table(A, B))
# B
# A no yes
# 1 10 16
# 2 13 10
So, we don't see anything suspicious there. Now we can check for separation:
with(my.data, table(A, X, B))
# , , B = no
#
# X
# A 0 1
# 1 0 10
# 2 5 8
#
# , , B = yes
#
# X
# A 0 1
# 1 5 11
# 2 7 3
The culprit is that there are no instances of X = 0 when A = 1 and B = "no". To check, we can add such an observation and re-run the analysis:
my.data.a = rbind(my.data, c(0, 1, "no"))
tail(my.data.a)
# X A B
# 45 1 1 yes
# 46 0 1 yes
# 47 1 2 no
# 48 1 1 no
# 49 1 1 yes
# 50 0 1 no
The fake observation shows up in the 50th row. Let's run the analysis and compare the output:
model4a <- glm(X~A+B+A*B, family=binomial(logit), data=my.data.a)
summary(model4)
# ...
# Coefficients:
# Estimate Std. Error z value Pr(>|z|)
# (Intercept) 18.57 2062.64 0.009 0.993
# A2 -18.10 2062.64 -0.009 0.993
# Byes -17.78 2062.64 -0.009 0.993
# A2:Byes 16.46 2062.64 0.008 0.994
# ...
#
# Number of Fisher Scoring iterations: 17
summary(model4a)
# ...
# Coefficients:
# Estimate Std. Error z value Pr(>|z|)
# (Intercept) 2.3026 1.0486 2.196 0.0281 *
# A2 -1.8326 1.1935 -1.535 0.1247
# Byes -1.5141 1.1792 -1.284 0.1991
# A2:Byes 0.1968 1.4804 0.133 0.8942
# ---
# Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
# ...
#
# Number of Fisher Scoring iterations: 4
With the fake observation added in, there is no separation in that combination of factor levels, and the SEs look normal.
Another indication that separation was to blame is that the Number of Fisher Scoring iterations was very high (17), whereas 4 is more typical of the Newton-Raphson search algorithm. It just kept going further and further out looking for the minimum deviance. Because of the separation, there is no minimum, but eventually the rate of decrease drops below some threshold and the algorithm stops. In that region, the deviance is very flat, so you get very large SEs.
Remember that adding a fake observation is not a valid analysis, so throw model4a away! There is an excellent answer discussing how to deal with separation here: How to deal with perfect separation in logistic regression? | Adding interactions to logistic regression leads to high SEs | As @NickStauner and you have surmised, this is due to separation.
It is always worth looking at your data! When your data are binary, this is less obvious, but you can see a lot with table(). For e | Adding interactions to logistic regression leads to high SEs
As @NickStauner and you have surmised, this is due to separation.
It is always worth looking at your data! When your data are binary, this is less obvious, but you can see a lot with table(). For example, another problem that causes SEs to expand is multicollinearity (which we think of with continuous variables, but can happen with binary covariates as well). Here's a quick check to see if A is collinear with B:
summary(my.data)
# X A B
# 0:17 1:26 no :23
# 1:32 2:23 yes:26
with(my.data, table(A, B))
# B
# A no yes
# 1 10 16
# 2 13 10
So, we don't see anything suspicious there. Now we can check for separation:
with(my.data, table(A, X, B))
# , , B = no
#
# X
# A 0 1
# 1 0 10
# 2 5 8
#
# , , B = yes
#
# X
# A 0 1
# 1 5 11
# 2 7 3
The culprit is that there are no instances of X = 0 when A = 1 and B = "no". To check, we can add such an observation and re-run the analysis:
my.data.a = rbind(my.data, c(0, 1, "no"))
tail(my.data.a)
# X A B
# 45 1 1 yes
# 46 0 1 yes
# 47 1 2 no
# 48 1 1 no
# 49 1 1 yes
# 50 0 1 no
The fake observation shows up in the 50th row. Let's run the analysis and compare the output:
model4a <- glm(X~A+B+A*B, family=binomial(logit), data=my.data.a)
summary(model4)
# ...
# Coefficients:
# Estimate Std. Error z value Pr(>|z|)
# (Intercept) 18.57 2062.64 0.009 0.993
# A2 -18.10 2062.64 -0.009 0.993
# Byes -17.78 2062.64 -0.009 0.993
# A2:Byes 16.46 2062.64 0.008 0.994
# ...
#
# Number of Fisher Scoring iterations: 17
summary(model4a)
# ...
# Coefficients:
# Estimate Std. Error z value Pr(>|z|)
# (Intercept) 2.3026 1.0486 2.196 0.0281 *
# A2 -1.8326 1.1935 -1.535 0.1247
# Byes -1.5141 1.1792 -1.284 0.1991
# A2:Byes 0.1968 1.4804 0.133 0.8942
# ---
# Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
# ...
#
# Number of Fisher Scoring iterations: 4
With the fake observation added in, there is no separation in that combination of factor levels, and the SEs look normal.
Another indication that separation was to blame is that the Number of Fisher Scoring iterations was very high (17), whereas 4 is more typical of the Newton-Raphson search algorithm. It just kept going further and further out looking for the minimum deviance. Because of the separation, there is no minimum, but eventually the rate of decrease drops below some threshold and the algorithm stops. In that region, the deviance is very flat, so you get very large SEs.
Remember that adding a fake observation is not a valid analysis, so throw model4a away! There is an excellent answer discussing how to deal with separation here: How to deal with perfect separation in logistic regression? | Adding interactions to logistic regression leads to high SEs
As @NickStauner and you have surmised, this is due to separation.
It is always worth looking at your data! When your data are binary, this is less obvious, but you can see a lot with table(). For e |
53,799 | Adding interactions to logistic regression leads to high SEs | Sure enough, you have "perfect" prediction with the interaction term; subset(my.data,A==1&B=='no') yields all 1s for X. The Bayesian alternative you've already chosen is one way to go in handling this. As Avitus and Scortchi have suggested, Firth's (1993) method of penalizing the model to reduce bias is another. Here's how that performs by default (had to convert the data back to numeric to get it to run):
require(logistf);summary(logistf(X~A*B,lapply(my.data,as.numeric)))
Model fitted by Penalized ML; Confidence intervals and p-values by Profile Likelihood
coef se(coef) lower 0.95 upper 0.95 Chisq p
(Intercept) 8.7500000 13.253712 -14.45312 31.640625 10.176059 0.00142276
A -2.5489062 6.982541 -14.16579 13.233453 6.302140 0.01205923
B -2.3105651 6.984261 -13.93029 14.597185 4.436414 0.03518007
A:B 0.7167941 3.728474 -7.71979 6.651535 1.552630 0.21274756
Likelihood ratio test=18.18618 on 3 df, p=0.0004026211, n=49
Wald test = 1.084365 on 3 df, p = 0.7808497
Scortchi's answer suggests the hlr package offers yet another option (among others I won't review here), but I haven't been able to make it work for these data...
Reference
Firth, D. (1993). Bias reduction of maximum likelihood estimates. Biometrika, 80(1), 27–38. Retrieved from http://www.stat.duke.edu/~scs/Courses/Stat376/Papers/GibbsFieldEst/BiasReductionMLE.pdf. | Adding interactions to logistic regression leads to high SEs | Sure enough, you have "perfect" prediction with the interaction term; subset(my.data,A==1&B=='no') yields all 1s for X. The Bayesian alternative you've already chosen is one way to go in handling this | Adding interactions to logistic regression leads to high SEs
Sure enough, you have "perfect" prediction with the interaction term; subset(my.data,A==1&B=='no') yields all 1s for X. The Bayesian alternative you've already chosen is one way to go in handling this. As Avitus and Scortchi have suggested, Firth's (1993) method of penalizing the model to reduce bias is another. Here's how that performs by default (had to convert the data back to numeric to get it to run):
require(logistf);summary(logistf(X~A*B,lapply(my.data,as.numeric)))
Model fitted by Penalized ML; Confidence intervals and p-values by Profile Likelihood
coef se(coef) lower 0.95 upper 0.95 Chisq p
(Intercept) 8.7500000 13.253712 -14.45312 31.640625 10.176059 0.00142276
A -2.5489062 6.982541 -14.16579 13.233453 6.302140 0.01205923
B -2.3105651 6.984261 -13.93029 14.597185 4.436414 0.03518007
A:B 0.7167941 3.728474 -7.71979 6.651535 1.552630 0.21274756
Likelihood ratio test=18.18618 on 3 df, p=0.0004026211, n=49
Wald test = 1.084365 on 3 df, p = 0.7808497
Scortchi's answer suggests the hlr package offers yet another option (among others I won't review here), but I haven't been able to make it work for these data...
Reference
Firth, D. (1993). Bias reduction of maximum likelihood estimates. Biometrika, 80(1), 27–38. Retrieved from http://www.stat.duke.edu/~scs/Courses/Stat376/Papers/GibbsFieldEst/BiasReductionMLE.pdf. | Adding interactions to logistic regression leads to high SEs
Sure enough, you have "perfect" prediction with the interaction term; subset(my.data,A==1&B=='no') yields all 1s for X. The Bayesian alternative you've already chosen is one way to go in handling this |
53,800 | Why do the 95% confidence limits in ARIMA models widen at the forecasts? | Think of ARIMA as ARMA on the differences. For instance, if you have a variable $y_t$, then ARIMA model would be similar to a ARMA on $\Delta y_t=y_t-y_{t-1}$. Next, when you forecast $y_{t+h}=y_t+\sum_{i=1}^h\Delta y_{t+i}$, since $Var[\Delta y_{t+h}]=\sigma^2$, the sum will grow and the forecast confidence will too: $Var[\sum_{i=1}^h\Delta y_{t+i}]=h\cdot\sigma^2$ | Why do the 95% confidence limits in ARIMA models widen at the forecasts? | Think of ARIMA as ARMA on the differences. For instance, if you have a variable $y_t$, then ARIMA model would be similar to a ARMA on $\Delta y_t=y_t-y_{t-1}$. Next, when you forecast $y_{t+h}=y_t+\su | Why do the 95% confidence limits in ARIMA models widen at the forecasts?
Think of ARIMA as ARMA on the differences. For instance, if you have a variable $y_t$, then ARIMA model would be similar to a ARMA on $\Delta y_t=y_t-y_{t-1}$. Next, when you forecast $y_{t+h}=y_t+\sum_{i=1}^h\Delta y_{t+i}$, since $Var[\Delta y_{t+h}]=\sigma^2$, the sum will grow and the forecast confidence will too: $Var[\sum_{i=1}^h\Delta y_{t+i}]=h\cdot\sigma^2$ | Why do the 95% confidence limits in ARIMA models widen at the forecasts?
Think of ARIMA as ARMA on the differences. For instance, if you have a variable $y_t$, then ARIMA model would be similar to a ARMA on $\Delta y_t=y_t-y_{t-1}$. Next, when you forecast $y_{t+h}=y_t+\su |
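A quick simulation sketch (my own illustration) of that variance growth for the simplest case, an ARIMA(0,1,0) random walk with $\sigma^2=1$, where the h-step forecast error variance is roughly $h\cdot\sigma^2$:
set.seed(1)
y <- cumsum(rnorm(300))               # random walk
fit <- arima(y, order = c(0, 1, 0))
fc <- predict(fit, n.ahead = 10)
fc$se^2                               # close to 1, 2, ..., 10: the intervals widen with the horizon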