idx | question | answer |
---|---|---|
48,601 | AR(2) model is causal | A famous theorem (Theorem 3.1.1 in Brockwell and Davis, Time Series: Theory and Methods) states that an ARMA($p$, $q$) process
$$\phi(B)X_t = \theta(B) W_t$$
is causal if and only if $\phi(z) \neq 0$ for all $z \in \mathbb{C}$ such that $\left|z\right|\leq 1$.
So in order for the AR($2$) process to be causal, the coefficients $\phi_1$ and $\phi_2$ must satisfy
$$1 - \phi_1 z - \phi_2 z^2 \neq 0$$
for all $\left|z\right| \leq 1$. It is not a causal process for all choices of $\phi_1, \phi_2$; for example, the condition fails for $\phi_1 = 2$, $\phi_2 = 0$.
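As a quick numerical check of this condition (a minimal sketch; the function name and the example coefficients are illustrative, not from the answer), one can simply inspect the roots of the AR polynomial:
import numpy as np

def is_causal_ar2(phi1, phi2):
    # roots of phi(z) = 1 - phi1*z - phi2*z^2 (np.roots expects the highest degree first)
    roots = np.roots([-phi2, -phi1, 1.0])
    return bool(np.all(np.abs(roots) > 1.0))   # causal iff every root lies outside the unit circle

print(is_causal_ar2(0.5, 0.3))   # True
print(is_causal_ar2(2.0, 0.0))   # False -- the counterexample given above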
|
48,602 | AR(2) model is causal | Your final equation leads to the MA representation of an AR process.
Pred[X(t)] = const + a1*W(t-1) + a2*W(t-2) + ... + an*W(t-n), reflecting how previous errors "cause" X.
All ARMA models can be presented as pure AR models (a weighted average of the past values)
or as pure MA models (a weighted average of the past errors).
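For a causal AR(2), the MA($\infty$) weights can be obtained from a simple recursion (a minimal sketch; the function name and coefficients are illustrative, not from the answer):
import numpy as np

def ar2_psi_weights(phi1, phi2, nweights=10):
    # MA(infinity) coefficients psi_j such that X_t = sum_j psi_j * W_{t-j}
    psi = np.zeros(nweights)
    psi[0] = 1.0
    psi[1] = phi1
    for j in range(2, nweights):
        psi[j] = phi1 * psi[j - 1] + phi2 * psi[j - 2]
    return psi

print(ar2_psi_weights(0.5, 0.3))   # the weights decay to zero because this AR(2) is causal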
|
48,603 | AR(2) model is causal | You write:
"I want to prove AR(2) model is causal."
That is simply not possible. AR and/or ARMA models are never causal. ARMA models were devised precisely to describe a process through its own past; they have a purely statistical meaning.
Causality is something that goes beyond a merely statistical relationship and involves more than one variable. If we are not aware of this, we easily conflate statistical association and causality. At most you could ask about Granger causality (an unfortunate name), but the univariate nature of ARMA rules out this possibility too.
For these reasons I disagree with the previous answers, which give you other information without warning about the causal meaning.
|
48,604 | AR(2) model is causal | AR(2) is causal if:
$$ \phi_1+\phi_2 < 1$$
and
$$ \phi_2 - \phi_1 < 1$$
and
$$ -1 < \phi_2 < 1$$
Under these conditions the roots of the equation $\phi(z)=0$ lie outside the unit circle, so the process is causal.
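These inequalities describe the same region as the root condition in the other answers; here is a small numerical check of that equivalence (a rough sketch with arbitrary sampling choices, not part of the original answer):
import numpy as np

def causal_by_roots(phi1, phi2):
    roots = np.roots([-phi2, -phi1, 1.0])            # phi(z) = 1 - phi1*z - phi2*z^2
    return bool(np.all(np.abs(roots) > 1.0))

def causal_by_inequalities(phi1, phi2):
    return (phi1 + phi2 < 1) and (phi2 - phi1 < 1) and (-1 < phi2 < 1)

rng = np.random.default_rng(0)
agree = True
for _ in range(2000):
    a, b = rng.uniform(-2, 2, size=2)
    # skip draws that land essentially on a boundary, where both checks are numerically fragile
    if min(abs(a + b - 1), abs(b - a - 1), abs(b - 1), abs(b + 1)) < 1e-6:
        continue
    agree = agree and (causal_by_roots(a, b) == causal_by_inequalities(a, b))
print(agree)   # expected: True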
|
48,605 | How do I include measurement errors in a Bernoulli experiment? | We can solve this by maximum likelihood. Let $X$ be a Bernoulli variable with success probability $p_0$. You do not observe $X$, but rather $Y$, which is $X$ "contaminated"; that is, we have
\begin{align}
\mathbb{P}(Y=1 | X=0)&= \epsilon_1 \\
\mathbb{P}(Y=0 | X=0)&= 1-\epsilon_1 \\
\mathbb{P}(Y=0 | X=1 )&= \epsilon_0 \\
\mathbb{P}(Y=1 | X=1 )&= 1-\epsilon_0
\end{align}
and we assume the "error probabilities" $\epsilon_1, \epsilon_0$ are known.
Now we can find, using conditional probability and the law of total probability, the distribution of $Y$. Calculate
$$
\mathbb{P}(Y=1) = \mathbb{P}(Y=1 | X=0) (1-p_0) +
\mathbb{P}(Y=1 | X=1)p_0 \\ = \epsilon_1 (1-p_0) + (1-\epsilon_0) p_0
$$
and this probability we denote by $p$. Then we observe $n$ independent copies of $Y$; the sum of those is $Z$, which has a binomial distribution with parameters $(n,p)$. We can estimate $p$ as usual by
$$
\hat{p}=Z/n
$$
and then find the maximum likelihood estimator of $p_0$ by solving
the equation
$$
\hat{p}=Z/n = \epsilon_1 (1-p_0) + (1-\epsilon_0) p_0
$$
giving
$$
\hat{p_0} = \frac{\hat{p}-\epsilon_1}{1-(\epsilon_0+\epsilon_1)}.
$$
Now an example: suppose that $\epsilon_0=\epsilon_1 = 0.05$, $n=100$ and we observe $Z=80$. Then we find
$$
\hat{p_0} = \frac{0.8-0.05}{1-0.1}= 0.83...
$$
If you want a confidence interval, just use your usual procedure to find a confidence interval for $p$, and then transform the confidence limits in the same way as we transformed the estimate above (but consider using a better confidence interval than the one you gave).
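A small numerical sketch of this calculation (plain Python; the simple Wald-type interval below is just one choice, and, as noted above, a better interval for $p$ can be substituted):
import numpy as np
from scipy import stats

eps0, eps1 = 0.05, 0.05          # assumed known error probabilities
n, Z = 100, 80                   # observed data from the example above

p_hat = Z / n
p0_hat = (p_hat - eps1) / (1 - (eps0 + eps1))
print(p0_hat)                    # 0.8333...

# a Wald interval for p, transformed to the p0 scale
se = np.sqrt(p_hat * (1 - p_hat) / n)
z = stats.norm.ppf(0.975)
lo, hi = p_hat - z * se, p_hat + z * se
print((lo - eps1) / (1 - (eps0 + eps1)), (hi - eps1) / (1 - (eps0 + eps1)))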
|
48,606 | How do I include measurement errors in a Bernoulli experiment? | I would start by writing out the likelihood for the data you actually have. The likelihood for $Y=0$ is
$$ \epsilon_0(1-p_0) + (1-\epsilon_0)p_0$$
The likelihood when $Y=1$ is
$$ \epsilon_1 p_0 + (1-\epsilon_1)(1-p_0)$$
The likelihood for the sample is the product of the above terms for the relevant numbers of 0's and 1's. So, if you saw $r$ 1's from a sample of $n$, you get
$$\left(\epsilon_1 p_0 + (1-\epsilon_1)(1-p_0)\right)^r\left(\epsilon_0(1-p_0) + (1-\epsilon_0)p_0\right)^{n-r}$$
Your estimate for $p_0$ is the MLE of this thing and you can get Wald confidence intervals from the second derivative. If you are using R, Ben Bolker's bbmle package can deliver estimates from a supplied likelihood function. I am assuming that the $\epsilon_i$ are known.
It may be possible to get a closed form expression for the variance of the MLE from this equation, but I haven't had my coffee yet this morning and I won't attempt it.
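bbmle is an R package; purely as an illustration of the same idea (numerical MLE plus a Wald interval from the second derivative), here is a rough sketch in Python that uses the likelihood exactly as written above, with made-up values for $n$, $r$ and the $\epsilon_i$:
import numpy as np
from scipy.optimize import minimize_scalar

eps0, eps1 = 0.05, 0.05          # assumed known (in whatever convention this answer uses)
n, r = 100, 30                   # hypothetical counts: r ones out of n

def negloglik(p0):
    lik1 = eps1 * p0 + (1 - eps1) * (1 - p0)      # per-observation likelihood for Y = 1, as written above
    lik0 = eps0 * (1 - p0) + (1 - eps0) * p0      # per-observation likelihood for Y = 0, as written above
    return -(r * np.log(lik1) + (n - r) * np.log(lik0))

p0_mle = minimize_scalar(negloglik, bounds=(1e-6, 1 - 1e-6), method="bounded").x

# Wald standard error from a numerical second derivative of the negative log-likelihood
h = 1e-5
d2 = (negloglik(p0_mle + h) - 2 * negloglik(p0_mle) + negloglik(p0_mle - h)) / h**2
se = 1 / np.sqrt(d2)
print(p0_mle, p0_mle - 1.96 * se, p0_mle + 1.96 * se)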
|
48,607 | Can neural network classify large images? | There have been convolutional networks for videos of $224 \times 224 \times 10$ (1), so yes, it is possible.
I would strongly suggest reducing the image size as much as possible and, at the same time, using non-fully-connected layers at the beginning, reducing the dimensionality of your optimisation problem.
Another approach that you could try is to use a sliding window as input instead of the whole image. This way you could take the features of the first layers of any pretrained ImageNet network, which would significantly decrease your training time. In case you are using Torch7 you can find them here (2).
In both cases, in order to train such convolutional nets you will need a lot of computational power and one (or several) very good GPU(s).
|
48,608 | Can neural network classify large images? | In principle, the only limiting factor to how large input sizes you can handle is the amount of memory on your GPU. Then of course, larger input sizes will take longer time to process.
EfficientNet uses an image size of 600x600 pixels in its largest setting, and Feature Pyramid Networks for Object Detection and Mask R-CNN, which perform object detection and semantic segmentation, respectively, resize the input image so that its scale (shorter edge) is 800 pixels.
There is an interesting trade-off between input size, network depth (the number of layers) and network width (the number of feature maps in a layer), which is the reason why you usually only use moderately large input sizes. The optimal balance between these parameters has been analyzed and exploited in EfficientNet, leading to a series of new convolutional neural networks (CNNs) with image classification performance superior to previous CNNs (see image).
|
48,609 | Distribution of "sample" mahalanobis distances | If $\{\pmb x_i\}_{i=1}^n$ is your data with $\pmb x_i\underset{\text{i.i.d.}}{\sim}\mathcal{N}_p(\pmb \mu,\pmb \varSigma)$, where $\pmb \mu\in\mathbb{R}^p$
and $\pmb \varSigma\succ0$ and we denote:
$$(\mbox{ave}\;\pmb x_i,\mbox{cov}\;\pmb x_i)$$
the usual Gaussian estimates of mean and covariance, then
$$d^2(\pmb x_i,\mbox{ave}\;\pmb x_i,\mbox{cov}\;\pmb x_i)=(\pmb x_i-\mbox{ave}\;\pmb x_i)^\top(\mbox{cov}\;\pmb x_i)^{-1}(\pmb x_i-\mbox{ave}\;\pmb x_i)$$
has distribution [0, p113][1, p562]:
$$d^2(\pmb x_i,\mbox{ave}\;\pmb x_i,\mbox{cov}\;\pmb x_i)\sim\frac{(n-1)^2}{n}\mbox{Beta}\left(p/2,(n-p-1)/2\right)$$
[0] Gnanadesikan, R. and Kettenring, J. (1972). Robust estimates, residuals, and outlier detection with multiresponse data. Biometrics, 28:81–124.
[1] Wilks, S. (1962). Mathematical Statistics. John Wiley.
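A quick Monte-Carlo check of this claim (a minimal sketch; the values of $n$, $p$ and the number of replications are arbitrary):
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, p, nrep = 50, 3, 1000
d2 = np.empty((nrep, n))
for k in range(nrep):
    x = rng.multivariate_normal(np.zeros(p), np.eye(p), size=n)
    xbar = x.mean(axis=0)
    S = np.cov(x, rowvar=False)                      # the usual (n-1)-denominator estimate
    d2[k] = np.einsum('ij,jk,ik->i', x - xbar, np.linalg.inv(S), x - xbar)

# compare empirical quantiles of n*d^2/(n-1)^2 with the Beta(p/2, (n-p-1)/2) quantiles
scaled = d2.ravel() * n / (n - 1)**2
print(np.quantile(scaled, [0.5, 0.9, 0.99]))
print(stats.beta(p / 2, (n - p - 1) / 2).ppf([0.5, 0.9, 0.99]))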
|
48,610 | Distribution of "sample" mahalanobis distances | If your estimate of $\Sigma$ is not too far off, it is approximately the Euclidean norm of a multivariate standard normal vector, i.e. approximately $\chi$ distributed.
To understand this, assume your estimate is perfect: $\hat S =\Sigma$. The math then should be straightforward, because you can essentially remove all the variance, and it is the same solution as for $\Sigma=I$.
For small sample sizes $n$, the results can be all over the place. The covariance estimate can be arbitrarily bad for small samples - just imagine your whole sample being the same point repeated $n$ times (in principle this could still happen as $n\rightarrow\infty$, but it becomes very unlikely).
|
48,611 | Definition of p-value in carets confusion matrix method | If you have a class imbalance, you might want to know if your model's accuracy is better than the proportion of the data with the majority class. So if you have two classes and 70% of your data are class #1, is an accuracy of 75% any better than the "non-information rate" of 70%.
confusionMatrix uses the binom.test function to test that the accuracy (a proportion) is better than the no-information rate. It is one-sided since you probably only care about being better than chance.
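Purely for illustration (this mirrors the description above, it is not caret's source code), the same kind of one-sided binomial test can be reproduced directly, here with SciPy and the hypothetical 75%-accuracy / 70%-NIR numbers from above:
from scipy.stats import binomtest

n_correct, n_total = 75, 100                 # hypothetical: 75% accuracy
nir = 0.70                                   # no-information rate (majority-class proportion)
result = binomtest(n_correct, n_total, p=nir, alternative="greater")
print(result.pvalue)                         # one-sided p-value for accuracy > NIR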
Max
|
48,612 | Hellinger transformation with relative data | The Hellinger transformation is defined as
$$ y^{\prime}_{ij} = \sqrt{\frac{y_{ij}}{y_{i.}}} $$
where $j$ indexes the species, $i$ the site/sample, and $y_{i.}$ is the row sum for the $i$th sample.
If your data are already of the form $\frac{y_{ij}}{y_{i.}}$, but you've only taken a subset of the species, then yes, you can just apply a square-root transformation to the data you are using, and it would be the same as if you'd done the entire Hellinger transformation on the full data set and then thrown out some of the species.
If you have a large number of taxa, in my experience I have found applying the Hellinger transformation (or just the square root to already proportional abundance data) to be an improvement over and above just analysing the % (or proportional) abundance data.
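A tiny worked example of the transformation (the abundance values below are made up):
import numpy as np

# toy site x species abundance matrix (rows = sites/samples)
Y = np.array([[10.,  0.,  5.],
              [ 2.,  8.,  0.],
              [ 1.,  1.,  1.]])

hellinger = np.sqrt(Y / Y.sum(axis=1, keepdims=True))
print(hellinger)

# if rows are already proportions (relative abundances), only the square root is needed
P = Y / Y.sum(axis=1, keepdims=True)
print(np.allclose(np.sqrt(P), hellinger))    # True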
|
48,613 | How does the mean function work for a Gaussian Process? | Your understanding is correct. There is apparently a mistake in the notes, and the equations should be
\begin{align}
m(x) &= E[f(x)], \\
k(x,x') &= E[(f(x)-m(x))(f(x')-m(x'))].
\end{align}
For reference, see Equation (2.13) on page 13 of C. E. Rasmussen & C. K. I. Williams, Gaussian Processes for Machine Learning, the MIT Press, 2006,
available online.
|
48,614 | How would I create a 95% confidence interval with log-transformed data? | In the same way that you would compute any other confidence interval:
Transform data to the log you want
Calculate the mean of the transformed data
Calculate the standard error of the transformed data
Compute the upper and lower bounds, with the chosen confidence level
I might add that you don't need the residuals (from a regression, I assume?) to be normal in order to calculate the confidence band, assuming you have a large sample.
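A small sketch of these steps (the data values are made up; note that exponentiating the limits gives an interval for the geometric mean on the original scale, which is a common but not universal choice):
import numpy as np
from scipy import stats

x = np.array([3.1, 4.7, 12.0, 5.5, 8.2, 2.9, 6.4, 9.8])    # positive data
logx = np.log(x)

mean = logx.mean()
se = logx.std(ddof=1) / np.sqrt(len(logx))
tcrit = stats.t.ppf(0.975, df=len(logx) - 1)
lo, hi = mean - tcrit * se, mean + tcrit * se
print(lo, hi)                      # 95% CI for the mean of the log-transformed data
print(np.exp(lo), np.exp(hi))      # back-transformed interval (geometric mean)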
|
48,615 | Similarities and dissimilarities in classical multidimensional scaling | These two books are in full agreement.
Classical multidimensional scaling (where by "classical MDS" I understand Torgerson's MDS, following both Hastie et al. and Borg & Groenen) finds points $z_i$ such that their scalar products $\langle z_i, z_j \rangle$ approximate a given similarity matrix as well as possible. However, any dissimilarity matrix can be converted into a similarity matrix: dissimilarities are assumed to be Euclidean distances, from which centered scalar products can be computed and taken as similarities.
So the algorithm of classical/Torgerson MDS is as follows: $$\text{Euclidean distances}\to\text{Centered scalar products}\to\text{Optimal mapping},$$ i.e. $$\text{Dissimilarities}\to\text{Similarities}\to\text{Optimal mapping}.$$ What you consider an "input" here does not really matter.
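This pipeline is short enough to write out directly (a minimal sketch, not taken from either book; the double-centering step is the standard Torgerson construction):
import numpy as np

def classical_mds(D, k=2):
    # distances -> centered scalar products -> eigen-mapping (Torgerson scaling)
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D**2) @ J                    # centered scalar products (similarities)
    evals, evecs = np.linalg.eigh(B)
    idx = np.argsort(evals)[::-1][:k]            # keep the k largest eigenvalues
    return evecs[:, idx] * np.sqrt(np.maximum(evals[idx], 0.0))

# sanity check on points whose Euclidean distances we know
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 2))
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
Z = classical_mds(D, k=2)
Dhat = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
print(np.allclose(D, Dhat))                      # True: the distances are reproduced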
This is exactly what is written in Hastie et al.:
In classical scaling, we instead [as opposed to metric scaling in general] start with similarities [...]. This is attractive because there is an explicit
solution in terms of eigenvectors [...]. If we have distances
rather than inner-products, we can convert them to centered inner-products
if the distances are Euclidean [...]. If the similarities are in fact centered inner-products, classical scaling is exactly equivalent to principal components [...]. Classical scaling is not equivalent to least squares scaling [that minimizes reconstruction of dissimilarities].
See my answer in What's the difference between principal components analysis and multidimensional scaling? for mathematical details.
|
48,616 | Prediction based on bayesian model | Generally, in a Bayesian model you make predictions on new data the same way as you do with non-Bayesian models. As your example is complicated, I will provide a simplified one to make things easier to illustrate. Say you want to estimate the linear regression model
$$ y_i = \beta_0 + \beta_1 x_i + \varepsilon_i $$
and based on the model you want to predict $y_\text{new}$ values given $x_\text{new}$ data. In this case you plug $x_\text{new}$ into the estimated model and JAGS samples $y_\text{new}$ values based on your model. The code would look similar to this:
model {
  # note: in JAGS, dnorm(mu, tau) is parameterized by the precision tau, not the standard deviation
  beta0 ~ dnorm(0, 10)
  beta1 ~ dnorm(0, 10)
  sigma ~ dunif(0, 50)
  # likelihood for the observed data
  for (i in 1:N) {
    y[i] ~ dnorm(beta0 + beta1 * x[i], sigma)
  }
  # posterior predictive draws for the new covariate values
  for (j in 1:Nnew) {
    ynew[j] ~ dnorm(beta0 + beta1 * xnew[j], sigma)
  }
}
where y, x and xnew are data vectors and ynew is a variable for storing predictions. What you get is a distribution of values that are plausible given your estimated model. Since the model is probabilistic, the prediction is also probabilistic, i.e. we get the whole distribution of possible ynew values. For point predictions take the average of ynew; you can also form prediction intervals by taking the highest-density intervals of the ynew values.
|
48,617 | Is Predicted R-squared a Valid Method for Rejecting Additional Explanatory Variables in a Model? | Predicted R squared would be no different than many other forms of cross-validation estimates of error (e.g., CV-MSE).
That said, R^2 isn't a great measure since R^2 will always increase with additional variables, regardless of whether that variable is meaningful. For example:
> x <- rnorm(100)
> y <- 1 * x + rnorm(100, 0, 0.25)
> z <- rnorm(100)
> summary(lm(y ~ x))$r.squared
[1] 0.9224326
> summary(lm(y ~ x + z))$r.squared
[1] 0.9273826
R^2 doesn't make a good measure of model quality because of that. Information based measures, like AIC and BIC, are better.
This is especially true in a time series application where you expect your error terms to be auto-correlated. You should probably be looking at a time series model (ARIMA would be a good place to start) with exogenous regressors to account for the auto-correlation. As is, your model is likely massively overstating the explained variance and inflating your R^2.
I'd strongly encourage you to look at time series modeling and AIC based measures of model fit.
EDIT: I wrote a little simulation to compute PRESS and the predicted R^2 for some simulated data and compared it against AIC.
sim <- function() {
  # simulate data where x matters and z is pure noise
  x <- rnorm(100)
  y <- 1 * x + rnorm(100, 0, .25)
  z <- rnorm(100)
  summary(lm(y[-1] ~ x[-1]))$r.squared
  summary(lm(y[-1] ~ x[-1] + z[-1]))$r.squared
  d <- rep(NA, 100)
  press1 <- press2 <- rep(NA, 100)
  # leave-one-out squared prediction errors (PRESS) for each model
  for (i in 1:100) {
    yt <- y[i]
    x2 <- x[-i]
    y2 <- y[-i]
    z2 <- z[-i]
    b1 <- coef(lm(y2[-1] ~ x2[-1]))
    b2 <- coef(lm(y2[-1] ~ x2[-1] + z2[-1]))
    press1[i] <- (yt - (b1) %*% c(1, x[i]))^2
    press2[i] <- (yt - (b2) %*% c(1, x[i], z[i]))^2
  }
  sst <- sum((y - mean(y))^2)
  p1 <- 1 - sum(press1)/sst   # predicted R^2, smaller model
  p2 <- 1 - sum(press2)/sst   # predicted R^2, larger model
  a1 <- AIC(lm(y[-1] ~ x[-1]))
  a2 <- AIC(lm(y[-1] ~ x[-1] + z[-1]))
  # TRUE means the criterion preferred the smaller (correct) model
  c(p1 >= p2, a1 <= a2)
}
sim()
x <- replicate(100, sim())
Both methods preferred the better model about 85% of the time. AIC has the benefit of a stronger theoretical basis and generalizes better to other methods (e.g., GLM, where R^2 is not defined).
The bigger issue here is using a linear model on something with likely autocorrelated errors (a time series).
Using a dataset (Seatbelts in R) to estimate the effect of a seatbelt law, when I use just a linear model and adjust for gas price and distance driven the law's effect is estimated as -11.89 with a standard error of 6.026.
If I account for the fact that the data is correlated with itself and estimate the law effect in the context of an ARIMA model, I estimate the law's effect as -20 and with a standard error of 7.9.
Because the linear model ignored the time series properties, the estimate was off by twofold and the standard error of the major variable of interest was underestimated. The same thing (but worse) happens with the gas price and distance variables.
|
48,618 | PyMC3 Implementation of Probabilistic Matrix Factorization (PMF): MAP produces all 0s | I did two things to fix your code. One was to initialize the model away from zero; the other was to use a non-gradient-based optimizer:
import pymc3 as pm
import numpy as np
import pandas as pd
import theano
import scipy as sp
data = pd.read_csv('jester-dense-subset-100x20.csv')
n, m = data.shape
test_size = m / 10
train_size = m - test_size
train = data.copy()
train.ix[:,train_size:] = np.nan # remove test set data
train[train.isnull()] = train.mean().mean() # mean value imputation
train = train.values
test = data.copy()
test.ix[:,:train_size] = np.nan # remove train set data
test = test.values
# Low precision reflects uncertainty; prevents overfitting
alpha_u = alpha_v = 1/np.var(train)
alpha = np.ones((n,m)) * 2 # fixed precision for likelihood function
dim = 10 # dimensionality
# Specify the model.
with pm.Model() as pmf:
    pmf_U = pm.MvNormal('U', mu=0, tau=alpha_u * np.eye(dim),
                        shape=(n, dim), testval=np.random.randn(n, dim)*.01)
    pmf_V = pm.MvNormal('V', mu=0, tau=alpha_v * np.eye(dim),
                        shape=(m, dim), testval=np.random.randn(m, dim)*.01)
    pmf_R = pm.Normal('R', mu=theano.tensor.dot(pmf_U, pmf_V.T),
                      tau=alpha, observed=train)
    # Find mode of posterior using optimization
    start = pm.find_MAP(fmin=sp.optimize.fmin_powell) # Find starting values by optimization
    step = pm.NUTS(scaling=start)
    trace = pm.sample(500, step, start=start)
This is an interesting model that would make a great contribution. Please consider adding this, once you're certain it works as desired, to the examples folder and do a pull request.
|
48,619 | Count data and heteroscedasticity | Q1 "why [do] count data tend to be heteroscedastic"?
If we want to model counts as random, then the Poisson distribution, which is heteroscedastic, provides a natural characterisation of what 'random counts' might usefully mean. Hence one way to ask why count data is heteroscedastic is to ask why count data might be Poisson distributed. For this there are various derivations e.g. the 'Law of Rare Events' discussed in the link.
Poisson is not the only characterisation of 'random counts' that is possible, of which more below.
Q2 "is heteroscedasticity...something that [I] should be concerned about in [a] [P]oisson model if [I'm] using [dependent] variable that is consider to be count data?"
If you are running a regression that assumes that your dependent variable is Poisson distributed with a mean that depends on some covariates, e.g. a Generalised Linear Model, then you are already taking into account the heteroscedasticity due to being Poisson. However...
Overdispersion
This kind of model assumes that once the covariates have determined the expected mean then the remaining variation in your data is Poisson. But if you have missed out some important variables (which most of us do, most of the time) then the true mean might still be different for different values of those unseen variables, even if the variables that are in the model are the same. This is referred to as overdispersion and is a distinct variance-related issue you will want to think about. (Actually this is only one of several mechanisms that generate overdispersion, but it's enough for now).
The solution is to model the extra variation explicitly: Negative Binomial regression models are one class of models that do that.
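A quick simulation of this mechanism (a minimal sketch; the Poisson mean and the Gamma parameters are arbitrary illustrative choices):
import numpy as np

rng = np.random.default_rng(0)
n = 100000

# pure Poisson counts: the variance equals the mean
y_pois = rng.poisson(lam=5.0, size=n)
print(y_pois.mean(), y_pois.var())           # both approximately 5

# unseen heterogeneity: the true mean itself varies (here Gamma-distributed),
# giving negative-binomial-like counts whose variance exceeds the mean (overdispersion)
lam = rng.gamma(shape=2.0, scale=2.5, size=n)    # E[lam] = 5
y_mixed = rng.poisson(lam=lam)
print(y_mixed.mean(), y_mixed.var())         # mean ~5, variance ~17.5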
If we want to model counts as random, then the Poisson distribution, which is heteroscedastic, provides a natural characterisation of what 'random | Count data and heteroscedasticity
Q1 "why [do] count data tend to be heteroscedastic"?
If we want to model counts as random, then the Poisson distribution, which is heteroscedastic, provides a natural characterisation of what 'random counts' might usefully mean. Hence one way to ask why count data is heteroscedastic is to ask why count data might be Poisson distributed. For this there are various derivations e.g. the 'Law of Rare Events' discussed in the link.
Poisson is not the only characterisation of 'random counts' that is possible, of which more below.
Q2 "is heteroscedasticity...something that [I] should be concerned about in [a] [P]oisson model if [I'm] using [dependent] variable that is consider to be count data?"
If you are running a regression that assumes that your dependent variable is Poisson distributed with a mean that depends on some covariates, e.g. a Generalised Linear Model, then you are already taking into account the heteroscedasticity due to being Poisson. However...
Overdispersion
This kind of model assumes that once the covariates have determined the expected mean then the remaining variation in your data is Poisson. But if you have missed out some important variables (which most of us do, most of the time) then the true mean might still be different for different values of those unseen variables, even if the variables that are in the model are the same. This is referred to as overdispersion and is a distinct variance-related issue you will want to think about. (Actually this is only one of several mechanisms that generates overdispersion, but it's enough for now).
The solution is to model the extra variation explicitly: Negative Binomial regression models are one class of models that do that. | Count data and heteroscedasticity
Q1 "why [do] count data tend to be heteroscedastic"?
If we want to model counts as random, then the Poisson distribution, which is heteroscedastic, provides a natural characterisation of what 'random |
48,620 | What is the significance of a linear dependency in a polynomial regression? | Recall from linear algebra that linearly dependent vectors are a set of vectors in which at least one can be expressed as a linear combination of the others. When performing regression, this creates problems because the matrix $X^TX$ is singular, so there is not a uniquely defined solution to estimating your regression coefficients. (The matrix $X^TX$ is important because of its role in estimating OLS regression coefficients $\hat{\beta}$: $\hat{\beta}=(X^TX)^{-1}X^Ty$.)
Linear dependence is a technical phenomenon distinct from the ordinary usage of "dependence" as you express it.
So with an understanding of linear dependence in hand, we can begin to examine likely sources of this problem.
(1) More features than rows: It's unclear how much data you have, but it's possible that by adding polynomial terms, you're inadvertently creating a dimensionality problem: an ordinary regression with more columns than observations will fail! This is because the system of equations that you've defined has infinitely many solutions, i.e. there is not a pivot point in every column. This amounts to the same problem as having a singular $X^TX$.
(2) Duplicate data: Even if you have more rows than features, it's important that each of those rows provides unique information. Duplicate rows are unhelpful in this context. By definition, the polynomial regression matrix $X$ (consisting of $[\mathbf{1} , x, x^2, ... x^m]$ as columns) is a Vandermonde matrix, so it will have a unique OLS solution provided that there are $m+1$ unique entries in your original $x$ vector. So despite having 100 observations, perhaps you have a smaller number of unique $x$ entries?
(3) Ill-conditioned matrix: Even if neither (1) nor (2) is true, it's possible that the matrix is numerically singular, i.e. singular due to machine precision reasons. There are a number of methods of dealing with this, depending on what kinds of compromises you're willing to make. These include orthogonal polynomials, regularization, and splines among others.
A general strategy to address an ill-conditioned matrix $X$ is called ridge regression, which works by finding an optimal amount of regularization to apply to your problem. Ridge regression is discussed all over this website. One place to start would be with this excellent answer.
AndyW points out that fitting very high-order polynomials is often ill-advised since it increases the risk of overfitting. In predictive settings, it's often advised that one use cross validation to assess the fitness of a given model. Depending on your application, you might care about different out-of-sample tests, but a typical metric for this type of problem is mean squared error.
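Point (3) is easy to see numerically: the condition number of $X^TX$ for a raw polynomial design matrix explodes as the degree grows (a minimal sketch; the data and the degrees are arbitrary):
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, size=100)

for degree in (2, 5, 10, 15):
    X = np.vander(x, degree + 1, increasing=True)   # columns 1, x, x^2, ..., x^degree
    print(degree, np.linalg.cond(X.T @ X))          # grows rapidly; eventually numerically singular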
|
48,621 | Computing Paired Samples (pre/post) Effect Size with Limited Information | Unfortunately, if that really is all the information you have, then there is no way to get either #1 or #2 -- one way or another you need to know (or be able to deduce) the correlation between pre-test and post-test scores.
|
48,622 | Computing Paired Samples (pre/post) Effect Size with Limited Information | Yes, as others have mentioned, you will need to know the correlation between pre- and post-test scores to calculate an effect size.
However, this correlation value can be imputed to obtain reasonable results, especially if you can draw upon previous research and/or have a strong theoretical rationale for the particular value. After an initial effect size estimate is calculated from the imputed correlation, sensitivity analyses (within a reasonable range of imputed values) should be conducted. If they result in similar final aggregate/omnibus estimates, you can (usually) have greater confidence in those initial estimates.
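A small sketch of such a sensitivity analysis (the summary statistics are hypothetical, and the change-score effect size dz is just one common choice):
import numpy as np

m_pre, sd_pre = 20.0, 5.0            # hypothetical pre-test mean and SD
m_post, sd_post = 24.0, 5.5          # hypothetical post-test mean and SD

for r in (0.3, 0.5, 0.7):            # a reasonable range of imputed pre/post correlations
    sd_diff = np.sqrt(sd_pre**2 + sd_post**2 - 2 * r * sd_pre * sd_post)
    dz = (m_post - m_pre) / sd_diff  # effect size based on the change scores
    print(r, round(dz, 3))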
|
48,623 | Computing Paired Samples (pre/post) Effect Size with Limited Information | I am also working with a similar meta-analysis.
SDd can be imputed by several methods.
1. Taking it from other studies. Use the maximum of the values observed in other studies.
2. If any of the other studies have reported r, use it. Base it on the maximum of the observed r values.
3. If any of the other studies in your meta-analysis has reported an SE, 95% CI, or p-value for the change, any of these can be used to derive your SD for the mean change.
Here is a good paper that addresses all these with formulas.
Fu R, Vandermeer BW, Shamliyan TA, O’Neil ME, Yazdi F, Fox SH,
Morton SC. Handling Continuous Outcomes in Quantitative Synthesis. Methods Guide for Comparative Effectiveness Reviews. | Computing Paired Samples (pre/post) Effect Size with Limited Information | I am also working with a similar meta-analysis.
SDd can be imputed by several methods.
1. Taking it from other studies. Use maximum of observed from other studies.
2. If any of other studies have repo | Computing Paired Samples (pre/post) Effect Size with Limited Information
I am also working with a similar meta-analysis.
SDd can be imputed by several methods.
1. Taking it from other studies. Use the maximum of the values observed in other studies.
2. If any of the other studies have reported r, use it. Base it on the maximum of the observed r values.
3. If any of the other studies in your meta-analysis has reported an SE, 95% CI, or p-value for the change, any of these can be used to derive your SD for the mean change.
Here is a good paper that addresses all these with formulas.
Fu R, Vandermeer BW, Shamliyan TA, O’Neil ME, Yazdi F, Fox SH,
Morton SC. Handling Continuous Outcomes in Quantitative Synthesis. Methods Guide for Comparative Effectiveness Reviews. | Computing Paired Samples (pre/post) Effect Size with Limited Information
I am also working with a similar meta-analysis.
SDd can be imputed by several methods.
1. Taking it from other studies. Use maximum of observed from other studies.
2. If any of other studies have repo |
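To make the imputation-plus-sensitivity idea from the answers above concrete: one common choice standardizes the mean change by the SD of the change scores, which is where the pre/post correlation enters. A small R sketch with invented summary statistics and an arbitrary range of imputed correlations:
d.paired <- function(m.pre, m.post, sd.pre, sd.post, r) {
  sd.diff <- sqrt(sd.pre^2 + sd.post^2 - 2 * r * sd.pre * sd.post)  # SD of the change scores
  (m.post - m.pre) / sd.diff                                        # standardized mean change
}
# hypothetical study: pre-test mean 50 (SD 10), post-test mean 55 (SD 11)
r.values <- c(0.3, 0.5, 0.7, 0.9)                                   # range of imputed correlations
setNames(sapply(r.values, function(r) d.paired(50, 55, 10, 11, r)), r.values)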
48,624 | Identifiability of the linear regression model: necessary and sufficient condition | Your "assume also" clause equates two quadratic forms in $\mathbb{R}^n$ (with $\mathrm{y}=(y_1,y_2,\ldots,y_n)$ the variable). Since any quadratic form is completely determined by its values at $1+n+\binom{n+1}{2}$ distinct points, their agreement at all points of $\mathbb{R}^n$ is far more than needed to conclude the two forms are identical, whence their coefficients must be the same.
The coefficients of $y_1^2$ are $1/\sigma^2$ and $1/\nu^2$, whence $\sigma=\pm \nu$. We always stipulate that $\sigma$ and $\nu$ are nonnegative, implying $\sigma=\nu$. (The "real" parameter should be considered to be $\sigma^2$ or $1/\sigma^2$ rather than $\sigma$ itself.)
The linear terms in $y_i$ are both proportional to $b_0+b_1 x_i = a_0 + a_1 x_i$. Letting $\mathrm{1} = (1,1,\ldots, 1)$ and $\mathrm{x} = (x_1, x_2, \ldots, x_n)$, we conclude
$$(a_0 - b_0)\mathrm{1} + (a_1 - b_1)\mathrm{x} = \mathrm{0}.$$
Thus either
$\mathrm{1}$ and $\mathrm{x}$ are linearly independent, which by definition implies both $a_0 = b_0$ and $a_1 = b_1$, or
$\mathrm{1}$ and $\mathrm{x}$ are linearly dependent, which means $x_1 = x_2 = \cdots = x_n = x$, say. In that case
If $x \ne 0$, $a_0 - b_0 = (a_1 - b_1) x$ determines one of $(a_0, a_1, b_0, b_1)$ in terms of the other three, or
Otherwise $a_0=b_0$ and $a_1$ and $b_1$ could have any values.
In case (1) all parameters are uniquely determined: this is the identifiable model. In case (2) $\sigma = \nu$ is identifiable no matter what and various linear combinations of $(a_0,a_1,b_0,b_1)$ can be identified.
Evidently, linear independence of $\mathrm{x}$ and $\mathrm{1}$ is both necessary and sufficient for identifiability.
This criterion easily generalizes to multiple regression, where the ordinary least squares model is identifiable if and only if the design matrix $X$ (whose columns are formed from $\mathrm{1}, \mathrm{x}$, and any other variables in any order) has full rank: that is, there is no linear dependence among its columns. | Identifiability of the linear regression model: necessary and sufficient condition | Your "assume also" clause equates two quadratic forms in $\mathbb{R}^n$ (with $\mathrm{y}=(y_1,y_2,\ldots,y_n)$ the variable). Since any quadratic form is completely determined by its values at $1+n+\ | Identifiability of the linear regression model: necessary and sufficient condition
Your "assume also" clause equates two quadratic forms in $\mathbb{R}^n$ (with $\mathrm{y}=(y_1,y_2,\ldots,y_n)$ the variable). Since any quadratic form is completely determined by its values at $1+n+\binom{n+1}{2}$ distinct points, their agreement at all points of $\mathbb{R}^n$ is far more than needed to conclude the two forms are identical, whence their coefficients must be the same.
The coefficients of $y_1^2$ are $1/\sigma^2$ and $1/\nu^2$, whence $\sigma=\pm \nu$. We always stipulate that $\sigma$ and $\nu$ are nonnegative, implying $\sigma=\nu$. (The "real" parameter should be considered to be $\sigma^2$ or $1/\sigma^2$ rather than $\sigma$ itself.)
The linear terms in $y_i$ are both proportional to $b_0+b_1 x_i = a_0 + a_1 x_i$. Letting $\mathrm{1} = (1,1,\ldots, 1)$ and $\mathrm{x} = (x_1, x_2, \ldots, x_n)$, we conclude
$$(a_0 - b_0)\mathrm{1} + (a_1 - b_1)\mathrm{x} = \mathrm{0}.$$
Thus either
$\mathrm{1}$ and $\mathrm{x}$ are linearly independent, which by definition implies both $a_0 = b_0$ and $a_1 = b_1$, or
$\mathrm{1}$ and $\mathrm{x}$ are linearly dependent, which means $x_1 = x_2 = \cdots = x_n = x$, say. In that case
If $x \ne 0$, $a_0 - b_0 = (a_1 - b_1) x$ determines one of $(a_0, a_1, b_0, b_1)$ in terms of the other three, or
Otherwise $a_0=b_0$ and $a_1$ and $b_1$ could have any values.
In case (1) all parameters are uniquely determined: this is the identifiable model. In case (2) $\sigma = \nu$ is identifiable no matter what and various linear combinations of $(a_0,a_1,b_0,b_1)$ can be identified.
Evidently, linear independence of $\mathrm{x}$ and $\mathrm{1}$ is both necessary and sufficient for identifiability.
This criterion easily generalizes to multiple regression, where the ordinary least squares model is identifiable if and only if the design matrix $X$ (whose columns are formed from $\mathrm{1}, \mathrm{x}$, and any other variables in any order) has full rank: that is, there is no linear dependence among its columns. | Identifiability of the linear regression model: necessary and sufficient condition
Your "assume also" clause equates two quadratic forms in $\mathbb{R}^n$ (with $\mathrm{y}=(y_1,y_2,\ldots,y_n)$ the variable). Since any quadratic form is completely determined by its values at $1+n+\ |
48,625 | Identifiability of the linear regression model: necessary and sufficient condition | Ok, I think I understand what you want. I don't think I can help you all the way, but this might provide a little help. You are right in terms of the equation above, as this demonstrates the quadratic nature of the cost function, which it turns out, is part of the proof. This is because a quadratic function will always have one unique maximum/minimum.
There is a proof in terms of matrices here (on page 43):
http://dept.stat.lsa.umich.edu/~kshedden/Courses/Stat600/Notes/least-squares.pdf
It hinges on the Hessian of second derivatives being positive definite, and the least squares/MLE cost function being quadratic.
The only condition on the x variables is that there is some variability in the sample. Otherwise, if (without loss of generality) $x_i=1$ for every $i$, then the above equations would simply be:
$\sum_{i=1}^n \frac{(y_i-b_0-b_1)^2}{\sigma^2}=\sum_{i=1}^n \frac{(y_i-a_0-a_1)^2}{\nu^2},$
which is satisfied by infinitely many combinations of $a_0,a_1$ and $b_0,b_1$.
Best,
Ben | Identifiability of the linear regression model: necessary and sufficient condition | Ok, I think I understand what you want. I don't think I can help you all the way, but this might provide a little help. You are right in terms of the equation above, as this demonstrates the quadratic | Identifiability of the linear regression model: necessary and sufficient condition
Ok, I think I understand what you want. I don't think I can help you all the way, but this might provide a little help. You are right in terms of the equation above, as this demonstrates the quadratic nature of the cost function, which it turns out, is part of the proof. This is because a quadratic function will always have one unique maximum/minimum.
There is a proof in terms of matrices here (on page 43):
http://dept.stat.lsa.umich.edu/~kshedden/Courses/Stat600/Notes/least-squares.pdf
It hinges on the Hessian of second derivatives being positive definite, and the least squares/MLE cost function being quadratic.
The only condition on the x variables is that there is some variability in the sample. Otherwise, if (without loss of generality) $x_i=1$ for every $i$, then the above equations would simply be:
$\sum_{i=1}^n \frac{(y_i-b_0-b_1)^2}{\sigma^2}=\sum_{i=1}^n \frac{(y_i-a_0-a_1)^2}{\nu^2},$
which is satisfied by infinitely many combinations of $a_0,a_1$ and $b_0,b_1$.
Best,
Ben | Identifiability of the linear regression model: necessary and sufficient condition
Ok, I think I understand what you want. I don't think I can help you all the way, but this might provide a little help. You are right in terms of the equation above, as this demonstrates the quadratic |
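A quick R illustration of the full-rank criterion discussed in the two answers above, on simulated data (all numbers arbitrary): a constant predictor is collinear with the intercept column, and lm() reports its coefficient as NA because the design matrix is rank deficient.
set.seed(1)
n <- 20
x1 <- rnorm(n)                   # varying predictor: 1 and x1 are linearly independent
x2 <- rep(3, n)                  # constant predictor: collinear with the intercept column
y <- 1 + 2 * x1 + rnorm(n)
coef(lm(y ~ x1))                 # intercept and slope both estimated
coef(lm(y ~ x2))                 # slope reported as NA: design matrix not of full rank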
48,626 | Making two vectors uncorrelated in terms of Kendall Tau correlation | Covariance is linear, so a linear projection can be used to zero it out.
Concordance is not linear, so a linear projection won't (in general) work to zero it out.
However, one can still come up with vectors which have zero Kendall correlation.
Specifically, if $\hat{\beta}^K$ is the slope estimate for the Theil-Sen regression of $y$ on $x$, then the Kendall correlation of $x$ and $r=y-\hat{\beta}^K x$ will be 0. | Making two vectors uncorrelated in terms of Kendall Tau correlation | Covariance is linear, so a linear projection can be used to zero it out.
Concordance is not linear, so a linear projection won't (in general) work to zero it out.
However, one can still come up with | Making two vectors uncorrelated in terms of Kendall Tau correlation
Covariance is linear, so a linear projection can be used to zero it out.
Concordance is not linear, so a linear projection won't (in general) work to zero it out.
However, one can still come up with vectors which have zero Kendall correlation.
Specifically, if $\hat{\beta}^K$ is the slope estimate for the Theil-Sen regression of $y$ on $x$, then the Kendall correlation of $x$ and $r=y-\hat{\beta}^K x$ will be 0. | Making two vectors uncorrelated in terms of Kendall Tau correlation
Covariance is linear, so a linear projection can be used to zero it out.
Concordance is not linear, so a linear projection won't (in general) work to zero it out.
However, one can still come up with |
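A small numerical check of this claim on simulated data (sample size and coefficients are arbitrary); the Theil–Sen slope is computed directly as the median of the pairwise slopes, and the Kendall correlation between x and the residuals comes out essentially zero (it is exactly zero only up to the step-function nature of tau):
set.seed(1)
n <- 80
x <- rnorm(n)
y <- 2 + 1.5 * x + rt(n, df = 3)                     # heavy-tailed noise
ij <- combn(n, 2)                                    # all pairs of observations
slopes <- (y[ij[2, ]] - y[ij[1, ]]) / (x[ij[2, ]] - x[ij[1, ]])
b.ts <- median(slopes)                               # Theil-Sen slope estimate
r <- y - b.ts * x
cor(x, y, method = "kendall")                        # clearly non-zero
cor(x, r, method = "kendall")                        # approximately zero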
48,627 | A framework for multi-valued categorical attributes | The most standard way of dealing with variables having an array of values is using dummy variables, i.e. creating a column for each possibility and assigning 0 or 1 depending on whether an attribute is absent or present, respectively.
See for example how to do it in Pandas (if you are using Python) and Generate a dummy-variable in R.
The good thing is that you can treat 0 and 1 as categorical (e.g. for decision trees or random forests) or numerical (for various regressions, k-nearest neighbors, principal component analysis, k-means, etc). Sometimes you need to convert all variables to numerical, even if there is only a single attribute per entry.
The bad thing is that if there are many options, either you need to restrict yourself to only the most common or perform some dimensional reduction with the principal component analysis.
The ugly thing is that even if you are using categorical-only variables, then you typically present single-valued variables with text/id, while multi-valued with dummy variables. | A framework for multi-valued categorical attributes | The most standard way of dealing with variables having an array of values is using dummy variables, i.e. creating a column for each possibility and assigning 0 and 1 depending if a n attribute is abse | A framework for multi-valued categorical attributes
The most standard way of dealing with variables having an array of values is using dummy variables, i.e. creating a column for each possibility and assigning 0 or 1 depending on whether an attribute is absent or present, respectively.
See for example how to do it in Pandas (if you are using Python) and Generate a dummy-variable in R.
The good thing is that you can treat 0 and 1 as categorical (e.g. for decision trees or random forests) or numerical (for various regressions, k-nearest neighbors, principal component analysis, k-means, etc). Sometimes you need to convert all variables to numerical, even if there is only a single attribute per entry.
The bad thing is that if there are many options, either you need to restrict yourself to only the most common or perform some dimensional reduction with the principal component analysis.
The ugly thing is that even if you are using categorical-only variables, then you typically present single-valued variables with text/id, while multi-valued with dummy variables. | A framework for multi-valued categorical attributes
The most standard way of dealing with variables having an array of values is using dummy variables, i.e. creating a column for each possibility and assigning 0 and 1 depending if a n attribute is abse |
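When the multi-valued attribute is stored as a delimited string, the dummy columns can be built by hand; a minimal R sketch with made-up data (the column names and the ";" delimiter are purely illustrative):
d <- data.frame(id = 1:4,
                tags = c("red;round", "red", "blue;round;heavy", "heavy"),
                stringsAsFactors = FALSE)
tag.list <- strsplit(d$tags, ";", fixed = TRUE)      # one character vector per row
all.tags <- sort(unique(unlist(tag.list)))
dummies <- t(sapply(tag.list, function(tg) as.integer(all.tags %in% tg)))
colnames(dummies) <- all.tags
cbind(d["id"], dummies)                              # one 0/1 column per possible attribute value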
48,628 | Do I get the nice asymptotic properties of MLE when I restrict the parameter space? | The nice properties stop working if the true value is on the boundary of your parameter space --- that, and certain regularity conditions on the likelihood itself. I believe that all you need is for the true value of the parameter to be within an open set of the parameter space. In your example, if the true value of $p$ is 0.10, then it's impossible with respect to your restricted parameter space, so of course everything will fail. But if it's an interior point of (.25,.75), then the mle will still be the usual $\hat{p}$ and the nice asymptotic properties will hold. And if $p=0.25$, you won't get the nice asymptotics either.
This is not a purely academic question. In mixed effects models, we often want to test if the random effect variance is 0, but under the null hypothesis that it is 0, the usual mle asymptotics no longer apply. | Do I get the nice asymptotic properties of MLE when I restrict the parameter space? | The nice properties stop working if the true value is on the boundary of your parameter space --- that, and certain regularity conditions on the likelihood itself. I believe that all you need is for | Do I get the nice asymptotic properties of MLE when I restrict the parameter space?
The nice properties stop working if the true value is on the boundary of your parameter space --- that, and certain regularity conditions on the likelihood itself. I believe that all you need is for the true value of the parameter to be within an open set of the parameter space. In your example, if the true value of $p$ is 0.10, then it's impossible with respect to your restricted parameter space, so of course everything will fail. But if it's an interior point of (.25,.75), then the mle will still be the usual $\hat{p}$ and the nice asymptotic properties will hold. And if $p=0.25$, you won't get the nice asymptotics either.
This is not a purely academic question. In mixed effects models, we often want to test if the random effect variance is 0, but under the null hypothesis that it is 0, the usual mle asymptotics no longer apply. | Do I get the nice asymptotic properties of MLE when I restrict the parameter space?
The nice properties stop working if the true value is on the boundary of your parameter space --- that, and certain regularity conditions on the likelihood itself. I believe that all you need is for |
48,629 | Can a forecast that reaches further into the future be less uncertain? | In ensemble forecast, a common technique in weather forecasting, some not-fully-known quantity at the present is varied, creating different initial condition for the forecasting models which result in variations in future forecasts. In that case, the band is not a statistical uncertainty band per se, it's the results of the most extreme models based on assumptions regarding unknown present conditions. Since the model can be complex and takes into account current conditions in other areas on the globe, the result for a particular day in the future may have higher variation than a day farther into future. | Can a forecast that reaches further into the future be less uncertain? | In ensemble forecast, a common technique in weather forecasting, some not-fully-known quantity at the present is varied, creating different initial condition for the forecasting models which result in | Can a forecast that reaches further into the future be less uncertain?
In ensemble forecasting, a technique commonly used in weather prediction, some not-fully-known quantity at the present is varied, creating different initial conditions for the forecasting models, which results in variations in the future forecasts. In that case, the band is not a statistical uncertainty band per se; it is the result of the most extreme models based on assumptions regarding unknown present conditions. Since the model can be complex and takes into account current conditions in other areas of the globe, the result for a particular day in the future may have higher variation than a day farther into the future. | Can a forecast that reaches further into the future be less uncertain?
In ensemble forecast, a common technique in weather forecasting, some not-fully-known quantity at the present is varied, creating different initial condition for the forecasting models which result in |
48,630 | Advantages of counterbalancing vs. randomizing stimuli | I think the pros of counterbalancing are basically their convenience for you. You set up two questionnaires and you're done. If you have many people using each list, you can add List as a factor and test to see if it has any effect.
The cons of counterbalancing are that there may be some effect of, say, $Q1$ and $Q3$ being in the same condition. That is the case now in both of your lists (they are both in $A$ in list 1, and both in $B$ in list 2). In fact there are many such possibilities (all evens in same condition, etc.). There are also possible order effects ($Q1$ always comes before $Q2$, etc.). It is possible to create a set of lists that counterbalances across all possibilities, but that is a lot of permutations. Randomizing makes all possibilities equally likely (in the long run) and thus marginalizes over (washes out) these possible effects. Presumably, these effects are not actually of interest to you (they are nuisance variables). If so, randomizing better controls for these issues. As such, randomization has a theoretical advantage. However, the effects may be quite small in reality; so small in fact, that this isn't something you really need to worry about. And randomization is likely to be a pain. | Advantages of counterbalancing vs. randomizing stimuli | I think the pros of counterbalancing are basically their convenience for you. You set up two questionnaires and you're done. If you have many people using each list, you can add List as a factor and | Advantages of counterbalancing vs. randomizing stimuli
I think the pros of counterbalancing are basically their convenience for you. You set up two questionnaires and you're done. If you have many people using each list, you can add List as a factor and test to see if it has any effect.
The cons of counterbalancing are that there may be some effect of, say, $Q1$ and $Q3$ being in the same condition. That is the case now in both of your lists (they are both in $A$ in list 1, and both in $B$ in list 2). In fact there are many such possibilities (all evens in same condition, etc.). There are also possible order effects ($Q1$ always comes before $Q2$, etc.). It is possible to create a set of lists that counterbalances across all possibilities, but that is a lot of permutations. Randomizing makes all possibilities equally likely (in the long run) and thus marginalizes over (washes out) these possible effects. Presumably, these effects are not actually of interest to you (they are nuisance variables). If so, randomizing better controls for these issues. As such, randomization has a theoretical advantage. However, the effects may be quite small in reality; so small in fact, that this isn't something you really need to worry about. And randomization is likely to be a pain. | Advantages of counterbalancing vs. randomizing stimuli
I think the pros of counterbalancing are basically their convenience for you. You set up two questionnaires and you're done. If you have many people using each list, you can add List as a factor and |
48,631 | Fitted model of linear spline regression in R | The coefficients have the usual interpretation, but for the B-spline basis functions, which you can generate for new data easily enough in R:
library(splines)                                   # provides bs()
bs(x, degree=1, knots=c(6,12,18)) -> x.bspline.bff # x = the original predictor values
new.x <- c(10.2, 11.8, 13, 30)
predict(x.bspline.bff, new.x)                      # basis-function values at the new points
Most software will have functions to generate these (e.g. SAS, Stata); should you need to do it yourself, a recursive procedure is given in Hastie et al. (2009), The Elements of Statistical Learning, Ch.5, "Appendix: Computational considerations for splines".
You could also use an equivalent reëxpression with truncated power functions, but in general that's not a good idea—there's a danger of numerical instability with higher order splines & interactions. See here for an example of exporting a spline function to Excel. | Fitted model of linear spline regression in R | The coefficients have the usual interpretation, but for the B-spline basis functions; which you can generate for new data easily enough in R :
bs(x, degree=1, knots=c(6,12,18)) -> x.bspline.bff
new.x | Fitted model of linear spline regression in R
The coefficients have the usual interpretation, but for the B-spline basis functions, which you can generate for new data easily enough in R:
library(splines)                                   # provides bs()
bs(x, degree=1, knots=c(6,12,18)) -> x.bspline.bff # x = the original predictor values
new.x <- c(10.2, 11.8, 13, 30)
predict(x.bspline.bff, new.x)                      # basis-function values at the new points
Most software will have functions to generate these (e.g. SAS, Stata); should you need to do it yourself, a recursive procedure is given in Hastie et al. (2009), The Elements of Statistical Learning, Ch.5, "Appendix: Computational considerations for splines".
You could also use an equivalent reëxpression with truncated power functions, but in general that's not a good idea—there's a danger of numerical instability with higher order splines & interactions. See here for an example of exporting a spline function to Excel. | Fitted model of linear spline regression in R
The coefficients have the usual interpretation, but for the B-spline basis functions; which you can generate for new data easily enough in R :
bs(x, degree=1, knots=c(6,12,18)) -> x.bspline.bff
new.x |
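Putting the pieces together, a minimal sketch of fitting and predicting from such a degree-1 B-spline model on simulated data (the knots 6, 12, 18 are kept from the code above; the data-generating curve and sample size are invented):
library(splines)
set.seed(1)
x <- runif(200, 0, 24)
y <- 3 + 0.8 * pmin(x, 6) - 0.4 * pmax(x - 12, 0) + rnorm(200, sd = 0.5)  # piecewise-linear truth
fit <- lm(y ~ bs(x, degree = 1, knots = c(6, 12, 18)))
summary(fit)$coefficients                     # coefficients of the B-spline basis functions
new.x <- c(10.2, 11.8, 13, 21)                # values outside range(x) would extrapolate beyond the boundary knots
predict(fit, newdata = data.frame(x = new.x)) # fitted spline evaluated at the new x values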
48,632 | What statistical test should I use to look at change in a binary outcome over time? | Two approaches that work in your case are:
Generalized Estimating Equation (GEE), as you indicated in above comment. That definitely works.
Generalized Linear Mixed Models (GLMM). Of course you would want to choose the logit link.
With the above approaches, you can easily incorporate the explanatory variables you wish to investigate into the model. I would not recommend a survival-type analysis, since you have just two time points and therefore not much time information.
As for coding the outcome, you can do it in the normal way, i.e., y=1 if adherent and y=0 if non-adherent. You will have a time factor with two levels, at 6 weeks or at 6 months, to take care of the correlated outcome measurements. That is, there are two observations associated with each subject ID. | What statistical test should I use to look at change in a binary outcome over time? | Two approaches that work in your case are:
Generalized Estimating Equation (GEE), as you indicated in above comment. That definitely works.
Generalized Linear Mixed Models (GLMM). Of course you would | What statistical test should I use to look at change in a binary outcome over time?
Two approaches that work in your case are:
Generalized Estimating Equation (GEE), as you indicated in above comment. That definitely works.
Generalized Linear Mixed Models (GLMM). Of course you would want to choose the logit link.
With the above approaches, you can easily incorporate the explanatory variables you wish to investigate into the model. I would not recommend a survival-type analysis, since you have just two time points and therefore not much time information.
As for coding the outcome, you can do it in the normal way, i.e., y=1 if adherent and y=0 if non-adherent. You will have a time factor with two levels, at 6 weeks or at 6 months, to take care of the correlated outcome measurements. That is, there are two observations associated with each subject ID. | What statistical test should I use to look at change in a binary outcome over time?
Two approaches that work in your case are:
Generalized Estimating Equation (GEE), as you indicated in above comment. That definitely works.
Generalized Linear Mixed Models (GLMM). Of course you would |
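In R, the two approaches might look like the sketch below, using the geepack and lme4 packages as one possible implementation; the data set, variable names, and effect sizes are all invented for illustration:
library(geepack)
library(lme4)
set.seed(1)
n.pat <- 120                                             # hypothetical number of patients
dat <- data.frame(id = rep(1:n.pat, each = 2),
                  time = factor(rep(c("6wk", "6mo"), n.pat), levels = c("6wk", "6mo")),
                  employed = rep(rbinom(n.pat, 1, 0.6), each = 2))
eta <- 1.2 - 0.8 * (dat$time == "6mo") + 0.5 * dat$employed + rep(rnorm(n.pat), each = 2)
dat$adherent <- rbinom(nrow(dat), 1, plogis(eta))        # invented adherence outcomes
# GEE: population-averaged effects, exchangeable working correlation within patient
gee.fit <- geeglm(adherent ~ time + employed, id = id, data = dat,
                  family = binomial, corstr = "exchangeable")
summary(gee.fit)
# GLMM: subject-specific effects via a random intercept per patient
glmm.fit <- glmer(adherent ~ time + employed + (1 | id), data = dat, family = binomial)
summary(glmm.fit)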
48,633 | What statistical test should I use to look at change in a binary outcome over time? | If you mean you have visits at 6w and 6mt, then you may be able to determine at what exact day the patients stopped taking their medication, meaning the best way would be survival analysis, with inadherance as "failure" event. Besides showing the Kaplan-Meier curves, you could use a Cox regression model to evaluate the effect of other variables on your outcome. If you have time-varying covariates, like employment, you can model these as such.
If, on the other hand, you only have the 2 timepoints at 6w and 6mt, you could go for a logistic regression model at each of these points, with inadherance as your outcome variable and your measured "risk factors" as explaining variables. | What statistical test should I use to look at change in a binary outcome over time? | If you mean you have visits at 6w and 6mt, then you may be able to determine at what exact day the patients stopped taking their medication, meaning the best way would be survival analysis, with inadh | What statistical test should I use to look at change in a binary outcome over time?
If you mean you have visits at 6 weeks and 6 months, then you may be able to determine on what exact day the patients stopped taking their medication, meaning the best way would be survival analysis, with non-adherence as the "failure" event. Besides showing the Kaplan-Meier curves, you could use a Cox regression model to evaluate the effect of other variables on your outcome. If you have time-varying covariates, like employment, you can model these as such.
If, on the other hand, you only have the two time points at 6 weeks and 6 months, you could go for a logistic regression model at each of these points, with non-adherence as your outcome variable and your measured "risk factors" as explanatory variables. | What statistical test should I use to look at change in a binary outcome over time?
If you mean you have visits at 6w and 6mt, then you may be able to determine at what exact day the patients stopped taking their medication, meaning the best way would be survival analysis, with inadh |
48,634 | KL divergence between a gamma distribution and a lognormal distribution? | Given: Let our $\text{Gamma}(k,\theta)$ random variable have pdf $f(x)$: $$f(x) = \frac{x^{k-1}e^{-x/\theta}}{\Gamma(k)\,\theta^{k}}, \quad x>0,$$
and let our $\text{Lognormal}(\mu, \sigma)$ random variable have pdf $g(x)$: $$g(x) = \frac{1}{x\,\sigma\sqrt{2\pi}}\exp\!\left(-\frac{(\log x-\mu)^2}{2\sigma^2}\right), \quad x>0.$$
Then, the Kullback-Leibler divergence between the true distribution $f$ and the Lognormal approximation $g$ is given by:
$$E_f\big[\log f(x)\big] - E_f\big[\log g(x)\big]$$
The first term $E_f\big[\log f(x)\big]$ is: $$P = (k-1)\big(\psi(k)+\log\theta\big) - k - \log\Gamma(k) - k\log\theta$$
and the second term $E_f\big[\log g(x)\big]$ is: $$Q = -\big(\psi(k)+\log\theta\big) - \log\sigma - \tfrac{1}{2}\log(2\pi) - \frac{\psi_1(k) + \big(\psi(k)+\log\theta-\mu\big)^2}{2\sigma^2}$$
The solution is $P-Q$, where $\psi$ and $\psi_1$ denote the digamma and trigamma functions.
Notes:
The Expect function is from the mathStatica package for Mathematica.
PolyGamma[n,z] denotes the $n^{th}$ derivative of the digamma function $\psi(z)=\frac{\Gamma'(z)}{\Gamma(z)}$ | KL divergence between a gamma distribution and a lognormal distribution? | Given: Let our $\text{Gamma}(k,\theta)$ random variable have pdf $f(x)$:
and let our $\text{Lognormal}(\mu, \sigma)$ random variable have pdf $g(x)$:
Then, the Kullback-Leibler divergence between th | KL divergence between a gamma distribution and a lognormal distribution?
Given: Let our $\text{Gamma}(k,\theta)$ random variable have pdf $f(x)$: $$f(x) = \frac{x^{k-1}e^{-x/\theta}}{\Gamma(k)\,\theta^{k}}, \quad x>0,$$
and let our $\text{Lognormal}(\mu, \sigma)$ random variable have pdf $g(x)$: $$g(x) = \frac{1}{x\,\sigma\sqrt{2\pi}}\exp\!\left(-\frac{(\log x-\mu)^2}{2\sigma^2}\right), \quad x>0.$$
Then, the Kullback-Leibler divergence between the true distribution $f$ and the Lognormal approximation $g$ is given by:
$$E_f\big[\log f(x)\big] - E_f\big[\log g(x)\big]$$
The first term $E_f\big[\log f(x)\big]$ is: $$P = (k-1)\big(\psi(k)+\log\theta\big) - k - \log\Gamma(k) - k\log\theta$$
and the second term $E_f\big[\log g(x)\big]$ is: $$Q = -\big(\psi(k)+\log\theta\big) - \log\sigma - \tfrac{1}{2}\log(2\pi) - \frac{\psi_1(k) + \big(\psi(k)+\log\theta-\mu\big)^2}{2\sigma^2}$$
The solution is $P-Q$, where $\psi$ and $\psi_1$ denote the digamma and trigamma functions.
Notes:
The Expect function is from the mathStatica package for Mathematica.
PolyGamma[n,z] denotes the $n^{th}$ derivative of the digamma function $\psi(z)=\frac{\Gamma'(z)}{\Gamma(z)}$ | KL divergence between a gamma distribution and a lognormal distribution?
Given: Let our $\text{Gamma}(k,\theta)$ random variable have pdf $f(x)$:
and let our $\text{Lognormal}(\mu, \sigma)$ random variable have pdf $g(x)$:
Then, the Kullback-Leibler divergence between th |
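The closed form above can be sanity-checked by Monte Carlo in R (the parameter values are arbitrary; digamma, trigamma and lgamma play the role of the PolyGamma and Gamma functions mentioned in the notes):
k <- 2.5; theta <- 1.3                                   # Gamma(k, theta), arbitrary values
mu <- 0.4; sigma <- 0.8                                  # Lognormal(mu, sigma), arbitrary values
P <- (k - 1) * (digamma(k) + log(theta)) - k - lgamma(k) - k * log(theta)
Q <- -(digamma(k) + log(theta)) - log(sigma) - 0.5 * log(2 * pi) -
  (trigamma(k) + (digamma(k) + log(theta) - mu)^2) / (2 * sigma^2)
P - Q                                                    # closed-form KL(f || g)
set.seed(1)
x <- rgamma(1e6, shape = k, scale = theta)
mean(dgamma(x, shape = k, scale = theta, log = TRUE) -
     dlnorm(x, meanlog = mu, sdlog = sigma, log = TRUE)) # Monte Carlo estimate of the same quantity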
48,635 | Is it possible to compare probabilities of 2 logistic different models? | Indeed, you cannot reliably compare across logit models with different underlying data. Without repeating what has been written before, this post has a very good answer (or see this paper).
In your case, combine the data from different days, and model this:
$answer=\alpha+\beta_1Tues+\beta_2Wed+\beta_3Thurs+\beta_4Fri+\beta_5Sat+\beta_6Sun$
You can do simple Wald tests or likelihood ratio tests to compare whether the coefficients for each day are statistically different. You may find, for example, that there is no statistical difference between Sat and Sun, in which case you could update your model:
$answer=\alpha+\beta_1Tues+\beta_2Wed+\beta_3Thurs+\beta_4Fri+\beta_5Weekend$
You can also estimate the marginal effects of each day, as odds ratios can be confusing or misleading depending on what you are really interested in.
If you have time of day, that can be a multiplying effect, which may moderate the day, though interpreting interaction terms in logit models can be confusing.
In addition, other variables may mediate the effect of the specific day - employment status, marital and parental status, etc. If you have these you may want to include them as controls. | Is it possible to compare probabilities of 2 logistic different models? | Indeed, you cannot reliably compare across logit models with different underlying data. Without repeating what has been written before, this post has a very good answer (or see this paper).
In your c | Is it possible to compare probabilities of 2 logistic different models?
Indeed, you cannot reliably compare across logit models with different underlying data. Without repeating what has been written before, this post has a very good answer (or see this paper).
In your case, combine the data from different days, and model this:
$answer=\alpha+\beta_1Tues+\beta_2Wed+\beta_3Thurs+\beta_4Fri+\beta_5Sat+\beta_6Sun$
You can do simple Wald tests or likelihood ratio tests to compare whether the coefficients for each day are statistically different. You may find, for example, that there is no statistical difference between Sat and Sun, in which case you could update your model:
$answer=\alpha+\beta_1Tues+\beta_2Wed+\beta_3Thurs+\beta_4Fri+\beta_5Weekend$
You can also estimate the marginal effects of each day, as odds ratios can be confusing or misleading depending on what you are really interested in.
If you have time of day, that can be a multiplying effect, which may moderate the day, though interpreting interaction terms in logit models can be confusing.
In addition, other variables may mediate the effect of the specific day - employment status, marital and parental status, etc. If you have these you may want to include them as controls. | Is it possible to compare probabilities of 2 logistic different models?
Indeed, you cannot reliably compare across logit models with different underlying data. Without repeating what has been written before, this post has a very good answer (or see this paper).
In your c |
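A sketch of the combined-data approach in R, on simulated data with an invented weekend-only effect (the point is the model comparison, not the particular numbers):
set.seed(1)
n <- 2000
days <- c("Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun")
day <- factor(sample(days, n, replace = TRUE), levels = days)
weekend <- day %in% c("Sat", "Sun")
answer <- rbinom(n, 1, plogis(-0.5 + 0.6 * weekend))      # only a weekend effect, by construction
fit.full <- glm(answer ~ day, family = binomial)          # one coefficient per day
fit.wkend <- glm(answer ~ weekend, family = binomial)     # weekdays pooled, weekend pooled
anova(fit.wkend, fit.full, test = "LRT")                  # do the individual days add anything?
exp(coef(fit.full))                                       # odds ratios relative to Monday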
48,636 | Question about posterior mean calibration | Later in that section, there is an example where the posterior mean using the inferential prior is larger than the posterior mean using the true prior, and this is said to be an example of positive miscalibration.
Therefore I think the intended definition of miscalibration is:
$$
\text{miscalibration} = \text{(posterior mean using inferential prior)} - \text{(posterior mean using true prior)}
$$ | Question about posterior mean calibration | Later in that section, there is an example where the posterior mean using the inferential prior is larger than the posterior mean using the true prior, and this is said to be an example of positive mi | Question about posterior mean calibration
Later in that section, there is an example where the posterior mean using the inferential prior is larger than the posterior mean using the true prior, and this is said to be an example of positive miscalibration.
Therefore I think the intended definition of miscalibration is:
$$
\text{miscalibration} = \text{(posterior mean using inferential prior)} - \text{(posterior mean using true prior)}
$$ | Question about posterior mean calibration
Later in that section, there is an example where the posterior mean using the inferential prior is larger than the posterior mean using the true prior, and this is said to be an example of positive mi |
48,637 | Q-Q plot and sample size | I think there is less here than meets the eye. You need to recognize that the appearance of these plots will bounce around with different data. I modified your code with:
set.seed(2501)
par(mfrow=c(3,3), pty="s")
And then ran the rest of your code three times. Here is the resulting plot:
Sometimes the distinction between the left and center plots is clear and sometimes it isn't. That's the way it goes. Data are information. More data give you more information (all else being equal), and it is easier to see / figure out what you want to know.
One thing that may help you is to explore the qqPlot function in the car package, which adds a 95% confidence band to the plot so that you can see how much a dataset might vary from the ideal form, and thus judge the deviations that you see in your observed data. Here it is with the last iteration of y:
Given the amount that 100 data can vary from the ideal, you just don't have enough information to reject the possibility of normality for these data (even though they were drawn from a $t$-distribution with 3 degrees of freedom). | Q-Q plot and sample size | I think there is less here than meets the eye. You need to recognize that the appearance of these plots will bounce around with different data. I modified your code with:
set.seed(2501)
par(mfrow= | Q-Q plot and sample size
I think there is less here than meets the eye. You need to recognize that the appearance of these plots will bounce around with different data. I modified your code with:
set.seed(2501)
par(mfrow=c(3,3), pty="s")
And then ran the rest of your code three times. Here is the resulting plot:
Sometimes the distinction between the left and center plots is clear and sometimes it isn't. That's the way it goes. Data are information. More data give you more information (all else being equal), and it is easier to see / figure out what you want to know.
One thing that may help you is to explore the qqPlot function in the car package, which adds a 95% confidence band to the plot so that you can see how much a dataset might vary from the ideal form, and thus judge the deviations that you see in your observed data. Here it is with the last iteration of y:
Given the amount that 100 data can vary from the ideal, you just don't have enough information to reject the possibility of normality for these data (even though they were drawn from a $t$-distribution with 3 degrees of freedom). | Q-Q plot and sample size
I think there is less here than meets the eye. You need to recognize that the appearance of these plots will bounce around with different data. I modified your code with:
set.seed(2501)
par(mfrow= |
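If you prefer to stay in base R rather than use car::qqPlot, a rough pointwise envelope can be simulated directly; a sketch (the t-distributed example sample and the 1000 replicates are arbitrary choices):
set.seed(2501)
n <- 100
y <- rt(n, df = 3)                                   # the "suspect" sample
qqnorm(y); qqline(y)
sims <- replicate(1000, sort(rnorm(n, mean(y), sd(y))))
band <- apply(sims, 1, quantile, probs = c(0.025, 0.975))
q <- qnorm(ppoints(n))
lines(q, band[1, ], lty = 2)                         # lower 95% pointwise limit under normality
lines(q, band[2, ], lty = 2)                         # upper limit; points outside suggest non-normality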
48,638 | Q-Q plot and sample size | I could think of at least two approaches to better diagnostics for a small sample size case:
To use a different scale for Q-Q plots in order to visually emphasize deviation from the normal line;
To augment visual diagnostics with analytical approach, as described, for example, here. | Q-Q plot and sample size | I could think of at least two approaches to better diagnostics for a small sample size case:
To use a different scale for Q-Q plots in order to visually emphasize deviation from the normal line;
To a | Q-Q plot and sample size
I could think of at least two approaches to better diagnostics for a small sample size case:
To use a different scale for Q-Q plots in order to visually emphasize deviation from the normal line;
To augment visual diagnostics with analytical approach, as described, for example, here. | Q-Q plot and sample size
I could think of at least two approaches to better diagnostics for a small sample size case:
To use a different scale for Q-Q plots in order to visually emphasize deviation from the normal line;
To a |
48,639 | Chance of me beating my friend in trivia | This question generalizes the famous Problem of Points whose consideration by Blaise Pascal and Pierre Fermat in the summer of 1654 is generally credited as the beginning of probability theory. The Problem of Points itself has been traced back to problems of insurance raised under 13th century (CE) Islamic contract law. It concerns the situation where each player has equal chances of $0.5$ to win.
Recursion is the answer--but it requires a nice trick to work. With you to start, your chances are about $12.4\%$, but if you go second they drop to $7.6\%$. An analysis and working code follow. The analysis is similar to that proposed by Fermat.
Fix $p=0.7$ and $q=0.85$. Let $f(m,n)$ be the chance you will win the game when you need $m$ questions to win, your opponent needs $n$, and it's your turn. Similarly, let $g(n,m)$ (notice the reversal of arguments!) be the chance your opponent will win when she needs $n$ questions to win, you need $m$, and it is her turn.
Obviously $f(0,n) = g(0,m) = 1$ whenever $n\gt 0$ and $m\gt 0$.
On your turn to play, either
With probability $p$ you give a correct answer. It is still your turn and your chances of winning have become $f(m-1,n)$.
With probability $1-p$ your answer is wrong. It is now your opponent's turn. Her chances of winning are $g(n,m)$, so your chances of winning are $1-g(n,m)$.
Therefore
$$f(m,n) = p f(m-1,n) + (1-p)(1 - g(n,m)).$$
There is a comparable relation for $g$,
$$g(n,m) = q g(n-1,m) + (1-q)(1 - f(m,n)).$$
Unfortunately, these relations do not suffice for a recursive solution. The problem is that the $g(n,m)$ at the end will be expressed in terms of $g(n-1,m)$ and $f(m,n)$--but that brings us right back where we were before.
The solution is to replace $g(n,m)$ in the preceding equation with its equivalent:
$$\eqalign{
f(m,n) &= p f(m-1,n) + (1-p)(1 - \color{blue}{g(n,m)}) \\
&= p f(m-1,n) + (1-p)(1 - (\color{blue}{q g(n-1,m) + (1-q)(1 - f(m,n))})) .
}$$
Isolating $f(m,n)$ yields
$$(1 - (1-p)(1-q))f(m,n) = p f(m-1,n) + q(1-p)\left(1 - g(n-1,m)\right).$$
Similarly
$$(1 - (1-p)(1-q))g(n,m) = q g(n-1,m) + p(1-q)\left(1 - f(m-1,n)\right).$$
Each lets us solve for $f$ or $g$ in terms of values of the other where $m+n$ has decreased by $1$. This will assuredly terminate with one of $m$ or $n$ equal to $0$ within $m+n-1$ moves. The algorithm, implemented as a dynamic program, requires $O(mn)$ time and space, making it practicable for $mn \lt 10^6$, more or less (where it will start taking around a minute in R or Mathematica, for instance).
With $m=n=20$, $p=0.7$, and $q=0.85$, we easily find
$$f(20,20) \approx 0.1238327668,\ g(20,20) \approx 0.9238111399.$$
The following is working R code.
f <- function(a, b, p, q, F, G) {
if (missing(F)) F <- matrix(NA, a+1, b+1)
if (missing(G)) G <- matrix(NA, b+1, a+1)
F[1, ] <- G[1, ] <- 1
d <- 1 - (1-p)*(1-q)
pp <- p / d; pq <- (1-p)*q / d
qq <- q / d; qp <- (1-q)*p / d
f <- function(m, n) {
x <- F[m+1, n+1]
if (is.na(x)) F[m+1, n+1] <<- x <- pp * f(m-1, n) + pq * (1 - g(n-1, m))
return (x)
}
g <- function(m, n) {
x <- G[m+1, n+1]
if (is.na(x)) G[m+1, n+1] <<- x <- qq * g(m-1, n) + qp * (1 - f(n-1, m))
return (x)
}
return (list(Value=f(a, b), F=F, G=G, a=a, b=b))
}
m <- n <- 20
p <- 0.70; q <- 0.85
x <- f(m, n, p, q)
y <- f(n, m, q, p, x$G, x$F) # Don't recalculate the stored arrays
cat("Your chances of winning (if you start) are", 100*x$Value, "%\n")
cat("If you do not start they are", 100*(1 - y$Value), "%\n") | Chance of me beating my friend in trivia | This question generalizes the famous Problem of Points whose consideration by Blaise Pascal and Pierre Fermat in the summer of 1654 is generally credited as the beginning of probability theory. The P | Chance of me beating my friend in trivia
This question generalizes the famous Problem of Points whose consideration by Blaise Pascal and Pierre Fermat in the summer of 1654 is generally credited as the beginning of probability theory. The Problem of Points itself has been traced back to problems of insurance raised under 13th century (CE) Islamic contract law. It concerns the situation where each player has equal chances of $0.5$ to win.
Recursion is the answer--but it requires a nice trick to work. With you to start, your chances are about $12.4\%$, but if you go second they drop to $7.6\%$. An analysis and working code follow. The analysis is similar to that proposed by Fermat.
Fix $p=0.7$ and $q=0.85$. Let $f(m,n)$ be the chance you will win the game when you need $m$ questions to win, your opponent needs $n$, and it's your turn. Similarly, let $g(n,m)$ (notice the reversal of arguments!) be the chance your opponent will win when she needs $n$ questions to win, you need $m$, and it is her turn.
Obviously $f(0,n) = g(0,m) = 1$ whenever $n\gt 0$ and $m\gt 0$.
On your turn to play, either
With probability $p$ you give a correct answer. It is still your turn and your chances of winning have become $f(m-1,n)$.
With probability $1-p$ your answer is wrong. It is now your opponent's turn. Her chances of winning are $g(n,m)$, so your chances of winning are $1-g(n,m)$.
Therefore
$$f(m,n) = p f(m-1,n) + (1-p)(1 - g(n,m)).$$
There is a comparable relation for $g$,
$$g(n,m) = q g(n-1,m) + (1-q)(1 - f(m,n)).$$
Unfortunately, these relations do not suffice for a recursive solution. The problem is that the $g(n,m)$ at the end will be expressed in terms of $g(n-1,m)$ and $f(m,n)$--but that brings us right back where we were before.
The solution is to replace $g(n,m)$ in the preceding equation with its equivalent:
$$\eqalign{
f(m,n) &= p f(m-1,n) + (1-p)(1 - \color{blue}{g(n,m)}) \\
&= p f(m-1,n) + (1-p)(1 - (\color{blue}{q g(n-1,m) + (1-q)(1 - f(m,n))})) .
}$$
Isolating $f(m,n)$ yields
$$(1 - (1-p)(1-q))f(m,n) = p f(m-1,n) + q(1-p)\left(1 - g(n-1,m)\right).$$
Similarly
$$(1 - (1-p)(1-q))g(n,m) = q g(n-1,m) + p(1-q)\left(1 - f(m-1,n)\right).$$
Each lets us solve for $f$ or $g$ in terms of values of the other where $m+n$ has decreased by $1$. This will assuredly terminate with one of $m$ or $n$ equal to $0$ within $m+n-1$ moves. The algorithm, implemented as a dynamic program, requires $O(mn)$ time and space, making it practicable for $mn \lt 10^6$, more or less (where it will start taking around a minute in R or Mathematica, for instance).
With $m=n=20$, $p=0.7$, and $q=0.85$, we easily find
$$f(20,20) \approx 0.1238327668,\ g(20,20) \approx 0.9238111399.$$
The following is working R code.
f <- function(a, b, p, q, F, G) {
if (missing(F)) F <- matrix(NA, a+1, b+1)
if (missing(G)) G <- matrix(NA, b+1, a+1)
F[1, ] <- G[1, ] <- 1
d <- 1 - (1-p)*(1-q)
pp <- p / d; pq <- (1-p)*q / d
qq <- q / d; qp <- (1-q)*p / d
f <- function(m, n) {
x <- F[m+1, n+1]
if (is.na(x)) F[m+1, n+1] <<- x <- pp * f(m-1, n) + pq * (1 - g(n-1, m))
return (x)
}
g <- function(m, n) {
x <- G[m+1, n+1]
if (is.na(x)) G[m+1, n+1] <<- x <- qq * g(m-1, n) + qp * (1 - f(n-1, m))
return (x)
}
return (list(Value=f(a, b), F=F, G=G, a=a, b=b))
}
m <- n <- 20
p <- 0.70; q <- 0.85
x <- f(m, n, p, q)
y <- f(n, m, q, p, x$G, x$F) # Don't recalculate the stored arrays
cat("Your chances of winning (if you start) are", 100*x$Value, "%\n")
cat("If you do not start they are", 100*(1 - y$Value), "%\n") | Chance of me beating my friend in trivia
This question generalizes the famous Problem of Points whose consideration by Blaise Pascal and Pierre Fermat in the summer of 1654 is generally credited as the beginning of probability theory. The P |
48,640 | Confused about 0 intercept in logistic regression in R | The issue is not specific to a GLM. It's an issue of treatment contrasts.
You should also look at the model with intercept:
set.seed(42)
y <- as.factor(sample(rep(1:2), 30, T))
x <- as.factor(sample(rep(1:2), 30, T))
z <- as.factor(sample(rep(1:2), 30, T))
fit0 <- glm(y ~ z + x, binomial)
predict(fit0, newdata=data.frame(z=factor(2), x=factor(1)))
coef(fit0)
#(Intercept) z2 x2
# -0.1151303 0.3228803 1.0588217
predict(fit0, newdata=data.frame(z=factor(2), x=factor(1)))
# 1
#0.20775
Here the intercept represents the group x1/z1 and the other group means are calculated by adding the coefficients of z2 and/or x2.
fit1 <- glm(y ~ z + x - 1, binomial)
coef(fit1)
# z1 z2 x2
#-0.1151303 0.2077500 1.0588217
predict(fit1, newdata=data.frame(z=factor(2), x=factor(1)))
# 1
#0.20775
Here the coefficient of z1 represents the group x1/z1 which is the same as the intercept in fit0. However, the coefficient of z2 represents the group x1/z2 instead of the difference between the group means. Note that 0.208 = -0.115 + 0.323. The x2/* group means are calculated by adding the x2 coefficient to the x1/* group means.
It should now be easy to understand why order matters here. | Confused about 0 intercept in logistic regression in R | The issue is not specific to a GLM. It's an issue of treatment contrasts.
You should also look at the model with intercept:
set.seed(42)
y <- as.factor(sample(rep(1:2), 30, T))
x <- as.factor(sample(r | Confused about 0 intercept in logistic regression in R
The issue is not specific to a GLM. It's an issue of treatment contrasts.
You should also look at the model with intercept:
set.seed(42)
y <- as.factor(sample(rep(1:2), 30, T))
x <- as.factor(sample(rep(1:2), 30, T))
z <- as.factor(sample(rep(1:2), 30, T))
fit0 <- glm(y ~ z + x, binomial)
predict(fit0, newdata=data.frame(z=factor(2), x=factor(1)))
coef(fit0)
#(Intercept) z2 x2
# -0.1151303 0.3228803 1.0588217
predict(fit0, newdata=data.frame(z=factor(2), x=factor(1)))
# 1
#0.20775
Here the intercept represents the group x1/z1 and the other group means are calculated by adding the coefficients of z2 and/or x2.
fit1 <- glm(y ~ z + x - 1, binomial)
coef(fit1)
# z1 z2 x2
#-0.1151303 0.2077500 1.0588217
predict(fit1, newdata=data.frame(z=factor(2), x=factor(1)))
# 1
#0.20775
Here the coefficient of z1 represents the group x1/z1 which is the same as the intercept in fit0. However, the coefficient of z2 represents the group x1/z2 instead of the difference between the group means. Note that 0.208 = -0.115 + 0.323. The x2/* group means are calculated by adding the x2 coefficient to the x1/* group means.
It should now be easy to understand why order matters here. | Confused about 0 intercept in logistic regression in R
The issue is not specific to a GLM. It's an issue of treatment contrasts.
You should also look at the model with intercept:
set.seed(42)
y <- as.factor(sample(rep(1:2), 30, T))
x <- as.factor(sample(r |
48,641 | What if a transformed variable yields more normal and less heteroskedastic residuals but lower $R^2$? | Simply put, you should not use a model that violates its assumptions just because it yields a higher $R^2$. So, you should use the transformed variable for your model. However, bear in mind that the square root is a non-linear transformation. In other words, if a straight line was most appropriate before the transformation, necessarily a straight line will not be the most appropriate fit afterwards. You should probably add a squared term, $x^2$, or something similar to compensate for the transformation. It is hard to diagnose this in the abstract, but you should look at plots of your data and your model both with and without the transformed $y$ to ensure the assumptions are met and the functional form is appropriate. | What if a transformed variable yields more normal and less heteroskedastic residuals but lower $R^2$ | Simply put, you should not use a model that violates its assumptions just because it yields a higher $R^2$. So, you should use the transformed variable for your model. However, bear in mind that the | What if a transformed variable yields more normal and less heteroskedastic residuals but lower $R^2$?
Simply put, you should not use a model that violates its assumptions just because it yields a higher $R^2$. So, you should use the transformed variable for your model. However, bear in mind that the square root is a non-linear transformation. In other words, if a straight line was most appropriate before the transformation, necessarily a straight line will not be the most appropriate fit afterwards. You should probably add a squared term, $x^2$, or something similar to compensate for the transformation. It is hard to diagnose this in the abstract, but you should look at plots of your data and your model both with and without the transformed $y$ to ensure the assumptions are met and the functional form is appropriate. | What if a transformed variable yields more normal and less heteroskedastic residuals but lower $R^2$
Simply put, you should not use a model that violates its assumptions just because it yields a higher $R^2$. So, you should use the transformed variable for your model. However, bear in mind that the |
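One way to check whether the square root (or some other power) is a sensible choice is a Box–Cox profile from the MASS package; a minimal sketch on simulated heteroskedastic data (all values invented):
library(MASS)
set.seed(1)
x <- runif(200, 1, 10)
y <- (2 + 0.8 * x + rnorm(200, sd = 0.4))^2          # spread of y grows with its mean
fit.raw <- lm(y ~ x)
boxcox(fit.raw, lambda = seq(-0.5, 1.5, 0.05))       # profile likelihood; peaks near lambda = 0.5
fit.sqrt <- lm(sqrt(y) ~ x)
par(mfrow = c(1, 2))
plot(fitted(fit.raw), resid(fit.raw), main = "raw y")       # funnel-shaped residuals
plot(fitted(fit.sqrt), resid(fit.sqrt), main = "sqrt(y)")   # roughly constant spread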
48,642 | Why is independence required for two- sample proportions z test? | All participants answered two questions. One question was answered correctly by 85% and the other question was answered correctly by 65%. I am interested in whether the proportion of correct answers is significantly larger for the first than the second question.
That would be a paired test.
Why is wrong to use a two-proportions z test in this case?
Because the independent-sample proportions test relies on ... independence. Specifically, the (normal approximation of the) distribution of the test statistic under the null hypothesis is computed on the basis that the observations are independent.
Does it also depend on the question one would like to answer with the statistical test?
No, at least not for any of the questions that occur to me.
What are the consequences of using the procedure nonetheless (e.g. will the significance values be systematically too high or low)?
If you do it with samples that are paired (and so positively correlated within the pairs), as in your example, then the variance of the difference in proportions will be different from what the independence assumption would suggest.
As a result, your true significance level will be larger than you chose it to be so you'll reject more often (much more often) than you should.
Below are the results of a simulation, first when the two columns are independent, and second when the variables are correlated (to get correlated binary variables I generated correlated standard normals with $\rho=0.6$ and dichotomized them by recording $1$ if they were less than 0.1**; the independent variables were created the same way but from independent normals).
** I chose a $p$ that was not exactly 1/2, in case there was any thought that $p$=1/2 might be a special case
These are 10000 simulations at n=100 for a two-tailed two sample proportions test (here done via a chi-square using R's default settings; the chi-square should be the square of the z-test done with the same settings). The true distribution of the test statistic is discrete and the chi-square (and the corresponding z-test) is approximate. The small spike in the left-side plot is due to that discreteness (and leads to mild conservatism in the test with independent proportions); ideally it should look uniform. In the right hand plot, correlated binaries (as described above) were used. There, about 98% of the tables generated had p-value <0.05. This is when the null hypothesis is true.
A small amount of effect might be tolerable, but this is quite dramatic. | Why is independence required for two- sample proportions z test? | All participants answered two questions. One question was answered correctly by 85% and the other question was answered correctly by 65%. I am interested in whether the proportion of correct answers i | Why is independence required for two- sample proportions z test?
All participants answered two questions. One question was answered correctly by 85% and the other question was answered correctly by 65%. I am interested in whether the proportion of correct answers is significantly larger for the first than the second question.
That would be a paired test.
Why is wrong to use a two-proportions z test in this case?
Because the independent-sample proportions test relies on ... independence. Specifically, the (normal approximation of the) distribution of the test statistic under the null hypothesis is computed on the basis that the observations are independent.
Does it also depend on the question one would like to answer with the statistical test?
No, at least not for any of the questions that occur to me.
What are the consequences of using the procedure nonetheless (e.g. will the significance values be systematically too high or low)?
If you do it with samples that are paired (and so positively correlated within the pairs), as in your example, then the variance of the difference in proportions will be different from what the independence assumption would suggest.
As a result, your true significance level will be larger than you chose it to be so you'll reject more often (much more often) than you should.
Below are the results of a simulation, first when the two columns are independent, and second when the variables are correlated (to get correlated binary variables I generated correlated standard normals with $\rho=0.6$ and dichotomized them by recording $1$ if they were less than 0.1**; the independent variables were created the same way but from independent normals).
** I chose a $p$ that was not exactly 1/2, in case there was any thought that $p$=1/2 might be a special case
These are 10000 simulations at n=100 for a two-tailed two sample proportions test (here done via a chi-square using R's default settings; the chi-square should be the square of the z-test done with the same settings). The true distribution of the test statistic is discrete and the chi-square (and the corresponding z-test) is approximate. The small spike in the left-side plot is due to that discreteness (and leads to mild conservatism in the test with independent proportions); ideally it should look uniform. In the right hand plot, correlated binaries (as described above) were used. There, about 98% of the tables generated had p-value <0.05. This is when the null hypothesis is true.
A small amount of effect might be tolerable, but this is quite dramatic. | Why is independence required for two- sample proportions z test?
All participants answered two questions. One question was answered correctly by 85% and the other question was answered correctly by 65%. I am interested in whether the proportion of correct answers i |
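For completeness, the paired analysis itself reduces to a McNemar-type test on the discordant pairs; a small R sketch with hypothetical counts chosen to match the 85% and 65% figures (out of 200 participants: 120 answered both correctly, 50 only the first, 10 only the second, 20 neither):
# rows: question 1 correct/wrong; columns: question 2 correct/wrong
tab <- matrix(c(120, 50,
                 10, 20),
              nrow = 2, byrow = TRUE,
              dimnames = list(Q1 = c("correct", "wrong"),
                              Q2 = c("correct", "wrong")))
addmargins(tab)          # margins: 170/200 = 85% correct on Q1, 130/200 = 65% on Q2
mcnemar.test(tab)        # paired test: only the 50 vs 10 discordant pairs carry information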
48,643 | Pre Window Length Selection with Difference-In-Differences | This paper by Chabé-Ferret (2010) may be interesting in this context. He provides different scenarios under which a DID estimator using pre-post-treatment pairs with equal time distance to the treatment is consistent while using just the most recent pre-treatment period is not consistent. His framework is somewhat restrictive though and also he tries to answer a different question. In terms of the literature this will be the closest related to your question which, as it stands, has not yet been looked at in particular (afaik).
Other papers like Slaughter (2001) play around with the pre-treatment window as a robustness check. In his case the results don't change when he lengthens or shortens the pre-treatment period used in the analysis. Unfortunately few people provide such evidence in their studies.
To my knowledge there is no paper yet that considers the optimal pre-treatment window in DID analysis. I would assume that this hasn't been looked at so far because many papers use micro data which usually do not come with large numbers of time periods to begin with. The other reason might be that as long as treatment and control groups exhibit parallel trends before the treatment the only effect from changing the pre-treatment window on the estimated treatment parameter should come from negligible sampling variation. If it is not negligible one might as well question whether a DID makes sense to begin with. Nonetheless this is an important question and one that is fairly underexplored (at least in the econometrics literature). | Pre Window Length Selection with Difference-In-Differences | This paper by Chabé-Ferret (2010) may be interesting in this context. He provides different scenarios under which a DID estimator using pre-post-treatment pairs with equal time distance to the treatme | Pre Window Length Selection with Difference-In-Differences
This paper by Chabé-Ferret (2010) may be interesting in this context. He provides different scenarios under which a DID estimator using pre-post-treatment pairs with equal time distance to the treatment is consistent while using just the most recent pre-treatment period is not consistent. His framework is somewhat restrictive though and also he tries to answer a different question. In terms of the literature this will be the closest related to your question which, as it stands, has not yet been looked at in particular (afaik).
Other papers like Slaughter (2001) play around with the pre-treatment window as a robustness check. In his case the results don't change when he lengthens or shortens the pre-treatment period used in the analysis. Unfortunately few people provide such evidence in their studies.
To my knowledge there is no paper yet that considers the optimal pre-treatment window in DID analysis. I would assume that this hasn't been looked at so far because many papers use micro data which usually do not come with large numbers of time periods to begin with. The other reason might be that as long as treatment and control groups exhibit parallel trends before the treatment the only effect from changing the pre-treatment window on the estimated treatment parameter should come from negligible sampling variation. If it is not negligible one might as well question whether a DID makes sense to begin with. Nonetheless this is an important question and one that is fairly underexplored (at least in the econometrics literature). | Pre Window Length Selection with Difference-In-Differences
This paper by Chabé-Ferret (2010) may be interesting in this context. He provides different scenarios under which a DID estimator using pre-post-treatment pairs with equal time distance to the treatme |
48,644 | What measure of effect size in ANOVA has mode at zero under the null (unlike $\eta^2$ that does not)? | $\eta^2$ is the same as $R^2$ in a one-way ANOVA. It is bounded by $[0,\ 1]$. When the null hypothesis holds, the true value of $\eta^2$ is $0$. So the estimator $SSB/SST$ must be biased unless either it can only return $0$ when the null hypothesis is true, or if half its distribution is $<0$. Since it cannot be $<0$, and it can yield non-zero values, even when the null obtains, it must be biased. On the other hand, it is consistent, in the sense that $\eta^2\rightarrow 0$ as $N$ goes to infinity when the null holds. | What measure of effect size in ANOVA has mode at zero under the null (unlike $\eta^2$ that does not) | $\eta^2$ is the same as $R^2$ in a one-way ANOVA. It is bounded by $[0,\ 1]$. When the null hypothesis holds, the true value of $\eta^2$ is $0$. So the estimator $SSB/SST$ must be biased unless eit | What measure of effect size in ANOVA has mode at zero under the null (unlike $\eta^2$ that does not)?
$\eta^2$ is the same as $R^2$ in a one-way ANOVA. It is bounded by $[0,\ 1]$. When the null hypothesis holds, the true value of $\eta^2$ is $0$. So the estimator $SSB/SST$ must be biased unless either it can only return $0$ when the null hypothesis is true, or if half its distribution is $<0$. Since it cannot be $<0$, and it can yield non-zero values, even when the null obtains, it must be biased. On the other hand, it is consistent, in the sense that $\eta^2\rightarrow 0$ as $N$ goes to infinity when the null holds. | What measure of effect size in ANOVA has mode at zero under the null (unlike $\eta^2$ that does not)
$\eta^2$ is the same as $R^2$ in a one-way ANOVA. It is bounded by $[0,\ 1]$. When the null hypothesis holds, the true value of $\eta^2$ is $0$. So the estimator $SSB/SST$ must be biased unless eit |
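As a quick numerical illustration of that bias under the null (a hedged sketch; the choice of 4 groups of 20 observations is arbitrary):
set.seed(1)
k <- 4; n <- 20                           # k groups, all with the same true mean
eta2 <- replicate(5000, {
  y <- rnorm(k * n)
  g <- gl(k, n)
  ss <- anova(lm(y ~ g))[["Sum Sq"]]
  ss[1] / sum(ss)                         # estimated eta^2 = SSB / SST
})
mean(eta2)    # clearly above 0 even though the true eta^2 is 0
The average estimate shrinks toward zero as the total sample size grows, which is the consistency point made above.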
48,645 | Is the converse of this statement true? | Here is my understanding of the terminology. $\mathcal{X}$ is the set of all distributions on the real line. For $F\in\mathcal{X}$, $\mu\in\mathbb{R}$, and $\sigma\in\mathbb{R}\setminus\{0\}$, define a transformation from $\mathcal{X}$ to $\mathcal{X}$ via
$$(T_{\mu,\sigma}(F))(x) = F((x-\mu)/\sigma)$$
for all $x\in \mathbb R$. (This is the action of the affine group of the real line induced on the set of measures on the real line.) A functional S is a map $S:\mathcal{X}\to \mathbb{R}$. It is invariant when
$$S[T_{\mu,\sigma}(F)] = S[F]$$
for all $F$, $\mu$, and $\sigma$, and it is equivariant when
$$S[T_{\mu,\sigma}(F)] = |\sigma|S[F]$$
for all nondegenerate $F$, $\mu$, and $\sigma$.
The idea is to find a natural location and scale for all relevant distributions. Here is one way.
For all $F$ and $0\lt q \lt 1$, the set $$\{x\in \mathbb{R}\,|\, F(x)\ge q\}$$ is nonempty and must have a lower bound, whence it has a greatest lower bound $F_{[q]}$. Define $$m(F) = F_{[1/2]}.$$ The set $$\{x\in \mathbb{R}_{+}\,|\, F(m(F)+x) - F(m(F)-x)\ge q\}$$ is bounded below by $0$ and nonempty, whence it must have a greatest lower bound $F^\prime_{[q]}$. Moreover, for $q$ sufficiently large, this glb must be strictly positive provided $F$ is nondegenerate. The set of such $q$ for which $F^\prime_{[q]} \gt 0$ has a greatest lower bound $q^\prime_F$. Define $$s(F) = F^\prime_{[(1-q)/2]}.$$
It follows that $m(F)$ (the location) and $s(F)$ (the scale) are well-defined and $s(F)\gt 0$.
It is straightforward to check that
$$m(T_{\mu,\sigma}(F)) = m(F) + \mu$$
and
$$s(T_{\mu,\sigma}(F)) = s(F)|\sigma|$$
for $\sigma\ne 0$ (which makes $s$ an equivariant functional). Choosing $\mu=-m(F)$ and $\sigma = 1/s(F)$ yields a transformation
$$Z: F \to F^{0} = T_{-m(F),1/s(F)} (F)$$
which is well-defined on all non-degenerate distributions. $F^{0}$ is the standardized version of $F$ and
$$F = T_{m(F),s(F)}(F^{0}).$$
In other words, every nondegenerate distribution is a shifted and scaled version of its standardized version.
Here is one solution. Let $S$ be any invariant functional. It is immediate that
$$S_1[F] = s(F)S[F^{0}]$$
defines an equivariant functional, exhibiting
$$S = \frac{S_1}{s}$$
explicitly as the ratio of two equivariant functionals, because $S[F] = S[F^{0}]$. | Is the converse of this statement true? | Here is my understanding of the terminology. $\mathcal{X}$ is the set of all distributions on the real line. For $F\in\mathcal{X}$, $\mu\in\mathbb{R}$,and $\sigma\in\mathbb{R}-{0}$, define a transfo | Is the converse of this statement true?
Here is my understanding of the terminology. $\mathcal{X}$ is the set of all distributions on the real line. For $F\in\mathcal{X}$, $\mu\in\mathbb{R}$, and $\sigma\in\mathbb{R}\setminus\{0\}$, define a transformation from $\mathcal{X}$ to $\mathcal{X}$ via
$$(T_{\mu,\sigma}(F))(x) = F((x-\mu)/\sigma)$$
for all $x\in \mathbb R$. (This is the action of the affine group of the real line induced on the set of measures on the real line.) A functional S is a map $S:\mathcal{X}\to \mathbb{R}$. It is invariant when
$$S[T_{\mu,\sigma}(F)] = S[F]$$
for all $F$, $\mu$, and $\sigma$, and it is equivariant when
$$S[T_{\mu,\sigma}(F)] = |\sigma|S[F]$$
for all nondegenerate $F$, $\mu$, and $\sigma$.
The idea is to find a natural location and scale for all relevant distributions. Here is one way.
For all $F$ and $0\lt q \lt 1$, the set $$\{x\in \mathbb{R}\,|\, F(x)\ge q\}$$ is nonempty and must have a lower bound, whence it has a greatest lower bound $F_{[q]}$. Define $$m(F) = F_{[1/2]}.$$ The set $$\{x\in \mathbb{R}_{+}\,|\, F(m(F)+x) - F(m(F)-x)\ge q\}$$ is bounded below by $0$ and nonempty, whence it must have a greatest lower bound $F^\prime_{[q]}$. Moreover, for $q$ sufficiently large, this glb must be strictly positive provided $F$ is nondegenerate. The set of such $q$ for which $F^\prime_{[q]} \gt 0$ has a greatest lower bound $q^\prime_F$. Define $$s(F) = F^\prime_{[(1-q)/2]}.$$
It follows that $m(F)$ (the location) and $s(F)$ (the scale) are well-defined and $s(F)\gt 0$.
It is straightforward to check that
$$m(T_{\mu,\sigma}(F)) = m(F) + \mu$$
and
$$s(T_{\mu,\sigma}(F)) = s(F)|\sigma|$$
for $\sigma\ne 0$ (which makes $s$ an equivariant functional). Choosing $\mu=-m(F)$ and $\sigma = 1/s(F)$ yields a transformation
$$Z: F \to F^{0} = T_{-m(F),1/s(F)} (F)$$
which is well-defined on all non-degenerate distributions. $F^{0}$ is the standardized version of $F$ and
$$F = T_{m(F),s(F)}(F^{0}).$$
In other words, every nondegenerate distribution is a shifted and scaled version of its standardized version.
Here is one solution. Let $S$ be any invariant functional. It is immediate that
$$S_1[F] = s(F)S[F^{0}]$$
defines an equivariant functional, exhibiting
$$S = \frac{S_1}{s}$$
explicitly as the ratio of two equivariant functionals, because $S[F] = S[F^{0}]$. | Is the converse of this statement true?
Here is my understanding of the terminology. $\mathcal{X}$ is the set of all distributions on the real line. For $F\in\mathcal{X}$, $\mu\in\mathbb{R}$,and $\sigma\in\mathbb{R}-{0}$, define a transfo |
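As a concrete numerical sanity check of the invariance/equivariance definitions above (using the sample MAD and IQR, both of which are location-scale equivariant, so their ratio is invariant; this is only an illustration, not part of the argument):
set.seed(1)
x <- rexp(1e5)                        # any sample standing in for draws from F
y <- 3 - 2 * x                        # an affine image: mu = 3, sigma = -2
c(mad(y), abs(-2) * mad(x))           # equal: the MAD is equivariant
c(IQR(y) / mad(y), IQR(x) / mad(x))   # essentially equal: the ratio is invariant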
48,646 | Clarifications about probit and logit models | Let me start with a couple of persnickety details: We usually refer to the link function as being applied to the LHS, and the inverse of the link function being applied to the RHS. Thus, it would be better to write: $Prob(y=1|x)=G^{-1}(\beta_0+x\beta)$. Second, if the probability that y=1 is 50%, then the probability that y=0 must also be 50%, so it's best to leave that out.
Yes, in the case of both the logit and probit link functions, when the linear predictor, $z$, sums to 0, the predicted probability that y=1 is $.50$. However, this is a little bit tricky. People usually talk about what happens when x=0, in which case you have the predicted probability of y=1 being $g^{-1}(\beta_0)$, which is only $50\%$ if $\hat\beta_0=0$.
It isn't quite right that the probabilities are spread over a wider range with the logit than the probit. They both range $(0,\ 1)$. Instead, they differ in the rate of change in predicted probabilities as they approach the bounds and 'turn the corner'. I think the main issue that may be causing you difficulty is that the fitted values of $\hat\beta_1$ will differ depending on whether you use the logit or the probit. The slope with the logit link will be larger than the slope with the probit link. Thus, what looks like a large difference in your plot will mostly disappear. | Clarifications about probit and logit models | Let me start with a couple of persnickety details: We usually refer to the link function as being applied to the LHS, and the inverse of the link function being applied to the RHS. Thus, it would be | Clarifications about probit and logit models
Let me start with a couple of persnickety details: We usually refer to the link function as being applied to the LHS, and the inverse of the link function being applied to the RHS. Thus, it would be better to write: $Prob(y=1|x)=G^{-1}(\beta_0+x\beta)$. Second, if the probability that y=1 is 50%, then the probability that y=0 must also be 50%, so it's best to leave that out.
Yes, in the case of both the logit and probit link functions, when the linear predictor, $z$, sums to 0, the predicted probability that y=1 is $.50$. However, this is a little bit tricky. People usually talk about what happens when x=0, in which case you have the predicted probability of y=1 being $g^{-1}(\beta_0)$, which is only $50\%$ if $\hat\beta_0=0$.
It isn't quite right that the probabilities are spread over a wider range with the logit than the probit. They both range $(0,\ 1)$. Instead, they differ in the rate of change in predicted probabilities as they approach the bounds and 'turn the corner'. I think the main issue that may be causing you difficulty is that the fitted values of $\hat\beta_1$ will differ depending on whether you use the logit or the probit. The slope with the logit link will be larger than the slope with the probit link. Thus, what looks like a large difference in your plot will mostly disappear. | Clarifications about probit and logit models
Let me start with a couple of persnickety details: We usually refer to the link function as being applied to the LHS, and the inverse of the link function being applied to the RHS. Thus, it would be |
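A short R sketch of that last point about the fitted slopes (the simulated sample size and true coefficients are arbitrary):
set.seed(1)
x <- rnorm(500)
y <- rbinom(500, 1, plogis(0.5 + 1.2 * x))
fit_logit  <- glm(y ~ x, family = binomial("logit"))
fit_probit <- glm(y ~ x, family = binomial("probit"))
coef(fit_logit) / coef(fit_probit)                  # roughly 1.6 to 1.8
max(abs(fitted(fit_logit) - fitted(fit_probit)))    # yet the fitted probabilities nearly coincide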
48,647 | How to standardize text data for training Neural Networks? | I've been also trying to use Neural Networks for text categorization/classification with limited success. I tried to move away from unigram/bigram features (very sparse, very high-dimensional) to dense and much smaller dimensionality representations. I tried LDA (Latent Dirichlet Allocation) and some other feature selection/extraction methods but only got inferior performance compared to sparse unigram/bigram features used in Logistic Regression.
I am well aware of recent papers using Recurrent Neural Networks and other deep learning techniques but they need lots of data and demand significant computing power. While the latter I have, in my application I don't possess lots of data. So I must stick with shallow machine learning methods.
I am very interested to find out what dense and low-dimensional features give at least comparable performance to unigrams/bigrams in a setting when datasets are not large enough for deep learning. I am particularly interested in methods analyzing/mining short text documents. | How to standardize text data for training Neural Networks? | I've been also trying to use Neural Networks for text categorization/classification with limited success. I tried to move away from unigram/bigram features (very sparse, very high-dimensional) to dens | How to standardize text data for training Neural Networks?
I've been also trying to use Neural Networks for text categorization/classification with limited success. I tried to move away from unigram/bigram features (very sparse, very high-dimensional) to dense and much smaller dimensionality representations. I tried LDA (Latent Dirichlet Allocation) and some other feature selection/extraction methods but only got inferior performance compared to sparse unigram/bigram features used in Logistic Regression.
I am well aware of recent papers using Recurrent Neural Networks and other deep learning techniques but they need lots of data and demand significant computing power. While the latter I have, in my application I don't possess lots of data. So I must stick with shallow machine learning methods.
I am very interested to find out what dense and low-dimensional features give at least comparable performance to unigrams/bigrams in a setting when datasets are not large enough for deep learning. I am particularly interested in methods analyzing/mining short text documents. | How to standardize text data for training Neural Networks?
I've been also trying to use Neural Networks for text categorization/classification with limited success. I tried to move away from unigram/bigram features (very sparse, very high-dimensional) to dens |
48,648 | How to standardize text data for training Neural Networks? | Neural networks are not the easiest way to get good results on text classification, and they need a long time to train. If you still want to use a neural network, read more about RNNs and word embeddings. RNNs have shown good results on text-classification tasks, but they are hard to train for complex tasks. A word embedding is essentially an input layer that transforms each word (or letter) into a point in a multi-dimensional space. The nice thing is that, after long training, words with similar meanings end up close together in that vector space (for example Cat, Dog, Mouse and so on), so the network can treat similar words in a sentence similarly and put them in the same class.
The best way to start with RNNs is the original Elman paper, Finding Structure in Time, where he presents his Elman RNN. It has a lot of simple examples, and you can also find a very simple word embedding for a small group of words. It is of course one of the simplest RNNs, but it will show you some basic ideas behind RNNs. | How to standardize text data for training Neural Networks? | Neural networks is not the best way for text classification and for good improve you need to train it for a long time. If you just want use the NN read more about RNN and Word Embedding. RNN showed a | How to standardize text data for training Neural Networks?
Neural networks are not the easiest way to get good results on text classification, and they need a long time to train. If you still want to use a neural network, read more about RNNs and word embeddings. RNNs have shown good results on text-classification tasks, but they are hard to train for complex tasks. A word embedding is essentially an input layer that transforms each word (or letter) into a point in a multi-dimensional space. The nice thing is that, after long training, words with similar meanings end up close together in that vector space (for example Cat, Dog, Mouse and so on), so the network can treat similar words in a sentence similarly and put them in the same class.
The best way to start with RNNs is the original Elman paper, Finding Structure in Time, where he presents his Elman RNN. It has a lot of simple examples, and you can also find a very simple word embedding for a small group of words. It is of course one of the simplest RNNs, but it will show you some basic ideas behind RNNs. | How to standardize text data for training Neural Networks?
Neural networks is not the best way for text classification and for good improve you need to train it for a long time. If you just want use the NN read more about RNN and Word Embedding. RNN showed a |
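For what it's worth, the lookup idea behind an embedding layer can be sketched in a few lines of base R (the vocabulary, dimensionality and random weights are placeholders, not trained values):
vocab <- c("cat", "dog", "mouse", "car")
set.seed(1)
E <- matrix(rnorm(length(vocab) * 3), nrow = length(vocab),
            dimnames = list(vocab, paste0("dim", 1:3)))   # one trainable row per word
sentence <- c("cat", "dog")
E[sentence, ]   # the dense vectors that would be fed into the network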
48,649 | How to standardize text data for training Neural Networks? | Nevermind... I found the answer here PDF link.
Using bag-of-words or word classes is also possible. | How to standardize text data for training Neural Networks? | Nevermind... I found the answer here PDF link.
Using bag-of-words or word classes is also possible. | How to standardize text data for training Neural Networks?
Nevermind... I found the answer here PDF link.
Using bag-of-words or word classes is also possible. | How to standardize text data for training Neural Networks?
Nevermind... I found the answer here PDF link.
Using bag-of-words or word classes is also possible.
48,650 | Acceptable values for the intraclass correlation coefficient (empty model) | John B. Nezlek argues that ICC should not be a ground for justifying decisions on multilevel models, because its values could be misleading. In his article he gives a synthetic example of varying within-group relationships when intraclass correlations are 0 (attached below). So some, like Nezlek, would say that this is not a problem.
See:
Nezlek, J.B. (2008). An Introduction to Multilevel Modeling for Social and Personality Psychology. Social and Personality Psychology Compass, 2(2): 842–860. | Acceptable values for the intraclass correlation coefficient (empty model) | John B. Nezlek argues that ICC should not be a ground for justifying decisions on multilevel models, because it's values could be misleading. In his article he gives a synthetic example of varying wit | Acceptable values for the intraclass correlation coefficient (empty model)
John B. Nezlek argues that ICC should not be a ground for justifying decisions on multilevel models, because its values could be misleading. In his article he gives a synthetic example of varying within-group relationships when intraclass correlations are 0 (attached below). So some, like Nezlek, would say that this is not a problem.
See:
Nezlek, J.B. (2008). An Introduction to Multilevel Modeling for Social and Personality Psychology. Social and Personality Psychology Compass, 2(2): 842–860. | Acceptable values for the intraclass correlation coefficient (empty model)
John B. Nezlek argues that ICC should not be a ground for justifying decisions on multilevel models, because it's values could be misleading. In his article he gives a synthetic example of varying wit |
48,651 | Applying linear function approximation to reinforcement learning | If you haven't yet, check out this page which covers SARSA with LFA: http://artint.info/html/ArtInt_272.html
Sutton's book is really confusing in how they describe how to set up your feature space F(s,a), but in the web page above, they describe it in a simple example. Applying the architecture of theta and F(s,a) from that page to Sutton's algorithm works very well.
Suppose you have 4 possible actions in a state. Create a reward Q distribution (in this case a 4-value array), with one value for each possible action in the given state. Iterate over each action, and for that action, populate the feature space based on what that action will do to/for the agent.
For example, if the agent is directly below a wall, and the chosen action is 'up', there should be a 1 for the feature 'is the agent about to try to move into a wall'. Likewise, for action='right' and wall to the right, the same feature would be a 1, etc. for all other possibilities.
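A minimal sketch of what such a feature vector and the resulting linear value estimate might look like (the feature names, weights and values are invented for illustration):
f_sa  <- c(bias = 1, wall_ahead = 1, dist_to_goal = 0.25)   # F(s, a) for one state-action pair
theta <- c(0.1, -0.8, 0.4)                                  # current weight vector
q_sa  <- sum(theta * f_sa)                                  # Q(s, a) = theta . F(s, a)
q_sa
SARSA with linear function approximation then updates theta in proportion to f_sa times the TD error.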
You've probably moved past this problem a while ago, but if not, hope this helped! | Applying linear function approximation to reinforcement learning | If you haven't yet, check out this page which covers SARSA with LFA: http://artint.info/html/ArtInt_272.html
Sutton's book is really confusing in how they describe how to set up your feature space F(s | Applying linear function approximation to reinforcement learning
If you haven't yet, check out this page which covers SARSA with LFA: http://artint.info/html/ArtInt_272.html
Sutton's book is really confusing in how they describe how to set up your feature space F(s,a), but in the web page above, they describe it in a simple example. Applying the architecture of theta and F(s,a) from that page to Sutton's algorithm works very well.
Suppose you have 4 possible actions in a state. Create a reward Q distribution (in this case a 4-value array), with one value for each possible action in the given state. Iterate over each action, and for that action, populate the feature space based on what that action will do to/for the agent.
For example, if the agent is directly below a wall, and the chosen action is 'up', there should be a 1 for the feature 'is the agent about to try to move into a wall'. Likewise, for action='right' and wall to the right, the same feature would be a 1, etc. for all other possibilities.
You've probably moved past this problem a while ago, but if not, hope this helped! | Applying linear function approximation to reinforcement learning
If you haven't yet, check out this page which covers SARSA with LFA: http://artint.info/html/ArtInt_272.html
Sutton's book is really confusing in how they describe how to set up your feature space F(s |
48,652 | How can I implement lasso in R using optim function | With standard algorithms for convex smooth optimization, like CG, gradient descent, etc, you tend to get results that are similar to lasso but the coefficients don't become exactly zero. The function being minimized isn't differentiable at zero so unless you hit zero exactly, you're likely to get all coefficients non-zero (but some very small, depending on your step size). That's why lasso and similar specialized algorithms are useful.
But if you insist on using these algorithms, you can truncate values, e.g., once you've got the "optimal" solution set all betas under 1e-9 or something to zero. | How can I implement lasso in R using optim function | With standard algorithms for convex smooth optimization, like CG, gradient descent, etc, you tend to get results that are similar to lasso but the coefficients don't become exactly zero. The function | How can I implement lasso in R using optim function
With standard algorithms for convex smooth optimization, like CG, gradient descent, etc, you tend to get results that are similar to lasso but the coefficients don't become exactly zero. The function being minimized isn't differentiable at zero so unless you hit zero exactly, you're likely to get all coefficients non-zero (but some very small, depending on your step size). That's why lasso and similar specialized algorithms are useful.
But if you insist on using these algorithms, you can truncate values, e.g., once you've got the "optimal" solution set all betas under 1e-9 or something to zero. | How can I implement lasso in R using optim function
With standard algorithms for convex smooth optimization, like CG, gradient descent, etc, you tend to get results that are similar to lasso but the coefficients don't become exactly zero. The function |
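A minimal sketch of that approach with base R's optim (the simulated data, penalty weight and truncation threshold are arbitrary choices for illustration):
set.seed(1)
n <- 100; p <- 5
X <- cbind(1, matrix(rnorm(n * (p - 1)), n))
beta_true <- c(2, 1.5, 0, 0, -1)
y <- X %*% beta_true + rnorm(n)
lambda <- 20
obj <- function(b) sum((y - X %*% b)^2) + lambda * sum(abs(b[-1]))   # don't penalize the intercept
fit <- optim(rep(0, p), obj)
round(fit$par, 4)                          # the "zero" coefficients are small but not exactly zero
ifelse(abs(fit$par) < 1e-3, 0, fit$par)    # crude truncation, as described above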
48,653 | How should I evaluate the expectation of the ratio of two random variables? | $1/\sum{S_i}$ is a convex function in $\sum{S_i}$. Then by Jensen's inequality
$$E\left(\frac{1}{\sum{S_i}}\right)>\left(\frac{1}{E[\sum{S_i}]}\right) =\frac{1}{n\cdot P(S_i=1)}$$
the last equality holds if we assume that each respondent has an equal probability to respond or not. An estimator of this probability is the sample proportion $\hat P(S_i=1) =n_r/n$ so we get
$$est.\left(\frac{1}{E[\sum{S_i}]}\right) =\frac 1{n_r}$$
So "between" $E\left(\frac{1}{\sum{S_i}}\right)$ and $1/n_r$ there exists both the distance due to the non-linearity, as well as the estimation error. The estimation error can go either way, so we cannot conclude on the final relation between the two.
Nevertheless appealing to asymptotics, $\hat P(S_i=1) \xrightarrow{p} P(S_i=1)$ and $E\left(\frac{1}{\sum{S_i}}\right)\rightarrow \left(\frac{1}{E[\sum{S_i}]}\right)$, so for "large samples" we accept $1/n_r$ as an approximation to $E\left(\frac{1}{\sum{S_i}}\right)$.
But for the general case, the situation changes. Write $w_i \equiv S_i/\sum S_i$
and so $\sum w_i =1$, and we have
$$\hat \mu = \sum w_iY_i$$
If we assume that
a) $S_i$ and $Y_i$ are independent,(i.e. that whether somebody responds or not does not depend on his own value of $Y$ -and this is not always the case, for example think of a survey that asks something "sensitive", say "what is your monthly income"? People with high income may choose not to respond rather than record a true or false statement), and taking into account that
b) all members of the population are identically distributed as random variables, and have the common mean $E(Y_i) =\mu, \; \forall i$, then
$$E(\hat \mu) = \sum E(w_i)E(Y_i) = \mu\cdot\sum E(w_i) = \mu\cdot E\left(\sum w_i\right) = \mu\cdot E(1) = \mu$$
so the estimator is, after all, unbiased. This depends crucially on the independence assumption between $S_i$ and $Y_i$ because it implies that the sub-sample of those that responded remains a random sample from, a "representative" sample of, the population, and so its sample average is still an unbiased estimator.
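A quick numerical check of the Jensen's-inequality point and of the second-order approximation derived in the addendum that follows (n and $p_s$ are arbitrary):
set.seed(1)
n <- 50; p_s <- 0.6
sums <- rbinom(1e5, n, p_s)
sums <- sums[sums > 0]                     # condition on at least one respondent
c(simulated = mean(1 / sums),
  naive     = 1 / (n * p_s),
  corrected = 1 / (n * p_s) + (1 - p_s) / (n * p_s)^2)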
ADDENDUM
Regarding the Taylor series expansion, for the function $1/Z$ it is, around some center $z_0$,
$$E(1/Z) = \frac 1{z_0} - \frac 1{z_0^2}[E(Z) - z_0] + \frac 1{z_0^3}E[Z - z_0]^2 + E(R_2)$$
$\sum S_i$ is a binomial random variable. So centering on $z_0=E(\sum S_i) = np_s$ we have
$$E\left(\frac{1}{\sum{S_i}}\right) = \frac 1{np_s}-\frac 1{n^2p_s^2}\left(E\left(\sum S_i\right)-np_s\right)+\frac 1{n^3p_s^3}\text{Var}\left(\sum S_i\right) +E(R_2)$$
$$=\frac 1{np_s} -0+\frac {np_s(1-p_s)}{n^3p_s^3} + E(R_2) = \frac 1{np_s} + O(n^{-2})$$
Since $\hat p_s = n_r/n$ we arrive at
$$\hat E\left(\frac{1}{\sum{S_i}}\right) \approx \frac 1{n_r}$$ | How should I evaluate the expectation of the ratio of two random variables? | $1/\sum{S_i}$ is a convex function in $\sum{S_i}$. Then by Jensen's inequality
$$E\left(\frac{1}{\sum{S_i}}\right)>\left(\frac{1}{E[\sum{S_i}]}\right) =\frac{1}{n\cdot P(S_i=1)}$$
the last equality if | How should I evaluate the expectation of the ratio of two random variables?
$1/\sum{S_i}$ is a convex function in $\sum{S_i}$. Then by Jensen's inequality
$$E\left(\frac{1}{\sum{S_i}}\right)>\left(\frac{1}{E[\sum{S_i}]}\right) =\frac{1}{n\cdot P(S_i=1)}$$
the last equality holds if we assume that each respondent has an equal probability to respond or not. An estimator of this probability is the sample proportion $\hat P(S_i=1) =n_r/n$ so we get
$$est.\left(\frac{1}{E[\sum{S_i}]}\right) =\frac 1{n_r}$$
So "between" $E\left(\frac{1}{\sum{S_i}}\right)$ and $1/n_r$ there exists both the distance due to the non-linearity, as well as the estimation error. The estimation error can go either way, so we cannot conclude on the final relation between the two.
Nevertheless appealing to asymptotics, $\hat P(S_i=1) \xrightarrow{p} P(S_i=1)$ and $E\left(\frac{1}{\sum{S_i}}\right)\rightarrow \left(\frac{1}{E[\sum{S_i}]}\right)$, so for "large samples" we accept $1/n_r$ as an approximation to $E\left(\frac{1}{\sum{S_i}}\right)$.
But for the general case, the situation changes. Write $w_i \equiv S_i/\sum S_i$
and so $\sum w_i =1$, and we have
$$\hat \mu = \sum w_iY_i$$
If we assume that
a) $S_i$ and $Y_i$ are independent,(i.e. that whether somebody responds or not does not depend on his own value of $Y$ -and this is not always the case, for example think of a survey that asks something "sensitive", say "what is your monthly income"? People with high income may choose not to respond rather than record a true or false statement), and taking into account that
b) all members of the population are identically distributed as random variables, and have the common mean $E(Y_i) =\mu, \; \forall i$, then
$$E(\hat \mu) = \sum E(w_i)E(Y_i) = \mu\cdot\sum E(w_i) = \mu\cdot E\left(\sum w_i\right) = \mu\cdot E(1) = \mu$$
so the estimator is, after all, unbiased. This depends crucially on the independence assumption between $S_i$ and $Y_i$ because it implies that the sub-sample of those that responded remains a random sample from, a "representative" sample of, the population, and so its sample average is still an unbiased estimator.
ADDENDUM
Regarding the Taylor series expansion, for the function $1/Z$ it is, around some center $z_0$,
$$E(1/Z) = \frac 1{z_0} - \frac 1{z_0^2}[E(Z) - z_0] + \frac 1{z_0^3}E[Z - z_0]^2 + E(R_2)$$
$\sum S_i$ is a binomial random variable. So centering on $z_0=E(\sum S_i) = np_s$ we have
$$E\left(\frac{1}{\sum{S_i}}\right) = \frac 1{np_s}-\frac 1{n^2p_s^2}\left(E\left(\sum S_i\right)-np_s\right)+\frac 1{n^3p_s^3}\text{Var}\left(\sum S_i\right) +E(R_2)$$
$$=\frac 1{np_s} -0+\frac {np_s(1-p_s)}{n^3p_s^3} + E(R_2) = \frac 1{np_s} + O(n^{-2})$$
Since $\hat p_s = n_r/n$ we arrive at
$$\hat E\left(\frac{1}{\sum{S_i}}\right) \approx \frac 1{n_r}$$ | How should I evaluate the expectation of the ratio of two random variables?
$1/\sum{S_i}$ is a convex function in $\sum{S_i}$. Then by Jensen's inequality
$$E\left(\frac{1}{\sum{S_i}}\right)>\left(\frac{1}{E[\sum{S_i}]}\right) =\frac{1}{n\cdot P(S_i=1)}$$
the last equality if |
48,654 | Adding a variance structure when fitting a gamm with Gamma distribution | Your error message "weights must be like glm weights for generalized case" is saying that if you choose to use gamm() with a generalized case (which means: using a non-Gaussian probability distribution such as Gamma) then the weights argument should be specified as it would be for glmmPQL().
The explanation is that gamm() is essentially a wrapper function and depending on how it is used it may utilize lme() (from nlme) or glmmPQL() (from MASS). If you specify a Gaussian distribution for your model then gamm() makes a call directly to lme() by default. With this default method the weights argument specifies a gls-type variance structure (an nlme varFunc such as varIdent()), because that is what lme() does (read ?lme and the weights argument).
If you switch to a generalized distribution (e.g. Gamma, beta, Poisson) gamm() calls glmmPQL(), which does have a weights argument, but it is entirely different (read ?glmmPQL and the weights argument).
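For concreteness, a hedged sketch of the two different meanings of weights (the simulated data and grouping factor are made up, and the calls assume current mgcv/nlme versions):
library(mgcv)
library(nlme)
set.seed(1)
dat <- data.frame(x = runif(200), g = factor(rep(1:2, each = 100)))
dat$y <- exp(1 + sin(2 * pi * dat$x)) * rgamma(200, shape = 5, rate = 5)
# Gaussian/identity case: fitted via lme(), so 'weights' accepts an nlme varFunc
m1 <- gamm(log(y) ~ s(x), data = dat, weights = varIdent(form = ~ 1 | g))
# Generalized case (e.g. Gamma): fitted by PQL, so 'weights', if supplied at all,
# must be a numeric vector of glm-style prior weights, not a varFunc
m2 <- gamm(y ~ s(x), data = dat, family = Gamma(link = "log"))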
Thus, as far as I know you cannot access gls weights through gamm with a non-Gaussian distribution. If I am mistaken, please somebody correct this. | Adding a variance structure when fitting a gamm with Gamma distribution | Your error message "weights must be like glm weights for generalized case" is saying that if you choose to use Gamm() with a generalized case (which means: using a non-Gaussian probability distributio | Adding a variance structure when fitting a gamm with Gamma distribution
Your error message "weights must be like glm weights for generalized case" is saying that if you choose to use gamm() with a generalized case (which means: using a non-Gaussian probability distribution such as Gamma) then the weights argument should be specified as it would be for glmmPQL().
The explanation is that gamm() is essentially a wrapper function and depending on how it is used it may utilize lme() (from nlme) or glmmPQL() (from MASS). If you specify a Gaussian distribution for your model then gamm() makes a call directly to lme() by default. With this default method the weights argument specifies a gls-type variance structure (an nlme varFunc such as varIdent()), because that is what lme() does (read ?lme and the weights argument).
If you switch to a generalized distribution (e.g. Gamma, beta, Poisson) gamm() calls glmmPQL(), which does have a weights argument, but it is entirely different (read ?glmmPQL and the weights argument).
Thus, as far as I know you cannot access gls weights through gamm with a non-Gaussian distribution. If I am mistaken, please somebody correct this. | Adding a variance structure when fitting a gamm with Gamma distribution
Your error message "weights must be like glm weights for generalized case" is saying that if you choose to use Gamm() with a generalized case (which means: using a non-Gaussian probability distributio |
48,655 | Poisson as a limiting case of negative binomial | Consider that
$$({au\over 1+au})^y=({u\over a^{-1}+u})^y$$
and then take the denominator over into the ratio of Gammas.
I think all you need to do then is make an argument that the resulting term with the gammas and the denominator goes to 1.
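One way to sketch that limit, writing $c=a^{-1}$ (so $c\to\infty$ as the negative binomial approaches the Poisson; this only fills in the hint, it is not the original argument):
$$\frac{\Gamma(y+c)}{\Gamma(c)\,(c+u)^y}=\frac{c(c+1)\cdots(c+y-1)}{(c+u)^y}\longrightarrow 1, \qquad \left(\frac{c}{c+u}\right)^{c}=\left(1+\frac{u}{c}\right)^{-c}\longrightarrow e^{-u},$$
so for each fixed $y$ the pmf $\frac{\Gamma(y+c)}{\Gamma(c)\,y!}\left(\frac{c}{c+u}\right)^{c}\left(\frac{u}{c+u}\right)^{y}$ tends to $e^{-u}u^y/y!$.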
I believe this is one of the relations discussed in the middle of this section of the Wikipedia page on the Gamma function. | Poisson as a limiting case of negative binomial | Consider that
$$({au\over 1+au})^y=({u\over a^{-1}+u})^y$$
and then take the denominator over into the ratio of Gammas.
I think all you need to do then is make an argument that the resulting term with | Poisson as a limiting case of negative binomial
Consider that
$$({au\over 1+au})^y=({u\over a^{-1}+u})^y$$
and then take the denominator over into the ratio of Gammas.
I think all you need to do then is make an argument that the resulting term with the gammas and the denominator goes to 1.
I believe this is one of the relations discussed in the middle of this section of the Wikipedia page on the Gamma function. | Poisson as a limiting case of negative binomial
Consider that
$$({au\over 1+au})^y=({u\over a^{-1}+u})^y$$
and then take the denominator over into the ratio of Gammas.
I think all you need to do then is make an argument that the resulting term with |
48,656 | Poisson as a limiting case of negative binomial | This is covered under http://en.wikipedia.org/wiki/Negative_binomial_distribution#Poisson_distribution
The key is the parameterization of the dispersion parameter. | Poisson as a limiting case of negative binomial | This is covered under http://en.wikipedia.org/wiki/Negative_binomial_distribution#Poisson_distribution
The key is the parameterization of the dispersion parameter. | Poisson as a limiting case of negative binomial
This is covered under http://en.wikipedia.org/wiki/Negative_binomial_distribution#Poisson_distribution
The key is the parameterization of the dispersion parameter. | Poisson as a limiting case of negative binomial
This is covered under http://en.wikipedia.org/wiki/Negative_binomial_distribution#Poisson_distribution
The key is the parameterization of the dispersion parameter. |
48,657 | Can correlated random effects "steal" the variability (and the significance) from the regression coefficient? | Since $\gamma_j$ is assumed to follow a zero-mean normal distribution, any deviation of the predicted value of $\gamma_j$ from zero will be penalized in the likelihood function relative to the variance $\sigma^2$. Thus it will be "cheaper" in terms of likelihood to put year-consistent variability into the fixed effect $\beta$.
The reason that it is cheaper to put the variability in a fixed effect rather than a random one is that the fixed effects will be estimated to minimize the residual, and there will not be a penalization if they are too big or too small. In contrast random effects are supposed to vary following a probability distribution (in your case a normal) around zero, and thus if you need an extreme value of the random effect to explain a given observation, the observation will become very unlikely given the model, which will give a lower likelihood value. If you can explain consistent variability in terms of fixed effects rather than random effects, you will increase the likelihood values. Thus, if you use maximum likelihood estimation, you will choose parameters that do that.
For example, consider the simple model
$$y_i=\alpha + x_i + \varepsilon_i,\qquad i = 1,\dots,n$$
where $(x_i)_i=\boldsymbol{x}\sim\mathcal{N}(0, S)$ and $(\varepsilon_i)_i\sim \mathcal{N}(0,\sigma^2\mathbb{I}_n)$. Let $\boldsymbol{y}=(y_i)_i$ and $\boldsymbol{\alpha}=(\alpha)_i$, $i=1,\dots, n$. The log-likelihood function is
$$
\ell_{\boldsymbol y}(\alpha, \sigma^2, S) = -\frac{1}{2}\log\mathrm{det}(S+\sigma^2\mathbb{I})- \frac{1}{2}(\boldsymbol{y} - \boldsymbol\alpha)^\top(S+\sigma^2\mathbb{I})^{-1}(\boldsymbol{y} - \boldsymbol\alpha)
$$
If we consider the part that depends on $\alpha$, namely the quadratic term, we can use some linear algebra to rewrite it
$$
-(\boldsymbol{y} - \boldsymbol\alpha)^\top(S+\sigma^2\mathbb{I})^{-1}(\boldsymbol{y} - \boldsymbol\alpha) = -\frac{1}{\sigma^2}(\boldsymbol{y} - \boldsymbol\alpha)^\top(\boldsymbol{y} - \boldsymbol\alpha- S(S+\sigma^2\mathbb{I})^{-1}(\boldsymbol{y} - \boldsymbol\alpha)).
$$
The term $S(S+\sigma^2\mathbb{I})^{-1}(\boldsymbol{y} - \boldsymbol\alpha)$ is the conditional expectation of $\boldsymbol{x}$ given the observation $\boldsymbol{y}$ which we usually denote $\mathrm{E}[\boldsymbol{x}|\boldsymbol{y}]$. This conditional expectation is in fact the best linear unbiased predictor of $\boldsymbol{x}$. With this in mind we can finally rewrite the square as
$$
-\frac{1}{\sigma^2}(\boldsymbol{y} - \boldsymbol\alpha - \mathrm{E}[\boldsymbol{x}|\boldsymbol{y}])^\top(\boldsymbol{y} - \boldsymbol\alpha- \mathrm{E}[\boldsymbol{x}|\boldsymbol{y}])-\mathrm{E}[\boldsymbol{x}|\boldsymbol{y}]^\top S^{-1}\mathrm{E}[\boldsymbol{x}|\boldsymbol{y}].
$$
From this expression we see that in the first square of residuals, $\alpha$ and the predicted value of $\boldsymbol{x}$ plays a similar role, but the second penalizes deviation of the predicted value from zero. Thus maximum-likelihood estimation will always seek to describe as much variation as possible through the fixed effect $\alpha$. This holds in general. | Can correlated random effects "steal" the variability (and the significance) from the regression coe | Since $\gamma_j$ is assumed to follow a zero-mean normal distribution, any deviation of the predicted value of $\gamma_j$ from zero will be penalized in the likelihood function relative to the varianc | Can correlated random effects "steal" the variability (and the significance) from the regression coefficient?
Since $\gamma_j$ is assumed to follow a zero-mean normal distribution, any deviation of the predicted value of $\gamma_j$ from zero will be penalized in the likelihood function relative to the variance $\sigma^2$. Thus it will be "cheaper" in terms of likelihood to put year-consistent variability into the fixed effect $\beta$.
The reason that it is cheaper to put the variability in a fixed effect rather than a random one is that the fixed effects will be estimated to minimize the residual, and there will not be a penalization if they are too big or too small. In contrast random effects are supposed to vary following a probability distribution (in your case a normal) around zero, and thus if you need an extreme value of the random effect to explain a given observation, the observation will become very unlikely given the model, which will give a lower likelihood value. If you can explain consistent variability in terms of fixed effects rather than random effects, you will increase the likelihood values. Thus, if you use maximum likelihood estimation, you will choose parameters that do that.
For example, consider the simple model
$$y_i=\alpha + x_i + \varepsilon_i,\qquad i = 1,\dots,n$$
where $(x_i)_i=\boldsymbol{x}\sim\mathcal{N}(0, S)$ and $(\varepsilon_i)_i\sim \mathcal{N}(0,\sigma^2\mathbb{I}_n)$. Let $\boldsymbol{y}=(y_i)_i$ and $\boldsymbol{\alpha}=(\alpha)_i$, $i=1,\dots, n$. The log-likelihood function is
$$
\ell_{\boldsymbol y}(\alpha, \sigma^2, S) = -\frac{1}{2}\log\mathrm{det}(S+\sigma^2\mathbb{I})- \frac{1}{2}(\boldsymbol{y} - \boldsymbol\alpha)^\top(S+\sigma^2\mathbb{I})^{-1}(\boldsymbol{y} - \boldsymbol\alpha)
$$
If we consider the part that depends on $\alpha$, namely the quadratic term, we can use some linear algebra to rewrite it
$$
-(\boldsymbol{y} - \boldsymbol\alpha)^\top(S+\sigma^2\mathbb{I})^{-1}(\boldsymbol{y} - \boldsymbol\alpha) = -\frac{1}{\sigma^2}(\boldsymbol{y} - \boldsymbol\alpha)^\top(\boldsymbol{y} - \boldsymbol\alpha- S(S+\sigma^2\mathbb{I})^{-1}(\boldsymbol{y} - \boldsymbol\alpha)).
$$
The term $S(S+\sigma^2\mathbb{I})^{-1}(\boldsymbol{y} - \boldsymbol\alpha)$ is the conditional expectation of $\boldsymbol{x}$ given the observation $\boldsymbol{y}$ which we usually denote $\mathrm{E}[\boldsymbol{x}|\boldsymbol{y}]$. This conditional expectation is in fact the best linear unbiased predictor of $\boldsymbol{x}$. With this in mind we can finally rewrite the square as
$$
-\frac{1}{\sigma^2}(\boldsymbol{y} - \boldsymbol\alpha - \mathrm{E}[\boldsymbol{x}|\boldsymbol{y}])^\top(\boldsymbol{y} - \boldsymbol\alpha- \mathrm{E}[\boldsymbol{x}|\boldsymbol{y}])-\mathrm{E}[\boldsymbol{x}|\boldsymbol{y}]^\top S^{-1}\mathrm{E}[\boldsymbol{x}|\boldsymbol{y}].
$$
From this expression we see that in the first square of residuals, $\alpha$ and the predicted value of $\boldsymbol{x}$ plays a similar role, but the second penalizes deviation of the predicted value from zero. Thus maximum-likelihood estimation will always seek to describe as much variation as possible through the fixed effect $\alpha$. This holds in general. | Can correlated random effects "steal" the variability (and the significance) from the regression coe
Since $\gamma_j$ is assumed to follow a zero-mean normal distribution, any deviation of the predicted value of $\gamma_j$ from zero will be penalized in the likelihood function relative to the varianc |
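A quick numerical check of the quadratic-form decomposition used above (an arbitrary positive-definite $S$ and residual vector; this is only a sanity check of the algebra):
set.seed(1)
n <- 5
A <- matrix(rnorm(n * n), n); S <- crossprod(A)    # a positive-definite S
sigma2 <- 0.7
r <- rnorm(n)                                      # plays the role of y - alpha
E <- S %*% solve(S + sigma2 * diag(n), r)          # E[x | y]
lhs <- drop(t(r) %*% solve(S + sigma2 * diag(n)) %*% r)
rhs <- drop(crossprod(r - E) / sigma2 + t(E) %*% solve(S, E))
c(lhs, rhs)                                        # identical up to rounding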
48,658 | Can correlated random effects "steal" the variability (and the significance) from the regression coefficient? | If I understand your description correctly, I would say you are more likely to see a significant coefficient $\hat{\beta}$ by including a random effect. The reason is that with the introduction of $\gamma$, you now explicitly distinguish between-year variability and within-year variability. The overall variance in your data $V$ now has 2 parts
The variability from $\gamma$: $V_\gamma$
The variability from $\epsilon$: $V_\epsilon$
Without a random effect, $V_\gamma =0$ and all variability is attributed to $V_\epsilon$. The variance of the fixed effect estimator $\hat{\beta}$ is actually proportional to $V_\epsilon$. If you introduce random effects, $V_\epsilon$ will be smaller, meaning $\hat{\beta}$ will have a shorter confidence interval (recall that the width of the confidence interval is proportional to the square root of the variance), and thus a smaller p-value.
Having said that, I am not sure why you want to include a random effect in the first place. My understanding is that the purpose of using a random effect is to introduce a correlation structure in your covariance matrix. In your case, $\gamma_j$ will make sure observations within a year are positively correlated. However, this is already achieved by using a time series model. If you expand the expression of $\log(\mu_{i,j})$ in terms of $\epsilon_{i,1... j}$ (assuming no random effect) and calculate the covariance between $\log(\mu_{i,j})$ and $\log(\mu_{i,k})$ for any $j \ne k$, you will find they are always positively correlated. That is the purpose of time series models. Random effect models are essentially doing the same thing, except for using some different techniques and being designed for a particular type of data (a few time points but multiple sequences).
Peter | Can correlated random effects "steal" the variability (and the significance) from the regression coe | If I understand your description correctly, I would say you are more likely to see a significant coefficient $\hat{\beta}$ by including a random effect. The reason is that with the introduction of $\g | Can correlated random effects "steal" the variability (and the significance) from the regression coefficient?
If I understand your description correctly, I would say you are more likely to see a significant coefficient $\hat{\beta}$ by including a random effect. The reason is that with the introduction of $\gamma$, you now explicitly distinguish between-year variability and within-year variability. The overall variance in your data $V$ now has 2 parts
The variability from $\gamma$: $V_\gamma$
The variability from $\epsilon$: $V_\epsilon$
Without a random effect, $V_\gamma =0$ and all variability is attributed to $V_\epsilon$. The variance of the fixed effect estimator $\hat{\beta}$ is actually proportional to $V_\epsilon$. If you introduce random effects, $V_\epsilon$ will be smaller, meaning $\hat{\beta}$ will have a shorter confidence interval (recall that the width of the confidence interval is proportional to the square root of the variance), and thus a smaller p-value.
Having said that, I am not sure why you want to include a random effect in the first place. My understanding is that the purpose of using a random effect is to introduce a correlation structure in your covariance matrix. In your case, $\gamma_j$ will make sure observations within a year are positively correlated. However, this is already achieved by using a time series model. If you expand the expression of $\log(\mu_{i,j})$ in terms of $\epsilon_{i,1... j}$ (assuming no random effect) and calculate the covariance between $\log(\mu_{i,j})$ and $\log(\mu_{i,k})$ for any $j \ne k$, you will find they are always positively correlated. That is the purpose of time series models. Random effect models are essentially doing the same thing, except for using some different techniques and being designed for a particular type of data (a few time points but multiple sequences).
Peter | Can correlated random effects "steal" the variability (and the significance) from the regression coe
If I understand your description correctly, I would say you are more likely to see a significant coefficient $\hat{\beta}$ by including a random effect. The reason is that with the introduction of $\g |
48,659 | Question about definition of random sample | It's a good question. A lot of introductory statistics books are a bit vague when it comes to the mathematical set-up of the topics they treat.
The answer probably requires some familiarity with non-basic probability theory, but I think you'll follow just fine.
A stochastic variable is a measurable function from a background probability space, $\Omega$, into some other space. In our case we'll call this function $X$ such that $X: \Omega \rightarrow \mathbb{R}^n$. In this way, every coordinate will give the height of one of the students in a specific sample.
We now have the stochastic variable as a function from the background space.
What confuses you is probably just notation. In some books $X$ is reserved for the stochastic variable, while $x$ is reserved for some specific outcome. I think this is a good way of doing it as it helps teach the distinction, but it is obvious that you're already aware of the distinction. If $x$ occurs with positive probability, we know that there exists $\omega \in \Omega$ such that $X(\omega) = x$. Or if we know that $X$ is surjective, we can also find such an $\omega$. Note that $x$ is a vector of $n$ heights, analogously to $X$ being a vector function.
In your above presentation, you are using multiple values in the background space for each outcome and talking about a restriction of $X$ to some student. It is more fruitful to think of one outcome (one $\omega$) and a vector function that determines the heights of all students.
In your case, the probability distribution is discrete in the sense that we only have finitely many students to choose from (each with one height), thus we can only combine them in finitely many ways. However, we can still define $X$ to take values in $\mathbb{R}^n$; zero probability is just assigned to most points. From this joint probability distribution, marginal and conditional ones can be calculated.
Alternatively, we could drop the notion of examining one specific class room and think of sampling from all potential class rooms. | Question about definition of random sample | It's a good question. A lot of introductory statistics books are a bit vague when it comes to the mathematical set-up of the topics they treat.
The answer probably requires some familiarity with non-b | Question about definition of random sample
It's a good question. A lot of introductory statistics books are a bit vague when it comes to the mathematical set-up of the topics they treat.
The answer probably requires some familiarity with non-basic probability theory, but I think you'll follow just fine.
A stochastic variable is a measurable function from a background probability space, $\Omega$, into some other space. In our case we'll call this function $X$ such that $X: \Omega \rightarrow \mathbb{R}^n$. In this way, every coordinate will give the height of one of the students in a specific sample.
We now have the stochastic variable as a function from the background space.
What confuses you is probably just notation. In some books $X$ is reserved for the stochastic variable, while $x$ is reserved for some specific outcome. I think this is a good way of doing it as it helps teach the distinction, but it is obvious that you're already aware of the distinction. If $x$ occurs with positive probability, we know that there exists $\omega \in \Omega$ such that $X(\omega) = x$. Or if we know that $X$ is surjective, we can also find such an $\omega$. Note that $x$ is a vector of $n$ heights, analogously to $X$ being a vector function.
In your above presentation, you are using multiple values in the background space for each outcome and talking about a restriction of $X$ to some student. It is more fruitful to think of one outcome (one $\omega$) and a vector function that determines the heights of all students.
In your case, the probability distribution is discrete in the sense that we only have finitely many students to choose from (each with one height), thus we can only combine them in finitely many ways. However, we can still define $X$ to take values in $\mathbb{R}^n$; zero probability is just assigned to most points. From this joint probability distribution, marginal and conditional ones can be calculated.
Alternatively, we could drop the notion of examining one specific class room and think of sampling from all potential class rooms. | Question about definition of random sample
It's a good question. A lot of introductory statistics books are a bit vague when it comes to the mathematical set-up of the topics they treat.
The answer probably requires some familiarity with non-b |
48,660 | mtry tuning given by caret higher than the number of predictors | Try using train with the matrix argument, i.e.
tr1 <- train(Sepal.Length ~ ., data = iris) # gives mtry = 5, not allowed
# but change to
tr2 <- train(iris[, -1], iris[, 1]) # gives mtry = 3
I think train creates the model matrix and then passes it to randomForest when using the formula argument, thus considering every column of that matrix a separate variable. This does not seem to happen when using the matrix argument.
I am not entirely up to speed on the inner workings of train but from what I have read this seems to be the case.
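If the goal is simply to keep mtry within the number of predictors while still using the formula interface, caret also lets you supply the candidate values yourself. A hedged sketch (method = "rf" and the particular grid are assumptions, not part of the original answer):
library(caret)
tr3 <- train(Sepal.Length ~ ., data = iris, method = "rf",
             tuneGrid = data.frame(mtry = 1:4))   # only legal mtry values are tried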
Hope this helps! | mtry tuning given by caret higher than the number of predictors | Try using train with the matrix argument, i.e.
tr1 <- train(Sepal.Length ~ ., data = iris) # gives mtry = 5, not allowed
# but change to
tr2 <- train(iris[, -1], iris[, 1]) # gives mtry = 3
I think t | mtry tuning given by caret higher than the number of predictors
Try using train with the matrix argument, i.e.
tr1 <- train(Sepal.Length ~ ., data = iris) # gives mtry = 5, not allowed
# but change to
tr2 <- train(iris[, -1], iris[, 1]) # gives mtry = 3
I think train creates the model matrix and then passes it to randomForest when using the formula argument, thus considering every column of that matrix a separate variable. This does not seem to happen when using the matrix argument.
I am not entirely up to speed on the inner workings of train but from what I have read this seems to be the case.
Hope this helps! | mtry tuning given by caret higher than the number of predictors
Try using train with the matrix argument, i.e.
tr1 <- train(Sepal.Length ~ ., data = iris) # gives mtry = 5, not allowed
# but change to
tr2 <- train(iris[, -1], iris[, 1]) # gives mtry = 3
I think t |
48,661 | Quantiles of a compound gamma/negative binomial distribution | As a practical answer to the real questions you're addressing, such high quantiles will generally be quite sensitive to issues with model choice (especially such things as whether you model the right censoring and how heavy the tails are in the components).
But in any case - especially when dealing with high quantiles where ordinary simulation becomes impractical - that has great value in its own right; it's an interesting question from both theoretical and practical standpoints.
A couple of other approaches to this problem are (1) using the Fast Fourier Transform and (2) direct numerical integration.
One useful reference on this topic is Luo and Shevchenko (2009)$^{[1]}$.
In it they develop an adaptive direct numerical integration approach that's faster than simulation and competitive with FFT.
The more traditional approach in actuarial work has been (3) Panjer recursion, which can be found in numerous texts. Embrechts and Frei (2009)$^{[2]}$ discuss and compare Panjer recursion and FFT. (Note that both of these techniques involve discretization of the continuous distribution.)
On the other hand, doing a very unsophisticated version of simulation, and with no effort whatever made to be efficient, generating from a compound gamma-negative binomial isn't particularly onerous. This is timing on my kids' little laptop:
system.time(replicate(100000,sum(rgamma(MASS:::rnegbin(1,4,2),5,.1))))
user system elapsed
2.82 0.00 2.84
I think 2.8 seconds to generate 100,000 simulations of a compound distribution on a slow little laptop really isn't bad. With some effort to be efficient (of which one might suggest many possibilities), I imagine that could be made a good deal faster.
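For the quantile question itself, the same kind of simulation output can be used directly (same made-up parameter values as the timing call above):
sims <- replicate(100000, sum(rgamma(MASS:::rnegbin(1, 4, 2), 5, .1)))
quantile(sims, c(0.95, 0.995))   # straightforward, though very high quantiles need many more draws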
Here's the ecdf for $10^6$ simulations (which took about 29 seconds):
[Figure: empirical cdf of the $10^6$ simulated values]
We see the characteristic discrete jump at zero you expect to see with a compound distribution.
[While it should be easy to make simulation a lot faster, all three alternatives mentioned here - if carried out sensibly - should be a lot faster still.]
You should note that the actuar package supports computation with compound distributions, and offers several methods for calculation with them.
See, for example, this vignette which discusses this facility.
[Of possibly some further passing interest, note that there is an R package for the Poisson-lognormal distribution -- poilog; if you need that distribution at some point it may be useful.]
Added in edit:
A potential quick approximation where the gamma shape parameter isn't changing -
In the gamma case, because a convolution of gammas with constant shape parameter is another gamma, you could write down the distribution of $Y|N=n$, and then evaluate the cdf and the density at a large number of grid-values at each $n$, then simply accumulate the sum directly (rather as one would for a KDE). The direct calculation only yields a lower bound to the true quantile, but if the negative binomial is not heavy tailed it should be quite rapid.
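A minimal sketch of that idea, again with the illustrative parameters used above (negative binomial with mu = 4 and theta = 2, gamma severities with shape 5 and rate 0.1); the sum over $n$ is simply truncated where the negative binomial mass is negligible:
# P(Y <= y) = P(N = 0) + sum_n P(N = n) * P(Gamma(5n, 0.1) <= y), for y >= 0
pcompound <- function(y, nmax = 200) {
  n <- 1:nmax
  dnbinom(0, size = 2, mu = 4) +
    sum(dnbinom(n, size = 2, mu = 4) * pgamma(y, shape = 5 * n, rate = 0.1))
}
# invert numerically for, say, the 99.5% quantile
uniroot(function(y) pcompound(y) - 0.995, c(0, 5000))$root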
References:
[1]: Luo, X. and Shevchenko, P.V. (2009),
"Computing Tails of Compound Distributions Using Direct Numerical Integration,"
Journal of Computational Finance, 13 (2), 73-111.
[arXiv preprint available here]
[2]: Embrechts, P., and Frei, M. (2009),
"Panjer recursion versus FFT for compound distributions,"
Mathematical Methods of Operations Research, 69:3 (July) pp 497-508.
[seems to be a pre-publication version here]
48,662 | Kolmogorov distribution | The function that is shown implements the CDF for one sided KS statistic
$$
D_n^{+} = \sup_{x}\{\hat{F}_n(x) - F(x)\},
$$
where $F(x)$ is the theoretical (continuous) CDF and $\hat{F}_n(x)$ is the empirical CDF of the sample of size $n$. So, $D_n^{+}$ has the CDF shown in the question:
$$
F_{D_n^{+}}(x) = 1-x\sum_{j=0}^{\lfloor n(1-x)\rfloor} {n\choose j}\left(\frac{j}{n}+x\right)^{j-1}\left(1-x-\frac{j}{n}\right)^{n-j}
$$
Source: Simard and L'Ecuyer (2011)
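A small R sketch of that formula (fine for modest $n$; for large $n$ the binomial coefficients overflow and a log-scale version would be needed):
pks_plus <- function(x, n) {
  if (x <= 0) return(0)
  if (x >= 1) return(1)
  j <- 0:floor(n * (1 - x))
  1 - x * sum(choose(n, j) * (j / n + x)^(j - 1) * (1 - x - j / n)^(n - j))
}
pks_plus(0.2, 10)   # P(D_10^+ <= 0.2)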
The two-sided KS statistic
$$
D_n=\sup_x|\hat{F}_n(x)-F(x)|
$$
doesn't have such a simple expression. It can be computed precisely using the Durbin matrix method - Marsaglia, Tsang and Wang, mentioned earlier, provide such an implementation, but it is computationally very expensive for large $n$ and it may also produce NaNs on some inputs (Simard and L'Ecuyer, 2011). Simard and L'Ecuyer give an implementation of the $D_n$ CDF that chooses different methods depending on the combination of $n$ and $x$, giving a precise and efficient result. They published C code, but not an R package. I am working on implementing their method in Fortran and on improving the efficiency of the Durbin matrix method (from Carvalho, 2015), and I will add an R interface.
If you are looking for the limiting distribution of $\sqrt{n}D_n$ as $n\to\infty$ you can use the series from Wikipedia -- it converges quite quickly. The Wikipedia article also gives Vrbik's correction to make that series work for moderate values of $n$.
48,663 | Kolmogorov distribution | The expression for the Kolmogorov-Smirnov CDF is provided in the wikipedia link:
http://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test#Kolmogorov_distribution
Kolmogorov distribution
The Kolmogorov distribution is the distribution of the random variable
$K=\sup_{t\in[0,1]}|B(t)|$
where $B(t)$ is the Brownian bridge. The cumulative distribution function of $K$ is given by
$\operatorname{Pr}(K\leq x)=1-2\sum_{k=1}^\infty (-1)^{k-1} e^{-2k^2 x^2}=\frac{\sqrt{2\pi}}{x}\sum_{k=1}^\infty e^{-(2k-1)^2\pi^2/(8x^2)}.$
Note that this distribution arises as an asymptotic result, detailed in the same link.
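For reference, a quick sketch of the first series, truncated after a fixed number of terms (far more than needed unless $x$ is very small):
pkolmogorov <- function(x, k_max = 100) {
  if (x <= 0) return(0)
  k <- 1:k_max
  1 - 2 * sum((-1)^(k - 1) * exp(-2 * k^2 * x^2))
}
pkolmogorov(1.358)   # about 0.95, the familiar asymptotic 5% critical value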
48,664 | Is residuals autocorrelation always a problem? | Correlated residuals in time series analysis may imply far worse than low efficiency: if the structure of autocorrelation implies integrated or near-integrated data, then any inferences about levels, means, variances, etc. may be spurious (with unknown direction of bias) because the population mean is undefined and the population variance is infinite (so, for example, the finite values $\bar{x}$ and $s_{x}$, and quantities derived from these are always false estimates of the corresponding population statistics).
That's not a problem that can be resolved by increasing sample size to offset inefficiency.
If autocorrelated errors obtain in OLS, I would say that the same issues may be present (it depends on the data generating process). Again: not an issue of efficiency.
The critical caveat is whether the ordering of your data is meaningful: if the order has meaning, in that it relates to the data generating process, then you're in trouble.
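As a minimal illustration of how badly inference can go wrong with integrated data, regressing two independent random walks on each other typically produces wildly 'significant' t-statistics:
set.seed(1)
x <- cumsum(rnorm(200))   # integrated (random walk) series
y <- cumsum(rnorm(200))   # independent of x by construction
summary(lm(y ~ x))$coefficients   # the slope usually looks highly 'significant' anyway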
48,665 | Is residuals autocorrelation always a problem? | 1) The time series auto-correlation you refer to is the correlation between a time series and the time-shifted series; "time" is observed when the data are collected. In your example, auto-correlation by shifting car maker or model is not very meaningful. For new cars, shifting year (comparing year-over-year sales of the same type of car) makes sense, but for used cars it would be less meaningful, since the random usage the car has been exposed to would erase correlations if there were any. I think you are fine going ahead applying the OLS technology.
2) You would be fitting an unbiased linear estimator, a special case of an M-estimator. If your objective is to build a predictive model (as opposed to testing hypotheses expressible in terms of model parameters), then OLS is appropriate. To cover for the possibility of unmet model assumptions, use a training sample to build your model and a validation sample to assess its performance on out-of-sample cases.
48,666 | Logrank test for trend (proportional hazards) | Try comp from the survMisc package. It extends the survival package and computes the statistic and p-value for the logrank test, as well as for the Gehan-Breslow, Tarone-Ware, Peto-Peto and Fleming-Harrington tests, and tests for trend (for all of the tests mentioned above). The example taken from the manual is the following:
library(survival)
library(survMisc)
data(larynx, package="KMsurv")
s4 <- survfit(Surv(time, delta) ~ stage, data=larynx)
comp(s4)
comp(s4)$tests$trendTests # outputs only the results for trend tests
If you compare the results with
survdiff(Surv(time, delta) ~ stage, data=larynx)
you get the same result for the 'traditional' logrank test (not the trend test).
48,667 | Logrank test for trend (proportional hazards) | Your question is not very clear, so I am not sure if this is what you are looking for. To test the proportional hazards assumption you can use the Grambsch-Therneau test on the Schoenfeld residuals of the proportional hazards model. This essentially tests the slope of the (scaled) residuals as a function of follow-up time.
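If it is the proportional-hazards assumption you are after, a minimal sketch with the survival package (using the larynx data from the previous answer purely for illustration):
library(survival)
data(larynx, package = "KMsurv")
fit <- coxph(Surv(time, delta) ~ factor(stage), data = larynx)
cox.zph(fit)   # Grambsch-Therneau test on scaled Schoenfeld residuals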
48,668 | Anderson Darling exponential distribution | The same considerations apply as to the distribution of the Kolmogorov–Smirnov test statistic discussed here. The Anderson–Darling test statistic (for a given sample size) has a distribution that (1) doesn't depend on the null-hypothesis distribution when all parameters are known, & (2) depends only on the functional form of the null-hypothesis distribution when location & scale parameters are estimated. I don't know of an R implementation of the A–D test specifically for the exponential distribution with estimated rate parameter, but you could quickly make a function to calculate the test statistic by adapting the ad.test function from the nortest package: change the distribution function from the best-fit normal, pnorm((x - mean(x))/sd(x)), to the best-fit exponential, pexp(x/mean(x)). Then get critical values for any desired significance level & sample size by simulation.
As to the "best" test, note that different tests are more powerful against different kinds of departure from the null-hypothesis distribution. If you have a quite specific alternative in mind, e.g. a Weibull distribution with shape parameter greater than one, a likelihood ratio test will be more powerful than a general-purpose goodness-of-fit test. For more vaguely specified alternatives it might be helpful to compare the power of various tests against a rogues gallery, following the approach of Stephens (1974), "EDF statistics for goodness of fit and some comparisons", JASA, 69, 347.
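A rough sketch of that recipe: the statistic below follows the usual Anderson–Darling formula with the best-fit exponential plugged in, and critical values come from simulation (the true rate is irrelevant under the null, because the statistic only uses x/mean(x)):
ad_exp_stat <- function(x) {
  x <- sort(x)
  n <- length(x)
  p <- pexp(x / mean(x))   # best-fit exponential CDF
  i <- 1:n
  -n - mean((2 * i - 1) * (log(p) + log(1 - rev(p))))
}
# simulated critical value for a given sample size and level
ad_crit <- function(n, nsim = 10000, level = 0.95) {
  quantile(replicate(nsim, ad_exp_stat(rexp(n))), level)
}
ad_crit(50)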
48,669 | Estimating the ratio of cell means in ANOVA under lognormal assumption | First off, I find it hard to understand why you preferred a one-way ANOVA instead of a t-test, since you did not look for interactions.
As a second remark, I would check the assumptions of ANOVA: it might be that the variances of the two samples differ significantly.
Finally, in a linear regression setting with a logged dependent variable, your problem might be due to heteroscedastic residuals, as in the following simulated example performed in Stata 13.1/SE:
The slight difference between the two ratios of the arithmetic means is due to residual heteroscedasticity.
As a sidelight, the ratio of the geometric means is: exp(1.725205)/exp(1.352162)=1.4521468.
48,670 | Estimating the ratio of cell means in ANOVA under lognormal assumption | $\log Y = b_0 + b_1 X$
When you omit the error term, you lead yourself straight into difficulty that is otherwise easily avoided. Clearly the equation you wrote is false, otherwise you wouldn't need to do estimation. Two $y$ values would be sufficient to estimate two parameters exactly (two equations in two unknowns). You mean something like
$\log Y = b_0 + b_1 X+\varepsilon$
where $\varepsilon\sim N(0,\sigma^2I)$ ... assuming that your $x$-variable is binary. (Not sure why you'd need to write it in this form, though, since there's only two groups.)
However, that gives the ratio of geometric cell means rather than arithmetic cell means.
Under the assumption of constant $\sigma^2$ parameters, the ratio of population means will be identical to the ratio of population medians (or geometric means, since both medians and GMs are $\exp(\mu)$ in the lognormal), since $e^{\mu_1+\sigma^2/2}/e^{\mu_2+\sigma^2/2}=e^{\mu_1}/e^{\mu_2}=e^{\mu_1-\mu_2}$.
As such, you can simply work directly on the log-scale and work with differences of means of logs, and when you exponentiate the result, it's still estimating the ratio of means - in the sense, for example, that an interval can be transformed. (If you want an unbiased estimator, you may need to take a little more effort.)
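A small sketch of that in practice (dat, y and group are hypothetical names; the coefficient label depends on your factor coding):
fit <- lm(log(y) ~ group, data = dat)
exp(coef(fit)[2])        # estimated ratio of means (second group vs first)
exp(confint(fit)[2, ])   # interval transformed to the ratio scale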
48,671 | Estimating the ratio of cell means in ANOVA under lognormal assumption | The exponentiated arithmetic mean of logged values is the geometric mean of the original values. So when you model $\log Y$ and exponentiate, you get back the geometric means.
In other words $E[\log Y | X]$ is the arithmetic mean of $\log Y$, and exponentiating that gives you the geometric mean of $Y$. This carries over to interpretation of coefficients.
However, when using a log link function in a GLM, you are modeling $\log (E[Y|X])$, and exponentiating gives you the arithmetic mean of $Y$
As for practical application via gamlss or GLIMMIX, make sure you're supplying the correct arguments to model exactly what you want.
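To make the contrast concrete, a minimal sketch of the log-link route, which models $\log (E[Y|X])$ and therefore gives ratios of arithmetic means when exponentiated (dat, y and group are hypothetical names, and the Gamma family is just one common choice for positive, skewed responses):
fit_glm <- glm(y ~ group, family = Gamma(link = "log"), data = dat)
exp(coef(fit_glm))   # exponentiated coefficients are ratios of arithmetic means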
48,672 | From joint cdf to joint pdf | A joint distribution has domain $(-\infty, \infty) \times (-\infty, \infty)$. If we partition each component of the cartesian product in two by selecting some value $x$ and some value $y$, then we get $4$ subsets,
$$(-\infty, x] \times (-\infty, y],\;\;(-\infty, x] \times [y,\infty),\\
[x, \infty) \times (-\infty, y],\;\;[x, \infty) \times [y,\infty)$$
made up of intersections of two events,
$$A = \{X\le x\}, \;\; B = \{Y\le y\}$$
and their corresponding complements.
Then (as the OP noted in a comment),
$$\Pr(X\ge x, Y\ge y) = P(A^c\cap B^c) = 1 - P(A\cup B)$$
$$=1-\big[P(A) + P(B) - P(A\cap B)\big]$$
So it appears that by taking the cross-partial derivative of $\Pr(X\ge x, Y\ge y)$ we should again get the joint density. Let's verify that:
$$\Pr(X\ge x, Y\ge y) = \int_x^{\infty}\int_y^{\infty}f(s,t)dtds$$
$$\frac {\partial \Pr(X\ge x, Y\ge y)}{\partial y} = \int_x^{\infty} \left(\frac{\partial}{\partial y}\int_y^{\infty}f(s,t)dt\right)ds $$
$$=\int_x^{\infty}-f(s,y) ds$$
$$\frac {\partial^2 \Pr(X\ge x, Y\ge y)}{\partial y\partial x} = \frac {\partial }{\partial x} \int_x^{\infty}-f(s,y) ds = -\left(-f(x,y)\right) = f(x,y)$$
The above also means that we can obtain the joint pdf from any of the four joint events indicated by the breakdown of the support -but in the other two cases, we should multiply by $-1$.
$$\begin{align} f(x,y) =& \frac {\partial^2 \Pr(X\le x, Y\le y)}{\partial y\partial x}\\
=&\frac {\partial^2 \Pr(X\ge x, Y\ge y)}{\partial y\partial x}\\
=&-\frac {\partial^2 \Pr(X\le x, Y\ge y)}{\partial y\partial x}\\
=&-\frac {\partial^2 \Pr(X\ge x, Y\le y)}{\partial y\partial x}
\end{align}$$
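A quick numerical sanity check of the second identity, in a simple case where the joint upper-tail probability has a closed form (independent standard normals):
S <- function(x, y) (1 - pnorm(x)) * (1 - pnorm(y))   # Pr(X >= x, Y >= y)
x <- 0.7; y <- -0.3; h <- 1e-4
cross <- (S(x + h, y + h) - S(x + h, y - h) -
          S(x - h, y + h) + S(x - h, y - h)) / (4 * h^2)
c(numerical = cross, density = dnorm(x) * dnorm(y))   # these agree closely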
48,673 | How to avoid NaN in using ReLU + Cross-Entropy? [duplicate] | The recommended thing to do when using ReLUs is to clip the gradient, if the norm is above a certain threshold, during the SGD update (suggested by Mikolov, see http://arxiv.org/pdf/1211.5063.pdf)
This requires another hyperparameter, the threshold. The suggestion from the referenced paper is to sample some gradients to get an idea of the (non-exploding) norm and use the sample average. From my limited experience, it is worth playing around with this parameter a bit, even up to half the sample average.
Pseudocode looks like,
if norm(grad) > threshold:
grad = grad * threshold/norm(grad)
48,674 | The 'best' model selected with AICc have lower $R^2$ -square than the full/global model | Is your goal model parsimony or the predictive power of the model? If parsimony, then use AIC, if predictive power then $R^2$. Usually the answer is similar, but if you are comparing models with very similar $R^2$ or a number of low quality predictors the answers can be different. This is why in regular regression we tend to look at adjusted $R^2$ rather than just $R^2$, that is, because the adjusted value penalizes $R^2$ to adjust for the variance one might expect to be explained by chance if a predictor was not really effective at all. As the author says in the blog post "although I should note that [$R^2$ is] a poor tool for model selection, since it almost always favors the most complex models".
P.S. If you are interested in the fixed effects relative to the overall variance (regardless of nesting factor), then you probably want to be looking at the marginal $R^2$.
48,675 | The 'best' model selected with AICc have lower $R^2$ -square than the full/global model | R^2 tells you how much of the variance a model explains. AIC is based on the KL distance and compares models relative to one another. For instance, if you wanted to compare using R^2 you'd want to know if the change in R^2 is significant. If not, take the simpler model for parsimony's sake; if so, take the more complex model, assuming your simpler model is nested in the more complex model.
Parsimony is important when considering the predictive power because if your model is too complex it may memorize your current set of data and seem to perform very well. However, as a result, it will not be able to handle differences in a new set of data, therefore over-estimating the predictive power.
So my response is: don't compare AIC to R^2; compare AIC with the change in R^2.
48,676 | Matrix Factorization Recommendation Systems with Only "Like" Ratings | This problem is usually called implicit feedback. The typical solution is similar to word2vec noise-contrastive estimation:
predict likes, with log-loss,
use your set of actual likes (p=1) and randomly generate set of potential non-likes (p=0).
Usually you want to generate this non-likes set from a similar distribution, i.e. the same distribution of users and of pages (or whatever they like). The easiest way to do so is to take two random entries and take the user from one and the page from the other.
See:
word2vec: negative sampling (in layman term)?
Can someone please make me understand NCE and negative sampling?
See Improving Pairwise Learning for Item Recommendation from Implicit Feedback by Steffen Rendle and Christoph Freudenthaler (2014). The former authored the original paper Factorization Machines (2010), which I highly recommend reading.
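A tiny sketch of that sampling step, where likes is a hypothetical data frame of observed user/item pairs; permuting users against items implements the 'take two random entries' idea:
pos <- data.frame(user = likes$user, item = likes$item, y = 1)
neg <- data.frame(user = sample(likes$user), item = likes$item, y = 0)
train <- rbind(pos, neg)   # a few sampled 'negatives' may be true likes; these are usually ignored or filtered out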
48,677 | Matrix Factorization Recommendation Systems with Only "Like" Ratings | Yes, this is known as "unary" data (or often "implicit" data if you're only using clicks or impressions). The most common matrix factorization technique used is probably alternating least squares, outlined in this paper (PDF): Hu, Koren, and Volinsky. There are implementations in many common machine learning software packages such as Mahout, Myrrix, and GraphLab.
48,678 | Which of these points in this plot has the highest leverage and why? | The leverage is $h_{ii}=\frac{1}{n}+\frac{(x_i-\bar{x})^2}{\sum_j (x_j-\bar{x})^2}\,$.
The term $\frac{1}{n}$ and the denominator of the second term, $\sum_j (x_j-\bar{x})^2$, are the same for every $i$, so the point with the largest $(x_i-\bar{x})^2$ has the highest leverage.
This means that the point furthest from the mean has the highest leverage.
In the diagram, point 1 is the furthest from $\bar x$ in the x-direction, so it will have the largest leverage of the three points.
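For what it's worth, the same quantities are available directly in R (x and y here are hypothetical):
fit <- lm(y ~ x)
hatvalues(fit)   # equals 1/n + (x - mean(x))^2 / sum((x - mean(x))^2) in simple regression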
48,679 | What transformations preserve the von Mises distribution? | Obviously $\mu$ is a location parameter, meaning that translations of the variable preserve the family.
Focus now on the shape parameter $\kappa$. Consider any family $\Omega=\{F_\theta|\theta\in\Theta\}$ of continuous distributions. By virtue of this continuity, whenever $X\sim F_\theta$ and $0\le q\le 1$,
$$\Pr(F_\theta(X)\le q) = q.$$
The transformation
$$G_{\theta^\prime,\theta}(X) = F_{\theta^\prime}^{-1}(F_\theta(X))$$
maps any such random variable into $Y = G_{\theta^\prime,\theta}(X)$ and
$$\Pr(Y \le y) = \Pr(F_{\theta^\prime}^{-1}(F_\theta(X)) \le y) = \Pr(F_\theta(X) \le F_{\theta^\prime}(y))=F_{\theta^\prime}(y)$$
shows that $Y \sim F_{\theta^\prime}.$
The question, then, is whether the family $\{G_{\theta^\prime,\theta}| \theta\in\Theta, \theta^\prime\in\Theta\}$ is closed under composition. Suspecting that it should not be for the shape family of the Von Mises distribution (with $\mu=0$ and $\theta=\kappa$), I numerically searched for a solution $(\alpha,\beta)$ to the equation
$$ G_{\alpha,\beta} = G_{2,1} \circ G_{1/2,1}$$
by minimizing the $L^2$ norm between the two sides. The difference between the best solution (with $\alpha=2.96234\ldots$ and $\beta = 2.48773\ldots$) and the right hand side is small but so clear that I doubt there was an error in the calculation.
Consequently the answer to the question--as understood in the sense described here--appears to be that only the translations (modulo $2\pi$) and, of course, the reflections $x\to a-x \mod 2\pi$ preserve the entire family of Von Mises distributions.
48,680 | How to distance and to MDS-plot objects according their complex shape | This may be only a partial answer because I don't think the plot that you expect is really what is in the data, especially the "parallelity and continuity" of the intermediate signals. I will speculate on reasons for that below.
But I think I was able to get to what you look for in terms of the four basal signals A1, A5, E1, E5. Namely that they lie on the edge of the embedded manifold, that opposite signals lie more or less diametrical to each other (that is A1, E5 and E1, A5 respectively) and that neighbouring signals are preserved (so A1, A5 and E1, E5 respectively).
Generally, I think standard (i.e. with an input weight matrix consisting of only 1's) MDS doesn't really give you what you want because you are actually looking for nonlinear dimension reduction that has some localization feature, namely that larger "distances" should not be preserved but local distances should.
I calculated the t-SNE embedding for your kappa distance. Note that I use a high perplexity to force a shaping that looks like the one you aim for. It can be obtained by
set.seed(1)
library(tsne)
tsne.coor1 <- tsne(res,perplexity=25)
rownames(tsne.coor1) <- c("A1","A2","A3","A4","A5",
"B1","B2","B3","B4","B5",
"C1","C2","C3","C4","C5",
"D1","D2","D3","D4","D5",
"E1","E2","E3","E4","E5")
plot(tsne.coor1[,1], tsne.coor1[,2], type="n", xlab="", ylab="")
text(tsne.coor1[,1], tsne.coor1[,2],
labels=row.names(tsne.coor1), cex=0.8)
abline(h=0,v=0,col="gray75")
and looks like this
The basic signals are labeled in red. As you can see, the similarity structure as captured by your preprocessing and distance measure does not suggest that the intermediate signals are actually at all as you expect them to be. For example, the path from A1 to E1 has intermediate signals E4, D5, A3 and not B1 through D1. But this is what your data tell you! So, the similarity between the signals that are within the convex hull of the embedded manifold suggests that the clear-cut pattern is not preserved.
There are two obvious explanations:
Some information gets lost in mapping to low-D. The information that gets lost might actually be the information you were looking for.
The distance measure might not capture what you actually care for. I agree with @ttnphns on this one.
[Edit] (thanks @ttnphns!)
To investigate the last point, I tried other measures of similarity for binary matrices. For this high perplexity it led to no discernibly different results in the shape, but it did in the arrangements (I used a gravity-model similarity and an asymmetric binary similarity, the one returned by dist(x, method="binary")). For lower perplexity the effect of the kappa distance on the visualisation is small, but for the other distances it is not. To illustrate, for the asymmetric distances the results are:
res <- dist(M,method="binary")
set.seed(1)
tsne.coor2 <- tsne(res,perplexity=25)
tsne.coor3 <- tsne(res,perplexity=3)
rownames(tsne.coor2)<-rownames(tsne.coor3) <- c("A1","A2","A3","A4","A5",
"B1","B2","B3","B4","B5",
"C1","C2","C3","C4","C5",
"D1","D2","D3","D4","D5",
"E1","E2","E3","E4","E5")
par(mfrow=c(1,2))
plot(tsne.coor2[,1], tsne.coor2[,2], type="n", xlab="", ylab="")
text(tsne.coor2[,1], tsne.coor2[,2],
labels=row.names(tsne.coor3), cex=0.8)
abline(h=0,v=0,col="gray75")
plot(tsne.coor3[,1], tsne.coor3[,2], type="n", xlab="", ylab="")
text(tsne.coor3[,1], tsne.coor3[,2],
labels=row.names(tsne.coor2), cex=0.8)
abline(h=0,v=0,col="gray75")
and here are the results:
So, for high perplexity the shape of the projection of the signals is rather similar for both distance measures. For lower perplexity, the results change. Note that while it looks different from what you originally intended, with the asymmetric binary distance the transition from the basis signals through the other states seems to be better preserved in the low-perplexity plot, particularly column-wise! This makes speculation 2 - that kappa is not a very suitable distance measure - more likely. You may try the distances dist(x, method="binary") or cluster::daisy(x, metric="gower") (which is the dice coefficient, I think).
[End edit]
Note the t-SNE has random initializations, so it might look a bit different at your end --- not sure whether set.seed has an effect in the R implementation. The random initialization is actually something that might get you closer to what you need anyway. As the authors put it:
In contrast to, e.g., PCA, t-SNE has a non-convex objective function. The objective function is minimized using a gradient descent optimization that is initiated randomly. As a result, it is possible that different runs give you different solutions. Notice that it is perfectly fine to run t-SNE a number of times (with the same data and parameters), and to select the visualization with the lowest value of the objective function as your final visualization.
You may play around with the above mentioned distances and some of the parameters, especially perplexity, to perhaps get you closer to what you need into one direction or the other. Hope this helps as a first start. | How to distance and to MDS-plot objects according their complex shape | This may be only a partial answer because I don't think the plot that you expect is really what is in the data, especially the "parallelity and continuity" of the intermediate signals. I will speculat | How to distance and to MDS-plot objects according their complex shape
This may be only a partial answer because I don't think the plot that you expect is really what is in the data, especially the "parallelity and continuity" of the intermediate signals. I will speculate on reasons for that below.
But I think I was able to get to what you look for in terms of the four basal signals A1, A5, E1, E5. Namely that they lie on the edge of the embedded manifold, that opposite signals lie more or less diametrical to each other (that is A1, E5 and E1, A5 respectively) and that neighbouring signals are preserved (so A1, A5 and E1, E5 respectively).
Generally, I think standard (i.e. with an input weight matrix consiting of only 1's) MDS doesn't really give you what you want because you are actually looking for nonlinear dimension reduction that has some localization feature, namely that larger "distances" should not be preserved but local distances should. There are a number of algorithms that do that. A rather popular one that is often used for cases like yours is called t-SNE for t-Distributed Stochastic Neighbor Embedding. On the linked homepage you will find quite some information.
I calculated the t-SNE embedding for your kappa distance. Note that I use a high perplexity to force a shaping that looks like the one you aim for. It can be obtained by
set.seed(1)
library(tsne)
tsne.coor1 <- tsne(res,perplexity=25)
rownames(tsne.coor1) <- c("A1","A2","A3","A4","A5",
"B1","B2","B3","B4","B5",
"C1","C2","C3","C4","C5",
"D1","D2","D3","D4","D5",
"E1","E2","E3","E4","E5")
plot(tsne.coor1[,1], tsne.coor1[,2], type="n", xlab="", ylab="")
text(tsne.coor1[,1], tsne.coor1[,2],
labels=row.names(tsne.coor1), cex=0.8)
abline(h=0,v=0,col="gray75")
and looks like this
The basic signals are labeled in red. As you can see, the similarity structure as captured by your preprocessing and distance measure does not suggest that the intermediate signals are actually at all as you expect them to be. For example, the path from A1 to E1 has intermediate signals E4, D5, A3 and not B1 through D1. But this is what your data tell you! So, the similarity between the signals that are within the convex hull of the embedded manifold suggests that the clear-cut pattern is not preserved.
There are two obvious explanations:
Some information gets lost in mapping to low-D. The information that gets lost might actually be the information you were looking for.
The distance measure might not capture what you actually care for. I agree with @ttnphns on this one.
[\Edit] (thanks @ttnphns!)
To investigate the last point, I tried other measures of similarity for binary matrices. For this high perplexity it led to no discernibly different results in the shape, but it did so in the arrangements (I used a gravity model similarity and an asymmetric binary similarity, the one returned by dist(x,method="binary")). For lower perplexity the effect on the visualisation is small for the kappa distance, but for the other distances it is not. To illustrate, for the asymmetric binary distance the results are:
res <- dist(M,method="binary")
set.seed(1)
tsne.coor2 <- tsne(res,perplexity=25)
tsne.coor3 <- tsne(res,perplexity=3)
rownames(tsne.coor2)<-rownames(tsne.coor3) <- c("A1","A2","A3","A4","A5",
"B1","B2","B3","B4","B5",
"C1","C2","C3","C4","C5",
"D1","D2","D3","D4","D5",
"E1","E2","E3","E4","E5")
par(mfrow=c(1,2))
plot(tsne.coor2[,1], tsne.coor2[,2], type="n", xlab="", ylab="")
text(tsne.coor2[,1], tsne.coor2[,2],
labels=row.names(tsne.coor3), cex=0.8)
abline(h=0,v=0,col="gray75")
plot(tsne.coor3[,1], tsne.coor3[,2], type="n", xlab="", ylab="")
text(tsne.coor3[,1], tsne.coor3[,2],
labels=row.names(tsne.coor2), cex=0.8)
abline(h=0,v=0,col="gray75")
and here's the results
So, for high perplexity the shape of the projection of the signals is rather similar for both distance measures. For lower perplexity, the results change. Note that while it looks different from what you originally intended when using the asymmetric binary distance, the transition from the basal signals through the other states seems to be better preserved in the low-perplexity plot, particularly column-wise! This makes speculation 2, that kappa may not be a very suitable distance measure, more likely. You may try the distances dist(x, method="binary") or cluster::daisy(x, metric="gower") (which is the dice coefficient, I think).
[\end Edit]
Note the t-SNE has random initializations, so it might look a bit different at your end --- not sure whether set.seed has an effect in the R implementation. The random initialization is actually something that might get you closer to what you need anyway. As the authors put it:
In contrast to, e.g., PCA, t-SNE has a non-convex objective function. The objective function is minimized using a gradient descent optimization that is initiated randomly. As a result, it is possible that different runs give you different solutions. Notice that it is perfectly fine to run t-SNE a number of times (with the same data and parameters), and to select the visualization with the lowest value of the objective function as your final visualization.
You may play around with the above mentioned distances and some of the parameters, especially perplexity, to perhaps get you closer to what you need into one direction or the other. Hope this helps as a first start. | How to distance and to MDS-plot objects according their complex shape
This may be only a partial answer because I don't think the plot that you expect is really what is in the data, especially the "parallelity and continuity" of the intermediate signals. I will speculat |
48,681 | Can AUC decrease with additional variables? | The effect of uninformative features depends largely on your modeling strategy. For some approaches they are irrelevant while for others they can dramatically decrease overall performance.
Your intuition that using more features should necessarily yield a better model is wrong. | Can AUC decrease with additional variables? | The effect of uninformative features depends largely on your modeling strategy. For some approaches they are irrelevant while for others they can dramatically decrease overall performance.
Your intuit | Can AUC decrease with additional variables?
The effect of uninformative features depends largely on your modeling strategy. For some approaches they are irrelevant while for others they can dramatically decrease overall performance.
Your intuition that using more features should necessarily yield a better model is wrong. | Can AUC decrease with additional variables?
The effect of uninformative features depends largely on your modeling strategy. For some approaches they are irrelevant while for others they can dramatically decrease overall performance.
Your intuit |
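A minimal R sketch (not part of the original answers; the data and model are made up) of the point above, showing that adding pure-noise predictors to a plain logistic regression will often lower the holdout AUC:
set.seed(1)
n <- 400
x1 <- rnorm(n)
y <- rbinom(n, 1, plogis(x1))                 # only x1 carries signal
noise <- matrix(rnorm(n * 20), n, 20)         # 20 uninformative predictors
colnames(noise) <- paste0("z", 1:20)
dat <- data.frame(y = y, x1 = x1, noise)
train <- 1:200; test <- 201:400
auc <- function(score, label) {               # rank-based AUC
  r <- rank(score); n1 <- sum(label == 1); n0 <- sum(label == 0)
  (sum(r[label == 1]) - n1 * (n1 + 1) / 2) / (n1 * n0)
}
m_small <- glm(y ~ x1, family = binomial, data = dat[train, ])
m_big   <- glm(y ~ .,  family = binomial, data = dat[train, ])
auc(predict(m_small, dat[test, ]), dat$y[test])   # AUC with the informative predictor only
auc(predict(m_big,   dat[test, ]), dat$y[test])   # often (not always) lower once the noise is added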
48,682 | Can AUC decrease with additional variables? | 4 years late but I just had the same experience now.
For logistic regression, the model should be smart enough to disregard useless variables. There is no constraint preventing the coefficients of these variables from being 0.
It is important to remember how a logistic regression works. The model optimises an overall error measure (the log-loss), not AUC directly. You might want to check whether your overall error (log-loss or MSE) improved when your AUC deteriorated. In my case my MSE did improve despite my AUC getting worse.
I did notice that there is sometimes a very small increase in my MSE with more features. I think it might be down to one of the model's default parameters, maybe the maximum number of iterations or the tolerance criterion for convergence. BTW, I am using logistic regression from sklearn. | Can AUC decrease with additional variables? | 4 years late but I just had the same experience now.
For logistic regression, the model should be smart enough to disregard useless variables. There is no constraint preventing the coefficients of the | Can AUC decrease with additional variables?
4 years late but I just had the same experience now.
For logistic regression, the model should be smart enough to disregard useless variables. There is no constraint preventing the coefficients of these variables from being 0.
It is important to remember how a logistic regression works. The model optimises an overall error measure (the log-loss), not AUC directly. You might want to check whether your overall error (log-loss or MSE) improved when your AUC deteriorated. In my case my MSE did improve despite my AUC getting worse.
I did notice that there is sometimes a very small increase in my MSE with more features. I think it might be down to one of the model's default parameters, maybe the maximum number of iterations or the tolerance criterion for convergence. BTW, I am using logistic regression from sklearn. | Can AUC decrease with additional variables?
4 years late but I just had the same experience now.
For logistic regression, the model should be smart enough to disregard useless variables. There is no constraint preventing the coefficients of the |
48,683 | Can AUC decrease with additional variables? | Check whether you have missing values in the new variables. Logistic regression rejects cases with missing data and fits the model only on complete cases. You must make sure that you are comparing discrimination in the same cohorts. | Can AUC decrease with additional variables? | Check if you have not missings values in the new variables. Logistic regression reject the cases with missing data, and only adjust the model for full cases. You must sure that you are comparing the d | Can AUC decrease with additional variables?
Check whether you have missing values in the new variables. Logistic regression rejects cases with missing data and fits the model only on complete cases. You must make sure that you are comparing discrimination in the same cohorts. | Can AUC decrease with additional variables?
Check if you have not missings values in the new variables. Logistic regression reject the cases with missing data, and only adjust the model for full cases. You must sure that you are comparing the d |
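A small R sketch of the check suggested in the answer above (dat, x1, x2 are hypothetical names): glm() silently drops rows with missing values, so the richer model may be fit on a different cohort.
m1 <- glm(y ~ x1, family = binomial, data = dat)
m2 <- glm(y ~ x1 + x2, family = binomial, data = dat)  # x2 contains NAs
nobs(m1); nobs(m2)                                     # different n => different cohorts
sum(complete.cases(dat[, c("y", "x1", "x2")]))         # cases actually used by m2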
48,684 | Longitudinal item response theory models in R | As a precursor, the IRT approach to this problem is very demanding computationally due to the higher dimensionality. It may be worthwhile to look into structural equation modeling (SEM) alternatives using the WLSMV estimator for ordinal data since I imagine less issues will exist. Plus, including external covariates is much easier within that framework. Both approaches I describe here are also possible in SEM.
There are two ways that I know of in which you can estimate unidimensional longitudinal IRT models that are not Rasch in nature. The first approach requires a unique latent factor for each time block and a specific residual variation term for each item. A different approach, similar to what one would find in the SEM literature, is via a latent growth curve model whereby only a fixed number of factors are estimated (three if the relationship over time is believed to be linear). Fixed loadings are used in this approach, so computationally it may be much more stable due to the reduced number of estimated parameters; I would therefore tend to prefer the growth curve model for both the smaller dimensionality and the smaller number of estimated parameters.
The idea for both approaches is to set up latent time factors indicating how person-level $\theta$ values change over each test administration, and constrain the influence of their loadings across time as well so that their hyper parameters can be estimated (i.e., the latent mean and covariances). Item constraints must also be imposed across time so that the items remain invariant and the person differences are captured only in the hyper parameters. Since this approach can require a huge number of integration dimensions, you'll need to use something like the dimensional reduction algorithm which is available in mirt under the bfactor() function.
Instead of going through a worked example here, which would take a lot of code, I'll simply point to worked versions of these analyses. A word of warning though: these are very computationally demanding and may take more than an hour to converge on your computer, since you have 4 dimensions of integration in the first case and 3 dimensions in the second. Also, if you don't have much RAM you could experience issues when increasing the number of quadpts.
Data simulation script: https://github.com/philchalmers/mirt/blob/gh-pages/data-scripts/Longitudinal-IRT.R
Analysis output: http://philchalmers.github.io/mirt/html/Longitudinal-IRT.html
In the first example, if you save the factor scores by using fscores() you'll obtain estimates for each time point regarding how individual $\theta$ values are changing. In the second example, using the linear growth curve approach, the first column of the factor scores will represent the initial $\theta$ estimates while the second column will indicate the slope/change occurring on average over time. In the example, I set up a constant mean change of .5, so the values in fscores() should all be around 0.5 for each individual. Both analyses give roughly the same conclusions but are somewhat different approaches to the problem. However, if you are familiar with longitudinal models in SEM then these should be fairly natural to interpret. | Longitudinal item response theory models in R | As a precursor, the IRT approach to this problem is very demanding computationally due to the higher dimensionality. It may be worthwhile to look into structural equation modeling (SEM) alternatives u | Longitudinal item response theory models in R
As a precursor, the IRT approach to this problem is very demanding computationally due to the higher dimensionality. It may be worthwhile to look into structural equation modeling (SEM) alternatives using the WLSMV estimator for ordinal data since I imagine less issues will exist. Plus, including external covariates is much easier within that framework. Both approaches I describe here are also possible in SEM.
There are two ways that I know of in which you can estimate unidimensional longitudinal IRT models that are not Rasch in nature. The first approach requires a unique latent factor for each time block and a specific residual variation term for each item. A different approach, similar to what one would find in the SEM literature, is via a latent growth curve model whereby only a fixed number of factors are estimated (three if the relationship over time is believed to be linear). Fixed loadings are used in this approach, so computationally it may be much more stable due to the reduced number of estimated parameters; I would therefore tend to prefer the growth curve model for both the smaller dimensionality and the smaller number of estimated parameters.
The idea for both approaches is to set up latent time factors indicating how person-level $\theta$ values change over each test administration, and constrain the influence of their loadings across time as well so that their hyper parameters can be estimated (i.e., the latent mean and covariances). Item constraints must also be imposed across time so that the items remain invariant and the person differences are captured only in the hyper parameters. Since this approach can require a huge number of integration dimensions, you'll need to use something like the dimensional reduction algorithm which is available in mirt under the bfactor() function.
Instead of going through a worked example here, which would take a lot of code, I'll simply point to worked versions of these analyses. A word of warning though: these are very computationally demanding and may take more than an hour to converge on your computer, since you have 4 dimensions of integration in the first case and 3 dimensions in the second. Also, if you don't have much RAM you could experience issues when increasing the number of quadpts.
Data simulation script: https://github.com/philchalmers/mirt/blob/gh-pages/data-scripts/Longitudinal-IRT.R
Analysis output: http://philchalmers.github.io/mirt/html/Longitudinal-IRT.html
In the first example, if you save the factor scores by using fscores() you'll obtain estimates for each time point regarding how individual $\theta$ values are changing. In the second example, using the linear growth curve approach, the first column of the factor scores will represent the initial $\theta$ estimates while the second column will indicate the slope/change occurring on average over time. In the example, I set up a constant mean change of .5, so the values in fscores() should all be around 0.5 for each individual. Both analyses give roughly the same conclusions but are somewhat different approaches to the problem. However, if you are familiar with longitudinal models in SEM then these should be fairly natural to interpret. | Longitudinal item response theory models in R
As a precursor, the IRT approach to this problem is very demanding computationally due to the higher dimensionality. It may be worthwhile to look into structural equation modeling (SEM) alternatives u |
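For orientation only, a rough mirt sketch of the first (time-factor) approach; the item ranges are made up and the across-time equality constraints that make the model properly longitudinal are omitted here (they are handled in the linked scripts), so treat this as a shape rather than a recipe.
library(mirt)
spec <- mirt.model('
  Time1 = 1-10
  Time2 = 11-20
  COV = Time1*Time2
  MEAN = Time2')
fit <- mirt(dat, spec, itemtype = "graded")  # dat: wide data, the same items repeated over time
fscores(fit)                                 # person estimates for each time factor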
48,685 | Longitudinal item response theory models in R | In the IRT literature for complicated IRT models (multiple groups, longitudinal/repeated measures, multidimensional) the recommended framework is Bayesian, because of relative easiness of estimation.
I have had good experience using R package "rstan", which implements a flavor of Hamiltonian Monte Carlo. I had a data set with almost 3000 subjects measured at 6 time points, had 5 dimensions and 3 groups, and rstan worked really well. I was fitting a GRM model to ordinal test with 30 items. Check out the stan user group for example code. | Longitudinal item response theory models in R | In the IRT literature for complicated IRT models (multiple groups, longitudinal/repeated measures, multidimensional) the recommended framework is Bayesian, because of relative easiness of estimation. | Longitudinal item response theory models in R
In the IRT literature for complicated IRT models (multiple groups, longitudinal/repeated measures, multidimensional) the recommended framework is Bayesian, because of relative easiness of estimation.
I have had good experience using R package "rstan", which implements a flavor of Hamiltonian Monte Carlo. I had a data set with almost 3000 subjects measured at 6 time points, had 5 dimensions and 3 groups, and rstan worked really well. I was fitting a GRM model to ordinal test with 30 items. Check out the stan user group for example code. | Longitudinal item response theory models in R
In the IRT literature for complicated IRT models (multiple groups, longitudinal/repeated measures, multidimensional) the recommended framework is Bayesian, because of relative easiness of estimation. |
48,686 | Variance as a function of parameters | This looks like a standard heteroskedastic model, where we treat heteroskedasticity the "old-fashioned way", i.e. by explicitly modelling the error variance as a function of some other variables (which may be the regressors themselves or not). In its most simple form the model is Weighted Least Squares.
Various specifications have been examined in the literature, like for example
$$\sigma^2_i = (\mathbf z_i'\alpha)^2,\;\;\;\sigma^2_i = \exp\{\mathbf z_i'\alpha\}, \;\;\;\sigma^2_i = \sigma^2(\mathbf x_i'\beta)^2$$
The last case indicates that the variance is directly proportional to the conditional expected value of the dependent variable, while in the previous formulations, the $\mathbf z$ vector may contain the regressors or other variables.
The model can be estimated by a two-step least square procedure, or by maximum likelihood -note that the unknown parameter $\alpha$ is common for all $i$ and so we do not have an "incidental parameters" problem.
This approach always had the issue of misspecification that is almost certain to occur in specifying the functional form that characterizes the heteroskedasticity. After the arrival of White's standard errors and the "heteroskedasticity-robust" variance-covariance matrix, the main problem with using OLS regression was cleared away, and the "direct-modeling" approach is visibly less used than in the past, at least in Econometrics. | Variance as a function of parameters | This looks like a standard heteroskedastic model, where we treat heteroskedasticity the "old-fashioned way", i.e. by explicitly modelling the error variance as a function of some other variables (whic | Variance as a function of parameters
This looks like a standard heteroskedastic model, where we treat heteroskedasticity the "old-fashioned way", i.e. by explicitly modelling the error variance as a function of some other variables (which may be the regressors themselves or not). In its most simple form the model is Weighted Least Squares.
Various specifications have been examined in the literature, like for example
$$\sigma^2_i = (\mathbf z_i'\alpha)^2,\;\;\;\sigma^2_i = \exp\{\mathbf z_i'\alpha\}, \;\;\;\sigma^2_i = \sigma^2(\mathbf x_i'\beta)^2$$
The last case indicates that the variance is directly proportional to the conditional expected value of the dependent variable, while in the previous formulations, the $\mathbf z$ vector may contain the regressors or other variables.
The model can be estimated by a two-step least square procedure, or by maximum likelihood -note that the unknown parameter $\alpha$ is common for all $i$ and so we do not have an "incidental parameters" problem.
This approach always had the issue of misspecification that is almost certain to occur in specifying the functional form that characterizes the heteroskedasticity. After the arrival of White's standard errors and the "heteroskedasticity-robust" variance-covariance matrix, the main problem with using OLS regression was cleared away, and the "direct-modeling" approach is visibly less used than in the past, at least in Econometrics. | Variance as a function of parameters
This looks like a standard heteroskedastic model, where we treat heteroskedasticity the "old-fashioned way", i.e. by explicitly modelling the error variance as a function of some other variables (whic |
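A minimal two-step (feasible WLS) sketch in R, with hypothetical variables y, x and variance covariate z, using the exponential specification above:
ols  <- lm(y ~ x, data = dat)
aux  <- lm(log(resid(ols)^2) ~ z, data = dat)  # model the log error variance on z
w    <- 1 / exp(fitted(aux))                   # estimated precision weights
fwls <- lm(y ~ x, data = dat, weights = w)     # second-step weighted fit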
48,687 | Variance as a function of parameters | To me the question speaks of straight-up mixed models where the typical homogeneous (homoscedastic) error term is (possibly) decomposed into levels and (possibly) explained at each level using functions.
For example, suppose you have a model that looks like this:
(1) $y_{i} = \beta_{0} + \beta_{1}x_{1} + \beta_{2}x_{2} + \varepsilon_{i},$
where, say, $x_{1}$ is some continuous predictor, and $x_{2}$ is a nominal factor (for sake of simplicity).
Now suppose, as you suggest, that the variance of $y$ depends on $x_{2}$. You might revise your model as in (2):
(2) $y_{i} = \beta_{0} + \beta_{1}x_{1} + \beta_{2}x_{2} + \varepsilon_{0,i} + \varepsilon_{2,i},$
where:
$\left[\begin{array}{c}\varepsilon_{0,i}\\ \varepsilon_{2,i}\end{array}\right] \sim \mathcal{N}\left(0,\Omega_{\varepsilon}\right):\quad \Omega_{\varepsilon}=\left[\begin{array}{cc}\sigma^{2}_{\varepsilon 0} & 0\\ 0 & \sigma^{2}_{\varepsilon 2} \end{array}\right]$
(The covariance in $\Omega_{\varepsilon}$ is assumed zero here, since you've got factors with what I am assuming are mutually exclusive categories. If you are not comfortable with that assumption, and think you can estimate a covariance term, $\sigma_{\varepsilon02}$, go ahead and include that.)
You can even use this model if there is no fixed effect of $x_{2}$ on $y$ (i.e. if $x_{2}$ only contributes a random effect). This could be accomplished implicitly by letting a near-zero estimate of $\beta_{2}$ be part of the model as in (2), or explicitly as in (3):
(3) $y_{i} = \beta_{0} + \beta_{1}x_{1} + \varepsilon_{0,i} + \varepsilon_{2,i}.$ | Variance as a function of parameters | To me the question speaks of straight-up mixed models where the typical homogeneous (homoscedastic) error term is (possibly) decomposed into levels and (possibly) explained at each level using functio | Variance as a function of parameters
To me the question speaks of straight-up mixed models where the typical homogeneous (homoscedastic) error term is (possibly) decomposed into levels and (possibly) explained at each level using functions.
For example, suppose you have a model that looks like this:
(1) $y_{i} = \beta_{0} + \beta_{1}x_{1} + \beta_{2}x_{2} + \varepsilon_{i},$
where, say, $x_{1}$ is some continuous predictor, and $x_{2}$ is a nominal factor (for sake of simplicity).
Now suppose, as you suggest, that the variance of $y$ depends on $x_{2}$. You might revise your model as in (2):
(2) $y_{i} = \beta_{0} + \beta_{1}x_{1} + \beta_{2}x_{2} + \varepsilon_{0,i} + \varepsilon_{2,i},$
where:
$\left[\begin{array}{c}\varepsilon_{0,i}\\ \varepsilon_{2,i}\end{array}\right] \sim \mathcal{N}\left(0,\Omega_{\varepsilon}\right):\quad \Omega_{\varepsilon}=\left[\begin{array}{cc}\sigma^{2}_{\varepsilon 0} & 0\\ 0 & \sigma^{2}_{\varepsilon 2} \end{array}\right]$
(The covariance in $\Omega_{\varepsilon}$ is assumed zero here, since you've got factors with what I am assuming are mutually exclusive categories. If you are not comfortable with that assumption, and think you can estimate a covariance term, $\sigma_{\varepsilon02}$, go ahead and include that.)
You can even use this model if there is no fixed effect of $x_{2}$ on $y$ (i.e. if $x_{2}$ only contributes a random effect). This could be accomplished implicitly by letting a near-zero estimate of $\beta_{2}$ be part of the model as in (2), or explicitly as in (3):
(3) $y_{i} = \beta_{0} + \beta_{1}x_{1} + \varepsilon_{0,i} + \varepsilon_{2,i}.$ | Variance as a function of parameters
To me the question speaks of straight-up mixed models where the typical homogeneous (homoscedastic) error term is (possibly) decomposed into levels and (possibly) explained at each level using functio |
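One standard way to fit a model in which the residual variance differs by the level of a factor (a closely related formulation to the random-effects decomposition above) is nlme's varIdent; the variable names here are hypothetical:
library(nlme)
fit <- gls(y ~ x1 + x2, data = dat,
           weights = varIdent(form = ~ 1 | x2))  # one residual SD per level of x2
summary(fit)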
48,688 | Convergence in probability, $X_i$ IID with finite second moment | Actually, we can even show that $\mathbb E|Y_n-\mathbb E[X_1]|^2\to 0$. Indeed, since $\sum_{j=1}^nj=n(n+1)/2$ and $\mathbb E[X_j]=\mathbb E[X_1]$ for all $j$,
$$Y_n-\mathbb E[X_1]=\frac 2{n(n+1)}\sum_{j=1}^nj(X_j-\mathbb E[X_j]),$$
hence
$$\tag{1}\mathbb E|Y_n-\mathbb E[X_1]|^2=\frac 4{n^2(n+1)^2}\sum_{i,j=1}^n
ij\mathbb E\left[(X_i-\mathbb E[X_i])(X_j-\mathbb E[X_j])\right].$$
If $i\neq j$, then by independence $\mathbb E\left[(X_i-\mathbb E[X_i])(X_j-\mathbb E[X_j])\right]=0$ and plugging it in (1),
$$\tag{2}\mathbb E|Y_n-\mathbb E[X_1]|^2=\frac 4{n^2(n+1)^2}\sum_{j=1}^n
j^2\mathbb E\left[(X_j-\mathbb E[X_j])^2\right].$$
Using now the fact that $X_j$ has the same distribution as $X_1$ and bounding $\sum_{j=1}^nj^2$ by $n^2(n+1)$, we obtain from (2) that
$$\mathbb E|Y_n-\mathbb E[X_1]|^2\leqslant\frac 4{n+1}\mathbb E\left[(X_1-\mathbb E[X_1])^2\right]$$
and we are done. | Convergence in probability, $X_i$ IID with finite second moment | Actually, we can even show that $\mathbb E|Y_n-\mathbb E[X_1]|^2\to 0$. Indeed, since $\sum_{j=1}^nj=n(n+1)/2$ and $\mathbb E[X_j]=\mathbb E[X_1]$ for all $j$,
$$Y_n-\mathbb E[X_1]=\frac 2{n(n+1)}\su | Convergence in probability, $X_i$ IID with finite second moment
Actually, we can even show that $\mathbb E|Y_n-\mathbb E[X_1]|^2\to 0$. Indeed, since $\sum_{j=1}^nj=n(n+1)/2$ and $\mathbb E[X_j]=\mathbb E[X_1]$ for all $j$,
$$Y_n-\mathbb E[X_1]=\frac 2{n(n+1)}\sum_{j=1}^nj(X_j-\mathbb E[X_j]),$$
hence
$$\tag{1}\mathbb E|Y_n-\mathbb E[X_1]|^2=\frac 4{n^2(n+1)^2}\sum_{i,j=1}^n
ij\mathbb E\left[(X_i-\mathbb E[X_i])(X_j-\mathbb E[X_j])\right].$$
If $i\neq j$, then by independence $\mathbb E\left[(X_i-\mathbb E[X_i])(X_j-\mathbb E[X_j])\right]=0$ and plugging it in (1),
$$\tag{2}\mathbb E|Y_n-\mathbb E[X_1]|^2=\frac 4{n^2(n+1)^2}\sum_{j=1}^n
j^2\mathbb E\left[(X_j-\mathbb E[X_j])^2\right].$$
Using now the fact that $X_j$ has the same distribution as $X_1$ and bounding $\sum_{j=1}^nj^2$ by $n^2(n+1)$, we obtain from (2) that
$$\mathbb E|Y_n-\mathbb E[X_1]|^2\leqslant\frac 4{n+1}\mathbb E\left[(X_1-\mathbb E[X_1])^2\right]$$
and we are done. | Convergence in probability, $X_i$ IID with finite second moment
Actually, we can even show that $\mathbb E|Y_n-\mathbb E[X_1]|^2\to 0$. Indeed, since $\sum_{j=1}^nj=n(n+1)/2$ and $\mathbb E[X_j]=\mathbb E[X_1]$ for all $j$,
$$Y_n-\mathbb E[X_1]=\frac 2{n(n+1)}\su |
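A quick simulation check of the bound derived above (not part of the original answer); the empirical mean squared error should come in below 4 Var(X_1)/(n+1):
set.seed(1)
n <- 200; reps <- 5000
Yn <- replicate(reps, {
  x <- rexp(n)                             # E[X_1] = 1, Var(X_1) = 1
  2 * sum(seq_len(n) * x) / (n * (n + 1))
})
mean((Yn - 1)^2)                           # empirical MSE
4 / (n + 1)                                # the (loose) upper bound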
48,689 | Convergence in probability, $X_i$ IID with finite second moment | Much later, here's an updated answer without hints. I mostly wanted to see if I could make sense of the details. This proof of almost sure convergence (which implies convergence in probability) complements the supplied proof of convergence in mean square and the direct proof using Chebyshev's inequality.
Proof outline
(1) Show that $Y_n\overset{a.s}{\to}Y$ for some random variable Y.
(2) Show that $\mathbb E Y = \mu := \mathbb E X_1$ and $\mathrm{Var}(Y)=0$
(3) Conclude from this that since (2) implies that $Y = \mu$ a.s., we by (1) have the desired result, namely: $Y_n\overset{a.s}{\to} \mu$.
Proof of 1)
Consider the sum of absolute values, viz.
$$\begin{align}
S_n&:=\sum_{i=1}^n\frac{2i}{n(n+1)}|X_i| \\
& \leq \frac{2}{n}\sum_{i=1}^n|X_i| \overset{a.s,m.s}{\to} 2\,\mathbb E|X_i|\leq 2\sqrt{\mathbb E X_i^2}<\infty
\end{align}$$, where we used $\frac{2i}{n(n+1)}\leq\frac{2}{n}$, and where the convergence and inequalities in the last line follow from the strong law of large numbers and the assumption of a finite second moment.
This shows that on sets with total probability one, $S_n$ is an increasing and bounded sequence and thus convergent. But $S_n$ is the sum of the absolute values of the terms of the sum $Y_n$, so we thus know that also $Y_n$ converges on these sets. In other words, $Y_n$ is almost surely convergent with limit $Y$, say.
Proof of 2)
We use the following standard result (sometimes called the extended or improved dominated convergence theorem):
A dominated convergence theorem (DCT)
Let $\{Z_n\}$ be a sequence of random variables on some probability space$(\Omega,\mathcal{F},P).$ If $Z_n \overset{a.s}{\to} Z$ and there exist random variables $M_n \overset{a.s}{\to}M$ such that $|Z_n|\leq M_n, \mathbb EM_n<\infty,\forall n$ and $\lim _n \mathbb E M_n = \mathbb E M <\infty$, then $\lim _{n\to \infty} \mathbb E Z_n=\mathbb EZ.$
To obtain the expectation of $Y$, take first $Z_n=Y_n$ and $M_n=\frac{2}{n}\sum_{i=1}^n|X_i|$ in the DCT. We get $\lim _{n\to \infty} \mathbb E Y_n=\lim _{n\to \infty} \mu=\mathbb EY.$
To obtain the variance, set $Z_n=(Y_n-\mu)^2$. We have $Z_n \overset{a.s}{\to}Z:=(Y-\mu)^2$ and also $$|Y_n-\mu|^2\leq (S_n+|\mu|)^2 \leq (\frac{2}{n}\sum_{i=1}^n|X_i|+|\mu|)^2=:M_n.$$
By expanding the square and using the mean square convergence of $\frac{1}{n}\sum_{i=1}^n|X_i|$ it is clear that this $M_n$ satisfies the requirement for applying the DCT. Thus, $$\mathrm{Var}(Y)=\mathbb E (Y-\mu)^2 = \lim_n \mathbb E (Y_n-\mu)^2=\lim_n \mathrm{Var}(Y_n)=0.$$ This finishes the proof. | Convergence in probability, $X_i$ IID with finite second moment | Much later, here's an updated answer without hints. I mostly wanted to see if I could make sense of the details. This proof of almost sure convergence (which implies convergence in probability) comple | Convergence in probability, $X_i$ IID with finite second moment
Much later, here's an updated answer without hints. I mostly wanted to see if I could make sense of the details. This proof of almost sure convergence (which implies convergence in probability) complements the supplied proof of convergence in mean square and the direct proof using Chebyshev's inequality.
Proof outline
(1) Show that $Y_n\overset{a.s}{\to}Y$ for some random variable Y.
(2) Show that $\mathbb E Y = \mu := \mathbb E X_1$ and $\mathrm{Var}(Y)=0$
(3) Conclude from this that since (2) implies that $Y = \mu$ a.s., we by (1) have the desired result, namely: $Y_n\overset{a.s}{\to} \mu$.
Proof of 1)
Consider the sum of absolute values, viz.
$$\begin{align}
S_n&:=\sum_{i=1}^n\frac{2i}{n(n+1)}|X_i| \\
& \leq \frac{2}{n}\sum_{i=1}^n|X_i| \overset{a.s,m.s}{\to} 2\,\mathbb E|X_i|\leq 2\sqrt{\mathbb E X_i^2}<\infty
\end{align}$$, where we used $\frac{2i}{n(n+1)}\leq\frac{2}{n}$, and where the convergence and inequalities in the last line follow from the strong law of large numbers and the assumption of a finite second moment.
This shows that on sets with total probability one, $S_n$ is an increasing and bounded sequence and thus convergent. But $S_n$ is the sum of the absolute values of the terms of the sum $Y_n$, so we thus know that also $Y_n$ converges on these sets. In other words, $Y_n$ is almost surely convergent with limit $Y$, say.
Proof of 2)
We use the following standard result (sometimes called the extended or improved dominated convergence theorem):
A dominated convergence theorem (DCT)
Let $\{Z_n\}$ be a sequence of random variables on some probability space$(\Omega,\mathcal{F},P).$ If $Z_n \overset{a.s}{\to} Z$ and there exist random variables $M_n \overset{a.s}{\to}M$ such that $|Z_n|\leq M_n, \mathbb EM_n<\infty,\forall n$ and $\lim _n \mathbb E M_n = \mathbb E M <\infty$, then $\lim _{n\to \infty} \mathbb E Z_n=\mathbb EZ.$
To obtain the expectation of $Y$, take first $Z_n=Y_n$ and $M_n=\frac{2}{n}\sum_{i=1}^n|X_i|$ in the DCT. We get $\lim _{n\to \infty} \mathbb E Y_n=\lim _{n\to \infty} \mu=\mathbb EY.$
To obtain the variance, set $Z_n=(Y_n-\mu)^2$. We have $Z_n \overset{a.s}{\to}Z:=(Y-\mu)^2$ and also $$|Y_n-\mu|^2\leq (S_n+|\mu|)^2 \leq (\frac{2}{n}\sum_{i=1}^n|X_i|+|\mu|)^2=:M_n.$$
By expanding the square and using the mean square convergence of $\frac{1}{n}\sum_{i=1}^n|X_i|$ it is clear that this $M_n$ satisfies the requirement for applying the DCT. Thus, $$\mathrm{Var}(Y)=\mathbb E (Y-\mu)^2 = \lim_n \mathbb E (Y_n-\mu)^2=\lim_n \mathrm{Var}(Y_n)=0.$$ This finishes the proof. | Convergence in probability, $X_i$ IID with finite second moment
Much later, here's an updated answer without hints. I mostly wanted to see if I could make sense of the details. This proof of almost sure convergence (which implies convergence in probability) comple |
48,690 | Plotting a categorical response as a function of a continuous predictor using R | This is exploration: we should feel free to be creative and to look in many different ways at the data to develop insight.
In this spirit, an attractive approach eschews binning the independent variable. Instead, compute and smooth a running summary of the dependent variable (proportion of incomes less than 50,000 per annum). Choose suitable windows for the summary and smooth depending on how detailed or how general a picture of the relationship is needed.
Here is an example of a synthetic dataset designed to look like the illustration:
The top row presents univariate summaries of the independent variable (hours) and dependent variable (income in thousands of dollars per year). The latter makes it evident that information about high incomes will be relatively uncertain in this example.
The bottom row presents bivariate information. On the left is the plot of income against hours. (In many circumstances it would be better to model this relationship rather than splitting the income into just two groups. But sometimes we really do just have a binary dependent variable or our analytical objective truly concerns comparing the two groups. Let's proceed...)
The bottom right illustrates the suggested solution:
The wiggly blue line is the smoothed running mean of proportions of income below 50K against hours per week.
The surrounding gray lines are separated from the blue line by one standard error of estimate.
The red line--which in practice would not be available--is the underlying relationship used to synthesize these data. Ideally, it would lie entirely within (or at least close to) the region enclosed by the gray lines.
The appearance of this last plot can be controlled by varying the moving window size and by applying more or less amounts of smoothing to the moving summaries. Here are some variations from the default width of $16$:
The smaller width (upper left) provides too much detail. The larger widths give simpler representations; at the lower right, the relationship is nearly reduced to a straight line. In practice--were this a real dataset (and the reference [red] line unavailable), one might decide to start with a simple linear term in hours alone and test whether introducing (say) a small cubic spline would improve the model, in effect comparing the depiction in the lower right corner to that in the upper right corner.
If you wish to preserve the ability to compare models formally--that is, to trust the p-values--then it is essential to hold out some data before conducting the exploration and test the final model against the held-out data. If it holds up, then the model can finally be fit using all the data in order to improve the estimates of its coefficients.
R code follows.
#
# Synthesize a data set.
#
set.seed(17)
n <- 300 # Amount of data
means <- c(15, 44) / 168 # Typical hours per week
sds <- c(5, 15) / 168 # Dispersion around those values
ab <- means * (1-means) / sds^2 - 1 # Corresponding Beta parameters
alphas <- means * ab
betas <- ab - alphas
hours <- sort(rbeta(n, alphas, betas) * 168) # Generate a mixture of Betas
par(mfrow=c(2,2))
hist(hours)
f <- function(h, m1=-2, m2=0.3) { # Prescribe the income-hour relationship
x <- h/100
0.4 + m2*x + m1*(pmin(x,0.5)-0.5) -(m1+m2)*(pmin(x,0.3)-0.3)
}
# Incomes are lognormally distributed conditional on hours
sd <- 0.4 # CV of incomes (geometric SD)
income <- exp(rnorm(n, mean=qnorm(1-f(hours))*sd + log(50), sd=sd))
hist(income)
plot(hours, income, xlab="Hours per week", ylab="Income (K$)",
main="Income vs. Hours") # $
#
# Compute moving summaries.
#
require(zoo)
width <- floor(sqrt(n)) # Size of moving window
smooth.width <- min(n, 3*width) / n # Strength of the smoother
fill <- list("extend", "extend", "extend")
x.window <- rollmean(zoo(hours), width, fill=fill)
y.window <- rollmean(zoo(income <= 50), width, fill=fill)
y.window <- zoo(lowess(y.window, f=smooth.width)$y) # $
plot(c(min(x.window), max(x.window)), c(0,1), type="n", bg="#f8f8f8",
xlab="Hours per week", ylab="Proportion Income <= 50K",
main="Proportion Below 50K vs. Hours")
curve(f(x), add=TRUE, col="Red")
lines(x.window, y.window, col="Blue")
lines(x.window, y.window + sqrt(y.window * (1-y.window) / width), col="Gray")
lines(x.window, y.window - sqrt(y.window * (1-y.window) / width), col="Gray") | Plotting a categorical response as a function of a continuous predictor using R | This is exploration: we should feel free to be creative and to look in many different ways at the data to develop insight.
In this spirit, an attractive approach eschews binning the independent variab | Plotting a categorical response as a function of a continuous predictor using R
This is exploration: we should feel free to be creative and to look in many different ways at the data to develop insight.
In this spirit, an attractive approach eschews binning the independent variable. Instead, compute and smooth a running summary of the dependent variable (proportion of incomes less than 50,000 per annum). Choose suitable windows for the summary and smooth depending on how detailed or how general a picture of the relationship is needed.
Here is an example of a synthetic dataset designed to look like the illustration:
The top row presents univariate summaries of the independent variable (hours) and dependent variable (income in thousands of dollars per year). The latter makes it evident that information about high incomes will be relatively uncertain in this example.
The bottom row presents bivariate information. On the left is the plot of income against hours. (In many circumstances it would be better to model this relationship rather than splitting the income into just two groups. But sometimes we really do just have a binary dependent variable or our analytical objective truly concerns comparing the two groups. Let's proceed...)
The bottom right illustrates the suggested solution:
The wiggly blue line is the smoothed running mean of proportions of income below 50K against hours per week.
The surrounding gray lines are separated from the blue line by one standard error of estimate.
The red line--which in practice would not be available--is the underlying relationship used to synthesize these data. Ideally, it would lie entirely within (or at least close to) the region enclosed by the gray lines.
The appearance of this last plot can be controlled by varying the moving window size and by applying more or less amounts of smoothing to the moving summaries. Here are some variations from the default width of $16$:
The smaller width (upper left) provides too much detail. The larger widths give simpler representations; at the lower right, the relationship is nearly reduced to a straight line. In practice--were this a real dataset (and the reference [red] line unavailable), one might decide to start with a simple linear term in hours alone and test whether introducing (say) a small cubic spline would improve the model, in effect comparing the depiction in the lower right corner to that in the upper right corner.
If you wish to preserve the ability to compare models formally--that is, to trust the p-values--then it is essential to hold out some data before conducting the exploration and test the final model against the held-out data. If it holds up, then the model can finally be fit using all the data in order to improve the estimates of its coefficients.
R code follows.
#
# Synthesize a data set.
#
set.seed(17)
n <- 300 # Amount of data
means <- c(15, 44) / 168 # Typical hours per week
sds <- c(5, 15) / 168 # Dispersion around those values
ab <- means * (1-means) / sds^2 - 1 # Corresponding Beta parameters
alphas <- means * ab
betas <- ab - alphas
hours <- sort(rbeta(n, alphas, betas) * 168) # Generate a mixture of Betas
par(mfrow=c(2,2))
hist(hours)
f <- function(h, m1=-2, m2=0.3) { # Prescribe the income-hour relationship
x <- h/100
0.4 + m2*x + m1*(pmin(x,0.5)-0.5) -(m1+m2)*(pmin(x,0.3)-0.3)
}
# Incomes are lognormally distributed conditional on hours
sd <- 0.4 # CV of incomes (geometric SD)
income <- exp(rnorm(n, mean=qnorm(1-f(hours))*sd + log(50), sd=sd))
hist(income)
plot(hours, income, xlab="Hours per week", ylab="Income (K$)",
main="Income vs. Hours") # $
#
# Compute moving summaries.
#
require(zoo)
width <- floor(sqrt(n)) # Size of moving window
smooth.width <- min(n, 3*width) / n # Strength of the smoother
fill <- list("extend", "extend", "extend")
x.window <- rollmean(zoo(hours), width, fill=fill)
y.window <- rollmean(zoo(income <= 50), width, fill=fill)
y.window <- zoo(lowess(y.window, f=smooth.width)$y) # $
plot(c(min(x.window), max(x.window)), c(0,1), type="n", bg="#f8f8f8",
xlab="Hours per week", ylab="Proportion Income <= 50K",
main="Proportion Below 50K vs. Hours")
curve(f(x), add=TRUE, col="Red")
lines(x.window, y.window, col="Blue")
lines(x.window, y.window + sqrt(y.window * (1-y.window) / width), col="Gray")
lines(x.window, y.window - sqrt(y.window * (1-y.window) / width), col="Gray") | Plotting a categorical response as a function of a continuous predictor using R
This is exploration: we should feel free to be creative and to look in many different ways at the data to develop insight.
In this spirit, an attractive approach eschews binning the independent variab |
48,691 | Plotting a categorical response as a function of a continuous predictor using R | The plot you highlight in your question reminds me of using a loess (or lowess) curve to visualise a continuous variable against a binary response:
Of course, the line corresponds with the histogram example where the two colours meet. I can't tell from your example whether the data are raw or modelled (as my example is).
This SO question shows another example. | Plotting a categorical response as a function of a continuous predictor using R | The plot you highlight in your question reminds of using a loess (or lowess) curve to visualise a continuous variables against a binary response:-
Of course, the line corresponds with the histogram e | Plotting a categorical response as a function of a continuous predictor using R
The plot you highlight in your question reminds me of using a loess (or lowess) curve to visualise a continuous variable against a binary response:
Of course, the line corresponds with the histogram example where the two colours meet. I can't tell from your example whether the data are raw or modelled (as my example is).
This SO question shows another example. | Plotting a categorical response as a function of a continuous predictor using R
The plot you highlight in your question reminds of using a loess (or lowess) curve to visualise a continuous variables against a binary response:-
Of course, the line corresponds with the histogram e |
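A minimal sketch of the loess idea, assuming a hypothetical data frame dat with columns hours and income:
dat <- dat[order(dat$hours), ]
dat$low <- as.numeric(dat$income <= 50)
fit <- loess(low ~ hours, data = dat, degree = 1, span = 0.75)
plot(dat$hours, dat$low, pch = "|", ylim = c(0, 1),
     xlab = "Hours per week", ylab = "Proportion income <= 50K")
lines(dat$hours, predict(fit), col = "blue", lwd = 2)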
48,692 | Checking MCMC convergence with a single chain | First, the Gelman-Rubin test does not check convergence of an MCMC Markov chain but simply an agreement between several parallel chains: if all chains miss a highly concentrated but equally highly important mode of the target distribution, the Gelman-Rubin criterion concludes to the convergence of the chains. Using multiple chains to check for convergence is quite reasonable if costly, but one can never "be sure to have reached stationarity". Simulated tempering can help, though.
Second, to check convergence or stationarity on a single Markov chain $(x_t)_{t=1,\ldots,T}$, one needs to know a lot about the target distribution $\pi(x)$ because, otherwise, all you can judge from the sequence of values $x_1,x_2,\ldots,x_T$ is their stability. Hence only the ability of the MCMC sampler to explore the current region of the support of $\pi$. To go beyond that requires an assessment of this support and of the "missing mass", i.e. the mass under $\pi$ of the remainder of the space. This is an extremely rare occurrence. | Checking MCMC convergence with a single chain | First, the Gelman-Rubin test does not check convergence of an MCMC Markov chain but simply an agreement between several parallel chains: if all chains miss a highly concentrated but equally highly imp | Checking MCMC convergence with a single chain
First, the Gelman-Rubin test does not check convergence of an MCMC Markov chain but simply an agreement between several parallel chains: if all chains miss a highly concentrated but equally highly important mode of the target distribution, the Gelman-Rubin criterion concludes to the convergence of the chains. Using multiple chains to check for convergence is quite reasonable if costly, but one can never "be sure to have reached stationarity". Simulated tempering can help, though.
Second, to check convergence or stationarity on a single Markov chain $(x_t)_{t=1,\ldots,T}$, one needs to know a lot about the target distribution $\pi(x)$ because, otherwise, all you can judge from the sequence of values $x_1,x_2,\ldots,x_T$ is their stability. Hence only the ability of the MCMC sampler to explore the current region of the support of $\pi$. To go beyond that requires an assessment of this support and of the "missing mass", i.e. the mass under $\pi$ of the remainder of the space. This is an extremely rare occurrence. | Checking MCMC convergence with a single chain
First, the Gelman-Rubin test does not check convergence of an MCMC Markov chain but simply an agreement between several parallel chains: if all chains miss a highly concentrated but equally highly imp |
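In practice, the usual single-chain checks from the coda package only assess the stability and mixing of the chain at hand, which is exactly the caveat above; x is a hypothetical vector or matrix of draws:
library(coda)
chain <- mcmc(x)
geweke.diag(chain)      # compare means of the early and late parts of the chain
heidel.diag(chain)      # stationarity and half-width tests
effectiveSize(chain)    # effective sample size after accounting for autocorrelation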
48,693 | Estimating the error in the average of correlated values | This is an active area of research. The first question is whether a central limit theorem (CLT) even exists, which depends on the mixing properties of your MCMC, e.g. geometric convergence. Typically, this is a nontrivial question.
Provided a CLT exists, the second question is how to obtain a consistent estimator of the variance in the CLT. Partitioning the chain as you have done is called batching and is the approach used in the R package mcmcse. I'd suggest you take a look at that package and the references therein. As an example of using the package you can do
library(mcmcse)
n = 1000
y = rep(0,n)
for (i in 2:n) y[i] = rnorm(1,0.9*y[i-1])
mcse(y)
which will return an estimate of the expectation of y, which is known to be zero in this case, as well as an estimate of the Monte Carlo standard error in the estimator which takes into account the correlation in the chain.
This package allows the user to specify the batch size, but defaults to the square root of the total number of samples. Perhaps the references in the package will give an indication of why this is the default or what other batch sizes will work. | Estimating the error in the average of correlated values | This is an active area of research. This first question is whether a central limit theorem (CLT) even exists which depends on the mixing properties of your MCMC, e.g. geometric convergence. Typically, | Estimating the error in the average of correlated values
This is an active area of research. The first question is whether a central limit theorem (CLT) even exists, which depends on the mixing properties of your MCMC, e.g. geometric convergence. Typically, this is a nontrivial question.
Provided a CLT exists, the second question is how to obtain a consistent estimator of the variance in the CLT. Partitioning the chain as you have done is called batching and is the approach used in the R package mcmcse. I'd suggest you take a look at that package and the references therein. As an example of using the package you can do
library(mcmcse)
n = 1000
y = rep(0,n)
for (i in 2:n) y[i] = rnorm(1,0.9*y[i-1])
mcse(y)
which will return an estimate of the expectation of y, which is known to be zero in this case, as well as an estimate of the Monte Carlo standard error in the estimator which takes into account the correlation in the chain.
This package allows the user to specify the batch size, but defaults to the square root of the total number of samples. Perhaps the references in the package will give an indication of why this is the default or what other batch sizes will work. | Estimating the error in the average of correlated values
This is an active area of research. This first question is whether a central limit theorem (CLT) even exists which depends on the mixing properties of your MCMC, e.g. geometric convergence. Typically, |
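For reference, the batch-means calculation that mcse() automates can also be written out by hand (using the AR(1) chain y simulated in the example above):
batch_se <- function(x, b = floor(sqrt(length(x)))) {
  a  <- floor(length(x) / b)             # number of batches
  x  <- x[1:(a * b)]
  bm <- colMeans(matrix(x, nrow = b))    # mean of each consecutive batch
  sqrt(var(bm) / a)                      # standard error of the overall mean
}
batch_se(y)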
48,694 | Estimating the error in the average of correlated values | The problem of finding error estimates of statistics in (autocorrelated) time series is usually approached via block bootstrapping. It is the same in spirit as your approach. See Section 5 of this document for a very short summary [1]. There is also some parallel work in the physics community, where ideas from renormalisation are used; see e.g. [2] for a clear exposition.
References:
[1] Kreiss, J. P., & Lahiri, S. N. (2012). Bootstrap methods for time series. In Handbook of statistics (Vol. 30, pp. 3-26). Elsevier.
[2] Flyvbjerg, H., & Petersen, H. G. (1989). Error estimates on averages of correlated data. The Journal of Chemical Physics, 91(1), 461-466. | Estimating the error in the average of correlated values | The problem of finding error estimates of statistics in (autocorrelated) time series is usually approached via block bootstrapping. It is the same in spirit as your approach. See Section 5 of this doc | Estimating the error in the average of correlated values
The problem of finding error estimates of statistics in (autocorrelated) time series is usually approached via block bootstrapping. It is the same in spirit as your approach. See Section 5 of this document for a very short summary [1]. There is also some parallel work in the physics community, where ideas from renormalisation are used; see e.g. [2] for a clear exposition.
References:
[1] Kreiss, J. P., & Lahiri, S. N. (2012). Bootstrap methods for time series. In Handbook of statistics (Vol. 30, pp. 3-26). Elsevier.
[2] Flyvbjerg, H., & Petersen, H. G. (1989). Error estimates on averages of correlated data. The Journal of Chemical Physics, 91(1), 461-466. | Estimating the error in the average of correlated values
The problem of finding error estimates of statistics in (autocorrelated) time series is usually approached via block bootstrapping. It is the same in spirit as your approach. See Section 5 of this doc |
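A minimal moving-block bootstrap sketch with boot::tsboot, applied to a hypothetical chain y; the block length l is a tuning choice:
library(boot)
bb <- tsboot(y, statistic = mean, R = 1000, l = 50, sim = "fixed")
sd(bb$t)    # block-bootstrap estimate of the standard error of the mean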
48,695 | Is it necessary to use cross-validatation to avoid overfitting when applying random forest algorithm? | Well, random forest uses bagging which is specifically designed to reduce problems with overfitting.
Ensemble methods like bagging and CV are both ways to avoid overfitting.
Cross-validation can be used in random forest modelling in various ways - e.g. to find the optimal number of trees - but I don't know of anywhere it has to be used. For example, to measure out-of-sample performance I think you can use the out-of-bag error.
I suppose the resulting question is 'can overfitting - while reduced in scope - still be a problem if you don't use cross-validation'? I'm not 100% certain of the answer to that,
but searching around$^{[1]}$ it looks like overfitting might still be a potential issue (BMA and bagging are both forms of model averaging, the problem could easily carry over to bagging and so perhaps to random forests). In that case, some other approach - such as cross-validation - might be needed.
(Cross validation isn't the only other way to reduce/avoid overfitting of course, which may have been the underlying point of the question.)
[1] Domingos, P., (2000)
"Bayesian Averaging of Classifiers and the Overfitting Problem"
Proceedings of the Seventeenth International Conference on Machine Learning, pp.223-230 | Is it necessary to use cross-validatation to avoid overfitting when applying random forest algorithm | Well, random forest uses bagging which is specifically designed to reduce problems with overfitting.
Ensemble methods like bagging and CV are both ways to avoid overfitting.
Cross-validation can be | Is it necessary to use cross-validatation to avoid overfitting when applying random forest algorithm?
Well, random forest uses bagging which is specifically designed to reduce problems with overfitting.
Ensemble methods like bagging and CV are both ways to avoid overfitting.
Cross-validation can be used in random forest modelling in various ways - e.g. to find the optimal number of trees - but I don't know of anywhere it has to be used. For example, to measure out-of-sample performance I think you can use the out-of-bag error.
I suppose the resulting question is 'can overfitting - while reduced in scope - still be a problem if you don't use cross-validation'? I'm not 100% certain of the answer to that,
but searching around$^{[1]}$ it looks like overfitting might still be a potential issue (BMA and bagging are both forms of model averaging, the problem could easily carry over to bagging and so perhaps to random forests). In that case, some other approach - such as cross-validation - might be needed.
(Cross validation isn't the only other way to reduce/avoid overfitting of course, which may have been the underlying point of the question.)
[1] Domingos, P., (2000)
"Bayesian Averaging of Classifiers and the Overfitting Problem"
Proceedings of the Seventeenth International Conference on Machine Learning, pp.223-230 | Is it necessary to use cross-validatation to avoid overfitting when applying random forest algorithm
Well, random forest uses bagging which is specifically designed to reduce problems with overfitting.
Ensemble methods like bagging and CV are both ways to avoid overfitting.
Cross-validation can be |
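To illustrate the out-of-bag point above, a minimal randomForest example using the built-in iris data:
library(randomForest)
set.seed(1)
fit <- randomForest(Species ~ ., data = iris, ntree = 500)
fit                            # prints the OOB estimate of the error rate
fit$err.rate[500, "OOB"]       # OOB error after all 500 trees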
48,696 | Is it necessary to use cross-validatation to avoid overfitting when applying random forest algorithm? | As random forests work on the concept of bootstrap aggregating, there is no special need for cross-validation. While dealing with a large number of trees in the forest, cross-validation will take much of your time.
And as Glen_b also mentioned, CV and bagging are two approaches to reduce overfitting, so using one of them will be just fine. | Is it necessary to use cross-validatation to avoid overfitting when applying random forest algorithm | As random forests is working on the concept of Bootstrap aggregating, there is no special need for cross validation. while dealing with large number of trees in forest, cross validation will take much | Is it necessary to use cross-validatation to avoid overfitting when applying random forest algorithm?
As random forests work on the concept of bootstrap aggregating, there is no special need for cross-validation. While dealing with a large number of trees in the forest, cross-validation will take much of your time.
And as Glen_b also mentioned, CV and bagging are two approaches to reduce overfitting, so using one of them will be just fine. | Is it necessary to use cross-validatation to avoid overfitting when applying random forest algorithm
As random forests is working on the concept of Bootstrap aggregating, there is no special need for cross validation. while dealing with large number of trees in forest, cross validation will take much |
48,697 | What's the algorithm for finding sequences used by TraMineR? | As stated in Ritschard et al. (2013), the algorithm implemented in TraMineR is an adaptation of the prefix-tree-based search described in Masseglia (2002).
Masseglia, F. (2002). Algorithmes et applications pour l'extraction
de motifs séquentiels dans le domaine de la fouille de données : de
l'incrémental au temps réel. Ph. D. thesis, Université de
Versailles Saint-Quentin en Yvelines.
Ritschard, G., Bürgin, R. & Studer, M. (2013), "Exploratory Mining of
Life Event Histories", In McArdle, J.J. & Ritschard, G. (eds)
Contemporary Issues in Exploratory Data Mining in the Behavioral
Sciences. Series: Quantitative Methodology, pp. 221-253. New York:
Routledge. | What's the algorithm for finding sequences used by TraMineR? | As stated in Ritschard et al. (2013), the algorithm implemented in TraMineR is an adaptation of the prefix-tree-based search described in Masseglia (2002).
Masseglia, F. (2002). Algorithmes et appli | What's the algorithm for finding sequences used by TraMineR?
As stated in Ritschard et al. (2013), the algorithm implemented in TraMineR is an adaptation of the prefix-tree-based search described in Masseglia (2002).
Masseglia, F. (2002). Algorithmes et applications pour l'extraction
de motifs séquentiels dans le domaine de la fouille de données : de
l'incrémental au temps réel. Ph. D. thesis, Université de
Versailles Saint-Quentin en Yvelines.
Ritschard, G., Bürgin, R. & Studer, M. (2013), "Exploratory Mining of
Life Event Histories", In McArdle, J.J. & Ritschard, G. (eds)
Contemporary Issues in Exploratory Data Mining in the Behavioral
Sciences. Series: Quantitative Methodology, pp. 221-253. New York:
Routledge. | What's the algorithm for finding sequences used by TraMineR?
As stated in Ritschard et al. (2013), the algorithm implemented in TraMineR is an adaptation of the prefix-tree-based search described in Masseglia (2002).
Masseglia, F. (2002). Algorithmes et appli |
48,698 | Parameter region for existence of solutions of equation | To address the general question, consider using a tool that is well adapted to such calculations and visualizations, such as Mathematica. (This was used to plot the first two and last two figures below.)
This particular question is amenable to further analysis which enables R to display $S$: for each $x\in [0,1]$, we can plot the subset of $S$ it determines (which is a curve). By choosing a visually dense collection of such $x$, the collection of these curves limns the entire region $S$.
Begin the analysis by rewriting the defining equation in the form
$$b = -\frac{1}{2} a^2 + u(x) a + v(x)$$
where (assuming $x\ne 0$)
$$u(x) = f(x)^2/f^\prime(x) + F(x)$$ and $$v(x) = -\left(F(x) f(x)^2/f^\prime(x) + \frac{1}{2}F(x)^2\right).$$
In the $(a,b)$ plane these equations describe similar parabolae having their vertexes at $(u(x), v(x) + u(x)^2/2).$
In this figure some of the parabolae are drawn in gray. The locus of their vertexes is traced by the thick red curve. The region $[0,1]\times [0,1]$ is shown as a gray square. $S$ is the portion of the gray square overlapped by the shaded blue region.
The region $S$ comprises most of the left half of the unit square ($a\le 1/2$), less a small region at the bottom, together with pieces of some parabolae at the bottom. The next figure examines that lower region in more detail.
The piece missing from $S$ in the left half of the unit square lies below the parabola $b = -\frac{1}{2}a^2 + u(1)a + v(1)\approx -\frac{1}{2}a^2 + 0.599374 a - 0.15035.$ The part of $S$ in the right half of the unit square (where the curves seem to overlap) is bounded by the envelope of these parabolae; it is difficult to derive any simple formula for its boundary.
R code
Begin with a function f to compute values along a parabola given by $x \ne 0$:
f <- function(a, x) {
  # F, f0, f1 are Phi(x), phi(x), and phi'(x) for the standard normal
  F  <- pnorm(x)
  f0 <- dnorm(x)
  f1 <- -x * exp(-x^2 / 2) / sqrt(2 * pi)  # phi'(x) = -x * phi(x)
  # ordinate of the parabola b = -a^2/2 + u(x)*a + v(x)
  return(-1/2 * a^2 + (f0^2/f1 + F)*a - (F*f0^2/f1 + F^2/2))
}
Because interest lies in $0\le a\le 1$, a typically will be a sequence of numbers in this range; f returns the ordinates of the parabola lying above this sequence.
Use this iteratively to draw the parabolae. The vertical line $a=\frac{1}{2}$ (corresponding to $x=0$) needs to be drawn separately because it is not the graph of a function of $a$.
n.mesh <- 64
mesh   <- seq(0, 1, length.out=n.mesh)
colors <- hsv(mesh, .8, .9, 2/3)
plot(c(0,1), c(0,1), type="n", xlab="a", ylab="b")
rect(0, 0, 1, 1, col=gray(0.96))            # the unit square
for (i in n.mesh:2) {
  x0 <- mesh[i]
  curve(f(x, x0), add=TRUE, col=colors[i])  # parabola determined by x = x0
}
abline(v=1/2, col=colors[1])                # vertical line a = 1/2 (the x = 0 case)
Appendix: Brute Force Code
In Mathematica a brute-force (but somewhat efficient) way to find a subset of $S$, when the function $x\to g(x,a,b)$ is continuous for all $(a,b)\in[0,1]^2$, checks whether this function changes sign between $x=0$ and $x=1$. Begin by defining $g$:
fF[x_] := CDF[NormalDistribution[], x];
f[x_] := PDF[NormalDistribution[]][x];
f1[x_] := Evaluate[D[f[y], y] /. y -> x];
g[x_, a_, b_] := (fF[x] - a) f[x]^2 + (fF[x]^2/2 - a fF[x] + (a^2/2 + b)) f1[x];
Here is the solution (which plots quickly):
RegionPlot[g[0, a, b] g[1, a, b] < 0 , {a, 0, 1}, {b, 0, 1}]
In general some analysis is required to assure that this covers all of $S$. That analysis can be assisted by plotting contours of $(a,b)\to g(x,a,b)$ for various values of $x$ of interest:
ContourPlot[Evaluate@Table[g[x, a, b] == 0, {x, Range[0, 1, 1/64]}], {a, 0, 1}, {b, 0, 1}]
To the extent you can trust R to produce accurate contour plots, a similar solution is available for it.
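For completeness, here is one way that brute-force sign-change check could look in R (my own sketch, mirroring the Mathematica RegionPlot call above): evaluate $g(0,a,b)\,g(1,a,b)$ on a grid and shade the cells where the product is negative.
g <- function(x, a, b) {
  F  <- pnorm(x); f0 <- dnorm(x); f1 <- -x * dnorm(x)   # Phi, phi, phi'
  (F - a) * f0^2 + (F^2/2 - a*F + (a^2/2 + b)) * f1
}
a <- b <- seq(0, 1, length.out = 201)
inside <- outer(a, b, function(a, b) g(0, a, b) * g(1, a, b) < 0)
image(a, b, inside + 0, col = c("white", "lightblue"), xlab = "a", ylab = "b")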
48,699 | How to choose the right number of parameters in Logistic Regression? | Distortion of statistical properties can occur when you "fit to the data", so I think of this more in terms of specifying the number of parameters that I can afford to estimate and that I want to devote to the portion of the model that pertains to that one predictor. I use regression splines, place knots where $X$ is dense, and specify the number of knots (or the number of parameters, and back-calculate the number of knots) by asking (1) what does the sample size and distribution of $Y$ support, and (2) what is the signal:noise ratio in this dataset. When $n \uparrow$ or the signal:noise ratio $\uparrow$, I can use more knots. There is no set formula for the number of parameters that should be fitted, although in a minority of situations you can use cross-validation or AIC to determine this. As you mentioned, shrinkage is a great alternative, because you can start out with many parameters and then shrink the coefficients down to what cross-validation or effective AIC dictate.
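As a concrete illustration of the spline idea (my own sketch with simulated data, using the base splines package rather than any particular modeling framework): the degrees of freedom spent on one predictor are fixed in advance, and candidate budgets can be compared with AIC.
library(splines)

set.seed(1)
n <- 1000
x <- runif(n, 0, 10)
y <- rbinom(n, 1, plogis(-1 + sin(x)))   # hypothetical nonlinear signal

# Natural cubic splines with knots at quantiles of x; df controls the
# number of parameters devoted to this predictor
fit2 <- glm(y ~ ns(x, df = 2), family = binomial)
fit4 <- glm(y ~ ns(x, df = 4), family = binomial)
AIC(fit2, fit4)                          # compare parameter budgets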
48,700 | Confidence Interval for predictions for Poisson regression | To address Q1, let's start by making some data to play with:
lo.to.p <- function(lo){ # this function will convert log odds to probabilities
  o <- exp(lo) # we get odds by exponentiating log odds
  p <- o/(o+1) # we convert to probabilities
  return(p)
}
set.seed(90) # this makes the example reproducible
x <- runif(100, min=0, max=100) # I generate some x data from a uniform dist
lo <- -.5 + .1*x # this is the linear predictor
p <- lo.to.p(lo) # converting log odds to probabilities
y <- rbinom(100, size=1, prob=p) # generating observed y values
foo <- data.frame(x=x, y=y)
# @Gavin's code:
mod <- glm(y ~ x, data=foo, family=binomial)
preddat <- with(foo, data.frame(x=seq(min(x), max(x), length=100)))
preds <- predict(mod, newdata=preddat, type="link", se.fit=TRUE)
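One standard way to turn these link-scale fits and standard errors into a confidence band — sketched here with a normal-approximation critical value, reusing the preds object above — is to form the interval on the link scale and then back-transform with the inverse logit:
crit <- qnorm(0.975)   # ~1.96 for a 95% band
ci <- with(preds, data.frame(
  prob = plogis(fit),                    # back-transform to probabilities
  lwr  = plogis(fit - crit * se.fit),
  upr  = plogis(fit + crit * se.fit)))
head(ci)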
Now, why not try to get predicted values and a confidence interval / band by just using the original data:
preds2 <- predict(mod, newdata=foo$x, type="link", se.fit=TRUE)
That throws an error, because predict() needs the newdata argument to get a data frame:
# Error in eval(predvars, data, env) :
# numeric 'envir' arg not of length one
So let's try with the original data as a data frame:
preds3 <- predict(mod, newdata=data.frame(x=foo$x), type="link", se.fit=TRUE)
That time it worked, so let's see what the output looks like (I used our lo.to.p() function to convert the output from predict() to predicted probabilities, as @Gavin suggested; note that you can also call predict() with type="response" to do that automatically):
Using the original data frame yields a garbled mess. You can sort the data first, which works OK in this case, but generally is not as smooth / pretty. To better show the effect of this strategy, I slightly augmented the data and model. Here's the code for the sorted version:
foo2 <- with(foo, data.frame(x=c(x, -100), y=c(y,0)))
mod2 <- glm(y~x, data=foo2, family=binomial)
preds4 <- predict(mod2, newdata=data.frame(x=sort(foo2$x)), type="link",
se.fit=TRUE)
Regarding Q2, the statistical theory behind generalized linear models (GLiMs) assumes that the sampling distribution of a parameter estimate is asymptotically normally distributed (i.e., 'at infinity'). It is well known that this is not necessarily true for small samples, but the sampling distribution may be 'normal enough'. At any rate, this is (possibly) true on the scale of the linear predictor, which I call lo above; but because the link function is a non-linear transformation, it isn't necessarily true on the response scale. To use an easy example, the normal distribution goes to infinity on both sides, but the response scale is bounded at 0 and 1. Moreover, all of these points hold for the Poisson distribution just like the binomial. Although it's not exactly the same topic, it may help to read my answer here: difference between logit and probit models, because it provides a lot of information about link functions and GLiMs that may help with the larger conceptual framework.
For Q3, yes, there is a relationship between the SEs of your coefficients and the width of the confidence band, but the confidence band is a little more complicated. The width of the confidence band grows as you move left or right away from the mean of x. (You can get the general idea from my answer here: linear regression prediction interval.) On the other hand, with a GLiM, the width of the confidence band also depends on the predicted value. To more easily see these effects, we can look at the confidence band for our original model on the scale of the linear predictor, and for a second model where there is no effect of x. Here's the second model:
y2 <- rbinom(100, size=1, prob=.5)
mod2 <- glm(y2~x, family=binomial)
preds5 <- predict(mod2, newdata=data.frame(x=sort(foo$x)), type="link",
se.fit=TRUE)
Here's what they look like:
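Since the question itself concerned a Poisson regression, the same construction carries over with a log link; here is a minimal sketch with made-up count data (my own example, not from the answer above):
set.seed(1)
xp     <- runif(200, 0, 3)
counts <- rpois(200, lambda = exp(0.5 + 0.8 * xp))   # hypothetical counts
pmod   <- glm(counts ~ xp, family = poisson)

newd <- data.frame(xp = seq(0, 3, length.out = 50))
pp   <- predict(pmod, newdata = newd, type = "link", se.fit = TRUE)
crit <- qnorm(0.975)
band <- data.frame(newd,
                   fit = exp(pp$fit),
                   lwr = exp(pp$fit - crit * pp$se.fit),
                   upr = exp(pp$fit + crit * pp$se.fit))
head(band)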