idx | question | answer
---|---|---
52,701 | Comparing coefficients in logistic regression | For those still looking at this old post, I found an article by King that might be useful: King, J.E. (2007). Standardized coefficients in logistic regression. Paper presented at the annual meeting of the Southwest Educational Research Association, San Antonio, TX, February, pp. 1-12.
52,702 | Showing $\mathbb{E}[T_n] = \theta \mathbb{E}_1[T_n]$ is scale equivariant? | Your random variables belong to what is called a scale family. The first step should be to show that if $X\sim F_1$ then $\theta\cdot X\sim F_\theta$.
A statistic $T_n(X_1,\ldots,X_n)$ is usually said to be scale invariant if $$T_n(\theta X_1,\ldots,\theta X_n)=T_n(X_1,\ldots,X_n),$$
i.e. if rescaling the data leaves the statistic unchanged, but sometimes it is taken to mean that
$$T_n(\theta X_1,\ldots,\theta X_n)=\theta T_n(X_1,\ldots,X_n),$$
i.e. that the statistic "scales in the right way", which seems to be the case here. If I understand your notation correctly, you wish to show that this is true for its expected value as well.
Let $X\sim F_\theta$ and $Y=\frac{1}{\theta}X\sim F_1$. Then you can show that
$$\mathbb{E}_\theta(T_n)=\mathbb{E}_\theta T_n(X_1,\ldots,X_n)=\mathbb{E}_\theta\theta T_n\Big(\frac{1}{\theta}X_1,\ldots,\frac{1}{\theta}X_n\Big)\\=\mathbb{E}_1 \theta T_n(Y_1,\ldots,Y_n)=\theta\cdot \mathbb{E}_1(T_n).$$
The crucial step is to justify the equality at the line break, i.e. that the expectation under $F_\theta$ of $\theta T_n(X_1/\theta,\ldots,X_n/\theta)$ equals the expectation under $F_1$ of $\theta T_n(Y_1,\ldots,Y_n)$.
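As a quick numerical check of the first step above (that $\theta\cdot X\sim F_\theta$ when $X\sim F_1$), here is a small R sketch; the exponential scale family is just an illustrative assumption, not something from the question.
## Illustrative check: in a scale family, theta * X ~ F_theta when X ~ F_1.
## Here F_1 = Exp(1), so F_theta = Exp(rate = 1/theta).
set.seed(1)
theta <- 3
x1 <- rexp(1e5, rate = 1)
ks.test(theta * x1, "pexp", rate = 1/theta)   # large p-value: distributions agree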
52,703 | Showing $\mathbb{E}[T_n] = \theta \mathbb{E}_1[T_n]$ is scale equivariant? | For completeness here is the solution (adding in the missing steps that @manst wanted me to think about).
\begin{align*}
\mathbb{E}_\theta[T_n] &= \int \ldots \int t_n (x_1, \ldots , x_n)\, f_\theta(x_1) \cdots f_\theta(x_n) \,\text{d}x_1 \cdots \text{d}x_n \\
\end{align*}
Since $F_\theta(x) = F(\frac{x}{\theta})$, we can write $f_\theta(x) = \frac{1}{\theta}f(\frac{x}{\theta})$, where $f(u) = F'(u)$, so
\begin{align*}
&= \int \ldots \int t_n (x_1, \ldots , x_n) \frac{1}{\theta}f\left(\frac{x_1}{\theta}\right) \cdots \frac{1}{\theta}f\left(\frac{x_n}{\theta}\right) \text{d}x_1 \cdots \text{d}x_n. \\
\end{align*}
Now let $\mathbf{u} = \mathbf{x}/\theta$, i.e. $\mathbf{x} = \theta \mathbf{u}$, so that $\text{d}x_i = \theta\,\text{d}u_i$:
\begin{align*}
&= \int \ldots \int t_n (\theta u_1, \ldots , \theta u_n) \frac{1}{\theta} f(u_1) \cdots \frac{1}{\theta}f(u_n)\, \theta\, \text{d}u_1 \cdots \theta\, \text{d}u_n \\
&= \int \ldots \int t_n (\theta u_1, \ldots , \theta u_n)\, f(u_1) \cdots f(u_n)\, \text{d}u_1 \cdots \text{d}u_n \\
&= \theta \int \ldots \int t_n ( u_1, \ldots , u_n)\, f(u_1) \cdots f(u_n)\, \text{d}u_1 \cdots \text{d}u_n \quad \text{(using } t_n(\theta u_1, \ldots, \theta u_n) = \theta\, t_n(u_1, \ldots, u_n)\text{)} \\
&= \theta\, \mathbb{E}_1[T_n]
\end{align*}
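As a sanity check of the final identity, here is a small Monte Carlo sketch; the exponential scale family and the choice $T_n =$ sample standard deviation (which satisfies $T_n(\theta \mathbf{x}) = \theta T_n(\mathbf{x})$) are illustrative assumptions only.
## Illustrative Monte Carlo check of E_theta[T_n] = theta * E_1[T_n]
set.seed(42)
theta <- 2.5; n <- 10; nrep <- 20000
Tn <- function(x) sd(x)                                       # scale-equivariant statistic
lhs <- mean(replicate(nrep, Tn(rexp(n, rate = 1/theta))))     # E_theta[T_n]
rhs <- theta * mean(replicate(nrep, Tn(rexp(n, rate = 1))))   # theta * E_1[T_n]
c(lhs = lhs, rhs = rhs)                                       # agree up to simulation error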
52,704 | Simple introduction to MCMC with Dirichlet process prior? | It's not a paper, but I found some Matlab code that implements a DP prior for an infinite Gaussian mixture model. The code uses Gibbs sampling to infer a GMM (and the number of components in the mixture) over some input data. The code is pretty readable and it has helped me quite a bit to see the DP in action in a concrete example.
52,705 | Simple introduction to MCMC with Dirichlet process prior? | MCMC sampling for DPMMs is quite challenging, for several reasons,
the main one being that the model is infinite and the distribution is not
that easy to work with. Employing algorithms such as Metropolis is non-trivial,
since there are actually quite a few degrees of freedom in how you specify
your proposal distributions and acceptance ratios.
You can trace part of the evolution of DPMM MCMC algorithms in the work
of Radford Neal (University of Toronto) here.
You could start with this 1998 paper and move on
to his split and merge for conjugate and non-conjugate models
from 2000 and 2005. These will help you understand the main ideas
behind MCMC sampling for DPMMs.
When you feel brave you can hop on to the most recent advances.
One of them is Chang and Fisher's (MIT) 2013 paper, which you can
find here. You can also find code and a demo on Jason Chang's
website.
If you need a more general mathematical exploration of DPs you can find many
of them online. A good one is by Yee Whye Teh (UCL). I would post more links but
my reputation is not enough at this point :-)
the main one being that the model is infinite and the distribution is not
that easy to work with. Employing algorithms such as | Simple introduction to MCMC with Dirichlet process prior?
MCMC sampling for DPMM is quite challenging and that's for many reasons,
the main one being that the model is infinite and the distribution is not
that easy to work with. Employing algorithms such as metropolis is non-trivial
since there are actually quite a few degrees of freedom in how you specify
your proposal distributions and acceptance ratios.
You can trace part of the evolution of DPMM MCMC algorithms in the work
of Radford Neal (University of Toronto) here.
You could start with this 1998 paper and move on
to his split and merge for conjugate and non-conjugate models
from 2000 and 2005. These will help you understand the main ideas
behind MCMC sampling for DPMMs.
When you feel brave you can hop on to the most recent advances.
One of them is Chang's and Fisher's (MIT) 2013 paper which you can
find here. You can also find code and a demo on Jason Chang's
website.
If you need a more general mathematical exploration of DPs you can find many
of them online. A good one is by Yee Whye Teh (UCL). I would post more links but
my reps are not enough at this point :-) | Simple introduction to MCMC with Dirichlet process prior?
MCMC sampling for DPMM is quite challenging and that's for many reasons,
the main one being that the model is infinite and the distribution is not
that easy to work with. Employing algorithms such as |
52,706 | Simple introduction to MCMC with Dirichlet process prior? | I have the same feeling. This is the closest I've come:
http://www.cs.cmu.edu/~kbe/dp_tutorial.pdf
The algorithm explained starting at page 37 is understandable; however, I still wish to see a very simple example written out step by step so I'm more confident of what each term means.
I attempted to implement the algorithm in R; my code is below. It is not efficient, and I am not sure it is all correct!
#Dirichlet mixture of normals
#flat prior on mu - posterior normal
#flat prior on precision - posterior gamma
###generate data
library(mixtools)
n=100
x=rnormmix(n,lambda=c(.5,.5),mu=c(-4,2),sigma=c(1,1))
###get probs for each component
cProbs=function(x,c,etamu,etavar,alpha,i,n){
c1=c[-i]
cProb=rep(NA,length(unique(c1)))
for(j in 1:length(cProb))
cProb[j]=length(which(c1==unique(c1)[j]))/(n-1+alpha)*dnorm(x[i],etamu[j],sqrt(etavar[j]))
return(cProb)
}
###get probs for new component
newProb=function(x,alpha,i,n){
s=sqrt(sum((x-mean(x))^2)/(n-1))
#predictive distribution is t
newProb1=alpha/(n-1+alpha)*dt((x[i]-mean(x))/(s*sqrt(1+1/n)),n-1)
return(newProb1)
}
#parms
alpha=.01
nsim=100
#initialize
etamu=mean(x)
etavar=var(x)
c=rep(1,n)
idx=0
#loop
repeat{
idx=idx+1
for(i in 1:n){
####if c_i singleton remove
if(sum(c==c[i])==1){
etamu=etamu[-c[i]]
etavar=etavar[-c[i]]
c[c>c[i]]=c[c>c[i]]-1
c[i]=1
}
####draw new c_i
#get probabilities for components
probsExisting=cProbs(x,c,etamu,etavar,alpha,i,n)
newP=newProb(x,alpha,i,n)
#sample
temp=sample(1:(length(unique(c))+1),size=1,prob=c(probsExisting,newP))
newC=ifelse(temp==length(unique(c))+1,1,0)
c[i]=temp
####if new c_i draw new eta
if(newC==1){
#var: posterior is inverse gamma
newVar1=1/rgamma(1,(n-1)/2,(n-1)/2*sum((x-mean(x))^2)/(n-1))
etavar=c(etavar,newVar1)
#mean: posterior is normal
newMean1=rnorm(1,x[i],sqrt(newVar1/n))
etamu=c(etamu,newMean1)
}
} #end loop over observations
#update etas
for(i in 1:length(unique(c))){
n1=sum(c==i)
if(n1>1){
temp=x[which(c==i)]
etavar[i]=1/rgamma(1,(n1-1)/2,(n1-1)/2*sum((temp-mean(temp))^2)/(n1-1))
etamu[i]=rnorm(1,mean(temp),sqrt(etavar[i]/n1))
}
} #end update etas
if(idx==nsim) break
} #end repeat
###plot results
probs=rep(NA,length(etamu))
for(i in 1:length(probs))
probs[i]=sum(c==i)/n
grid=seq(min(x)-1,max(x)+1,length=500)
dens=rep(NA,length=length(grid))
for(i in 1:length(grid))
dens[i]=sum(probs*dnorm(grid[i],etamu,sqrt(etavar)))
hist(x,freq=FALSE)
lines(grid,dens,col='red',lwd=2)
52,707 | Simple introduction to MCMC with Dirichlet process prior? | My belief is that there can hardly be such a paper, because the concepts and implementation details are not straightforward. Hence, even the best papers will be somewhat involved, and you need to work through them.
52,708 | Simple introduction to MCMC with Dirichlet process prior? | (Edited) Since no one has added this yet, let me contribute here.
Sethuraman's stick-breaking representation of the Dirichlet process is incredibly useful for understanding the DP model and also for simulating from a DP. Basically, Sethuraman tells us that one can think of the weights which appear in the Dirichlet process as products of Beta-distributed stick-breaking proportions. Here's the link to the paper: http://www.jstor.org/stable/24305538?seq=1#page_scan_tab_contents
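For illustration, here is a minimal R sketch of the stick-breaking construction, truncated at K sticks; the values of alpha and K, and the normal base distribution, are arbitrary choices for the example.
## Truncated stick-breaking draw from DP(alpha, G0), with G0 = N(0, 3^2)
set.seed(1)
alpha <- 1; K <- 25
v <- rbeta(K, 1, alpha)               # stick-breaking proportions
w <- v * cumprod(c(1, 1 - v[-K]))     # weights: w_k = v_k * prod_{j<k} (1 - v_j)
atoms <- rnorm(K, 0, 3)               # atoms drawn from the base distribution
sum(w)                                # close to 1 for moderately large K
## (w, atoms) approximate a draw G = sum_k w_k * delta_{atoms_k}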
One of the main reasons why the DP is hard to handle is that it's literally an infinite mixture model. An excellent paper by Ishwaran tells us that it's alright to approximate the DP by a finite mixture model. Here's the paper: https://people.eecs.berkeley.edu/~jordan/sail/readings/archive/ishwaran-Mixture.pdf.
Once you have internalized the above methods, it's fairly straightforward to do Gibbs sampling with a DP with, say, a normal base distribution. Let's say that your data y also come from a normal distribution, and let's also say that the cluster means of your data (y) come from a DP. Then here are the steps for a simple Gibbs sampler with a DP:
1. Choose a large K (the total number of clusters). Initialize the cluster mean and variance for each cluster, and initialize the stick-breaking prior.
2. Choose a data point.
3. Compute the probability that the data point lies in a particular cluster.
4. Do Step 3 for all of the clusters.
5. Normalize the above probabilities so that they add up to 1.
6. Using these probabilities, allocate the data point to a particular cluster.
7. Do Steps 3-6 for all your data points.
8. Now update the number of points in each cluster.
9. Using basic Bayesian methods, update the cluster means and cluster variances.
10. Repeat the above process for however many iterations you want.
11. Discard the first half of your samples.
Voilà, you have done MCMC for your DP model.
Here's an excellent piece of R code shared by Brian Neelon, which uses the above two concepts to build an elegant DP sampler:
http://people.musc.edu/~brn200/r/DP03.r
52,709 | Mean and standard deviation of Gaussian Distribution | You can estimate them. The best estimate of the mean of the Gaussian distribution is the mean of your sample, that is, the sum of your sample divided by the number of elements in it.
$$\bar{x} = \frac{1}{n}\sum_{i=1}^nx_i$$
The most common estimate of the standard deviation of a Gaussian distribution is
$$\bar{s} = \sqrt{\frac{1}{n-1}\sum_{i=1}^n\left(x_i - \bar{x}\right)^2}.$$
Here, $x_i$ is the $i^\text{th}$ number in your sample. See Wikipedia for details.
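In R, mean() and sd() implement exactly these two estimators; a minimal illustration with a made-up sample:
## Hypothetical sample, purely for illustration
x <- c(4.1, 5.3, 3.8, 4.9, 5.0)
mean(x)                                   # sample mean, (1/n) * sum(x)
sd(x)                                     # sqrt( sum((x - mean(x))^2) / (n - 1) )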
52,710 | When are order statistics not sufficient? | The order statistics are just the sorted data values, so for any case where the data is univariate iid, the order statistics have the exact same information as the original data (just in a different order). If the order in the data matters (not iid, e.g. time series) then the order statistics don't have that information, and that would be one case where they were not sufficient. Another case would be non-univariate cases: the order statistics of X and the order statistics of Y would be sufficient for the X and Y distributions separately, but not for covariance or correlation parameters.
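A small R sketch of the time-series case (an illustrative construction, not from the original answer): an AR(1) series and a random shuffle of it share the same order statistics, yet their lag-1 autocorrelations differ, so the order statistics cannot be sufficient for the autocorrelation parameter.
## Same sorted values, very different serial dependence
set.seed(1)
x <- as.numeric(arima.sim(model = list(ar = 0.9), n = 200))   # AR(1) series
y <- sample(x)                                                # same values, order destroyed
identical(sort(x), sort(y))                                   # TRUE: identical order statistics
c(acf(x, plot = FALSE)$acf[2], acf(y, plot = FALSE)$acf[2])   # lag-1 ACF: high vs near zero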
52,711 | How is it possible that these variances are equal? | They look about equal to me.
A good visual test to estimate or compare standard deviations (after checking for obvious outliers) is to look at the range of a dataset. For a given sample size, the range will typically be near a fixed multiple of the SD. With around 250 independent samples of a normal distribution, for instance, the range will be around 7 times the SD. So here we have ranges of around 1.3 (left panel), 1.0 (middle panel), and 1.1 (right panel), and each panel comprises about the same amount of data. Thus the ratios of variances, which will equal the squares of the ratios of the ranges, range from around $(1.3 : 1.1)^2$ = about $1.4$ down to $(1.0 : 1.1)^2$ = about $0.8$. You could use an F-test as a very rough estimate of significance, but I would reduce the degrees of freedom to account for the evidently strong serial correlation. Regardless, it wouldn't be unusual to get a pair of F-statistics (with 250 and 250 degrees of freedom) in the range $(0.8, 1.4)$ and reducing the df only increases that chance. In light of that, a p-value of $0.26$ looks fine.
Actually, you don't need a formal test here: it's pointless, because obviously these residuals aren't anywhere near independent and they exhibit strong trends within the series. There might not be much the regression can do to eliminate all the serial correlation, but a richer model is needed to capture the variation that is evident here. Until that happens, there's little sense in checking for homogeneity of the residuals.
52,712 | Should a predictor, significant on its own but not with other predictors, be included in an overall multinomial logistic regression? | It depends whether you are doing...
a) predictive research, where you don't care about what is causally responsible, only what serves as an efficient set of indicators, or
b) explanatory research, where you want to disentangle causal relationships as much as you can.
In the latter, when multiple correlated predictors vie for a role in your equation, you would care about such things as giving "causal credit" to earlier factors over later ones, since what comes later could never cause what came before, but sometimes the reverse is true. You would care about giving more "credit" to relatively objective, relatively fixed variables such as marital status or ethnicity than to relatively subjective, changeable ones such as attitudes and opinions. And (and here I'm paraphrasing James Davis's The Logic of Causal Order) you would want to choose more generative factors such as socioeconomic status over less generative ones such as what brand of toothpaste a person uses.
When your candidate predictors are correlated, no statistical algorithm (such as a stepwise regression) can deal with these issues of explanation. It is up to you as a researcher to think through your candidate variables and choose those that will best serve your purpose. It is only in pure predictive research that you can ignore such issues and simply choose those predictors that account for the most variance in the outcome--or, in your case, produce the highest pseudo-r-squared.
Your question gets to the heart of important issues in multivariate modelling of many types, and if more than 5 tags were allowed I would have also listed multicollinearity, model-building, and/or variable selection.
52,713 | Should a predictor, significant on its own but not with other predictors, be included in an overall multinomial logistic regression? | As @rolando2 mentioned, this depends very much on what you're trying to accomplish or what question(s) you are trying to answer.
If you are trying to find a good model for prediction, then rather than just deciding whether to include a term or not, it is better to use some type of shrinkage method such as penalized regression, ridge regression, lasso/LARS, or model averaging.
You should also take into account outside knowledge about the variables. If my doctor had a choice of 2 predictive models to help in diagnosing me, I would prefer that he use the one that uses blood pressure as a predictor rather than the one that uses the results from an exploratory surgery, even if it has a slightly smaller $R^2$ value.
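As a rough sketch of the shrinkage approach with the glmnet package (the predictor matrix x and factor outcome y are placeholders, not objects from the question):
## Lasso-penalized multinomial logistic regression (hypothetical x and y)
library(glmnet)
cvfit <- cv.glmnet(x, y, family = "multinomial", alpha = 1)   # alpha = 1: lasso penalty
plot(cvfit)                            # cross-validated deviance along the lambda path
coef(cvfit, s = "lambda.1se")          # coefficients at a conservative choice of lambda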
52,714 | Should a predictor, significant on its own but not with other predictors, be included in an overall multinomial logistic regression? | If predictive accuracy is the main objective, then it is generally better to use regularisation to address problems such as correlated predictor variables and not perform any feature selection. This is because feature selection is difficult. Most often feature selection is performed by optimising some feature selection criterion evaluated over a finite dataset. Since only a finite dataset is used, the feature selection criterion has a non-zero variance, and hence it is possible to over-fit the feature selection criterion (and get a set of features that is optimal for this particular sample of data, but not for the true underlying distribution, and hence generalisation is poor). Over-fitting is always most dangerous when you have many degrees of freedom with which to optimise the criterion, and in feature selection, there is one per feature. For regularisation (e.g. ridge regression or regularised logistic regression) there is only one degree of freedom (the ridge parameter) and so the risk of over-fitting is generally lower (but it doesn't go away completely). This is the advice given in the appendix of Miller's monograph "Subset Selection in Regression" (but without the reasoning, IIRC).
If you can identify the variables that are the causal "parents" of the quantity you seek to predict, then using only those features has the advantage that the model will still work well when extrapolating or under covariate shift (e.g. the sampling of the data uses a different distribution), as your model will represent the true causal structure, rather than mere correlations. So if extrapolation or covariate shift is an issue, causal feature selection may be helpful (although in practice identifying causal relationships is unreliable). Isabelle Guyon has much to say that is well worth listening to on this topic (just found a videolecture here that I am going to watch now).
There is no need for the same model to be used for explication and for prediction, so I would say fit two models: one with feature selection to help you understand the problem/data, and a second model with no feature selection but with properly tuned regularisation to use for prediction.
52,715 | Recursive partitioning using rpart() method in R | Perhaps you misunderstood the message? It is saying that, having built the tree using the control parameters specified, only the variables mpa_a and tc_b have been involved in splits. All the variables were considered, but just these two were needed.
That tree seems quite small; do you have only a small sample of observations? If you want to grow a bigger tree for subsequent pruning back, then you need to alter the minsplit and minbucket control parameters. See ?rpart.control, e.g.:
rm <- rpart(uloss ~ tc_b + ublkb + mpa_a + mpa_b +
            sys_a + sys_b + usr_a, data = data81, method = "anova",
            control = rpart.control(minsplit = 2, minbucket = 1))
would try to fit a full tree --- but it will be hopelessly over-fitted to the data and you must prune it back using prune(). However, that might assure you that rpart() used all the data.
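One possible way to do that pruning, sketched here for illustration, is to pick the complexity parameter from the cross-validation table that rpart stores in the fitted object:
## Inspect the cross-validated error, then prune at a chosen cp
printcp(rm)                               # xerror column holds the CV error
plotcp(rm)                                # visual aid for picking cp
best.cp <- rm$cptable[which.min(rm$cptable[, "xerror"]), "CP"]
rm.pruned <- prune(rm, cp = best.cp)      # pruned tree for further use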
52,716 | Recursive partitioning using rpart() method in R | If the number of observations is less than around 20,000, the trees built by rpart do not have a reliable structure. That is, if you were to use the bootstrap to repeat the process, you would see many different trees that are each called 'optimal'.
52,717 | ARMA modeling in R | The simplest way to arrive at values for $p$ and $q$ is to use the auto.arima function from the forecast package. There is no simple way in any statistical package to arrive at good values; the main reason is that there is no universal definition of good.
Since you mention overfitting, one possible approach is to fit ARIMA models for different values of $p$ and $q$ and then pick the one which is best according to your overfitting criterion (out-of-sample forecasting performance, for example). auto.arima does basically the same; you can choose between AIC, AICc and BIC to let auto.arima pick the best model.
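A short sketch of the auto.arima route (y here is a placeholder for your series; d = 0 restricts the search to pure ARMA models):
## Hypothetical series y; ic = "bic" makes the order search use BIC
library(forecast)
fit <- auto.arima(y, d = 0, ic = "bic", stepwise = FALSE)
summary(fit)                    # the chosen p and q appear in the ARIMA(p,0,q) label
plot(forecast(fit, h = 20))     # forecasts from the selected model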
52,718 | ARMA modeling in R | One option is to fit a series of ARMA models with combinations of $p$ and $q$ and work with the model that has the best "fit". Here I evaluate "fit" using BIC to attempt to penalise overly complex fits. An example is shown below for the in-built Mauna Loa $\mathrm{CO}_2$ concentration data set
## load the data
data(co2)
## take only data up to end of 1990 - predict for remaining data later
CO2 <- window(co2, end = c(1990, 12))
## Set up the parameter sets over which we want to operate
CO2.pars <- expand.grid(ar = 0:2, diff = 1, ma = 0:2, sar = 0:1,
sdiff = 1, sma = 0:1)
## As you are only wanting ARMA, then you would need something like
## pars <- expand.grid(ar = 0:4, diff = 0, ma = 0:4)
## and where you choose the upper and lower limits - here 0 and 4
## A vector to hold the BIC values for each combination of model
CO2.bic <- rep(0, nrow(CO2.pars))
## loop over the combinations, fitting an ARIMA model and recording the BIC
## for that model. Note we use AIC() with extra penalty given by `k`
for (i in seq(along = CO2.bic)) {
CO2.bic[i] <- AIC(arima(CO2, unlist(CO2.pars[i, 1:3]),
unlist(CO2.pars[i, 4:6])),
k = log(length(CO2)))
}
## identify the model with lowest BIC
CO2.pars[which.min(CO2.bic), ]
## Refit the model with lowest BIC
CO2.mod <- arima(CO2, order = c(0, 1, 1), seasonal = c(0, 1, 1))
CO2.mod
## Diagnostics plots
tsdiag(CO2.mod, gof.lag = 36)
## predict for the most recent data
pred <- predict(CO2.mod, n.ahead = 7 * 12)
upr <- pred$pred + (2 * pred$se) ## upper and lower confidence intervals
lwr <- pred$pred - (2 * pred$se) ## approximate 95% pointwise
## plot what we have done
ylim <- range(co2, upr, lwr)
plot(co2, ylab = expression(CO[2] ~ (ppm)), main = expression(bold(Mauna ~ Loa ~ CO[2])),
xlab = "Year", ylim = ylim)
lines(pred$pred, col = "red")
lines(upr, col = "red", lty = 2)
lines(lwr, col = "red", lty = 2)
legend("topleft", legend = c("Observed", "Predicted", "95% CI"),
col = c("black", "red", "red"), lty = c(1, 1, 2), bty = "n") | ARMA modeling in R | One option is to fit a series of ARMA models with combinations of $p$ and $q$ and work with the model that has the best "fit". Here I evaluate "fit" using BIC to attempt to penalise overly complex fit | ARMA modeling in R
52,719 | Model performance metrics for ordinal response | A good measure is Somers' Dxy rank correlation, a generalization of ROC area for ordinal or continuous Y. It is computed for ordinal proportional odds regression in the lrm function in the rms package.
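A minimal sketch of what that looks like (the data frame d and the variables y, x1, x2 are placeholders):
## Proportional odds model; lrm reports rank-discrimination indexes including Dxy
library(rms)
fit <- lrm(y ~ x1 + x2, data = d)   # y: ordered factor outcome (hypothetical)
fit$stats["Dxy"]                    # Somers' Dxy; also shown by print(fit)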
52,720 | How to produce Theil's U with package forecast 2.16 in R? | It does. Use the accuracy() command.
Update: here is an example.
library(forecast)
x <- EuStockMarkets[1:200,1]
f <- EuStockMarkets[201:300,1]
fit1 <- ses(x,h=100)
accuracy(fit1,f)
ME RMSE MAE MPE MAPE MASE ACF1 Theil's U
0.8065983 78.1801986 63.2728352 -0.1725009 3.7876802 7.0619776 0.9586859 6.6120277
If you want the in-sample value of U (which is of limited value), the following will work:
fpe <- fit1$fitted[2:200]/x[1:199] - 1
ape <- x[2:200]/x[1:199] - 1
U <- sqrt(sum((fpe - ape)^2)/sum(ape^2))
52,721 | Tutorial for using R to do multivariate regression? | This is my favorite one: Quick-R
52,722 | Tutorial for using R to do multivariate regression? | +1 for Quick-R.
Another great resource that I (re)turn to regularly is the website of the UCLA Statistical Consulting Group. In particular, it sounds like you might find their data analysis examples useful. Many of the cases walk through the logic of inquiry and model design steps in addition to providing sample code and datasets. They also have a separate section of textbook examples and code, which I have found useful for self-teaching purposes.
52,723 | Concept of a random linear model | One problem with the approach you outline is that the regressors $x_i$ will (on average) be uncorrelated, and one situation in which variable selection methods have difficulty is highly correlated regressors.
I'm not sure the concept of a 'random' linear model is very useful here, as you have to decide on a probability distribution over your model space, which seems arbitrary. I'd rather think of it as an experiment, and apply the principles of good experimental design.
Postscript: Here's one reference, but I'm sure there are others:
Andrea Burton, Douglas G. Altman, Patrick Royston, and Roger L. Holder. The design of simulation studies in medical statistics. Statistics in Medicine 25(24):4279-4292, 2006. DOI:10.1002/sim.2673
See also this related letter:
Hakan Demirtas. Statistics in Medicine 26(20):3818-3821, 2007. DOI:10.1002/sim.2876
Just found a commentary on a similar topic:
G. Maldonado and S. Greenland. The importance of critically interpreting simulation studies. Epidemiology 8(4):453-456, 1997. http://www.jstor.org/stable/3702591
One problem with the approach you outline is that the regressors $x_i$ will (on average) be uncorrelated, and one situation in which variable selection methods have difficulty is highly correlated regressors.
I'm not sure the concept of a 'random' linear model is very useful here, as you have to decide on a probability distribution over your model space, which seems arbitrary. I'd rather think of it as an experiment, and apply the principles of good experimental design.
Postscript: Here's one reference but I'm sure there are others:
Andrea Burton, Douglas G. Altman, Patrick Royston, and Roger L. Holder. The design of simulation studies in medical statistics. Statistics in Medicine 25(24):4279-4292, 2006. DOI:10.1002/sim.2673
See also this related letter:
Hakan Demirtas. Statistics in Medicine 26(20):3818-3821, 2007. DOI:10.1002/sim.2876
Just found a commentary on a similar topic:
G. Maldonado and S. Greenland. The importance of critically interpreting simulation studies. Epidemiology 8 (4):453-456, 1997. http://www.jstor.org/stable/3702591 | Concept of a random linear model
One problem with the approach you outline is that the regressors $x_i$ will (on average) be uncorrelated, and one situation in which variable selection methods have difficulty is highly correlated reg |
52,724 | Concept of a random linear model | To address @onestop's objection to non-correlated regressors, you could do the following:
Choose $n, k, l$, where $l$ is the number of latent factors.
Choose $\sigma_i$, the amount of 'idiosyncratic' volatility in the regressors.
Draw a $k \times l$ matrix, $F$, of exposures, uniformly on $(0,1)$. (you may want to normalize to sum 1 across rows of $F$.)
Draw an $n \times l$ matrix, $W$, of latent regressors as standard normals.
Let $X = W F^\top + \sigma_i E$ be the regressors, where $E$ is an $n\times k$ matrix drawn from a standard normal.
Proceed as before: draw a $k$-vector $\beta$ uniformly on $(0,1)$.
Draw an $n$-vector $\epsilon$ as a normal with variance $\sigma^2$.
Let $y = X\beta + \epsilon$. | Concept of a random linear model | To address @onestop's objection to non-correlated regressors, you could do the following:
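A minimal R sketch of this recipe (all object names below, such as Fmat standing in for the exposure matrix $F$, are my own choices, and the sizes are only illustrative):
set.seed(42)
n <- 200; k <- 10; l <- 3                  # observations, regressors, latent factors
sigma_i <- 0.5                             # idiosyncratic volatility of the regressors
sigma <- 1                                 # noise s.d. of the response
Fmat <- matrix(runif(k * l), k, l)         # k x l exposures, uniform on (0,1)
Fmat <- Fmat / rowSums(Fmat)               # optional: normalize rows to sum to 1
W <- matrix(rnorm(n * l), n, l)            # n x l latent regressors
E <- matrix(rnorm(n * k), n, k)            # idiosyncratic part
X <- W %*% t(Fmat) + sigma_i * E           # regressors now share common factors
beta <- runif(k)
y <- X %*% beta + rnorm(n, sd = sigma)
round(cor(X)[1:3, 1:3], 2)                 # off-diagonal correlations are well away from 0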
Choose $n, k, l$, where $l$ is the number of latent factors.
Choose $\sigma_i$, the amount of 'idiosyncratic | Concept of a random linear model
To address @onestop's objection to non-correlated regressors, you could do the following:
Choose $n, k, l$, where $l$ is the number of latent factors.
Choose $\sigma_i$, the amount of 'idiosyncratic' volatility in the regressors.
Draw a $k \times l$ matrix, $F$, of exposures, uniformly on $(0,1)$. (you may want to normalize to sum 1 across rows of $F$.)
Draw a $n \times l$ matrix, $W$, of latent regressors as standard normals.
Let $X = W F^\top + \sigma_i E$ be the regressors, where $E$ is an $n\times k$ matrix drawn from a standard normal.
Proceed as before: draw $k$ vector $\beta$ uniformly on $(0,1)$.
draw $n$ vector $\epsilon$ as a normal with variance $\sigma^2$.
Let $y = X\beta + \epsilon$. | Concept of a random linear model
To address @onestop's objection to non-correlated regressors, you could do the following:
Choose $n, k, l$, where $l$ is the number of latent factors.
Choose $\sigma_i$, the amount of 'idiosyncratic |
52,725 | A "systematic" part of a random time series component? | The Burns reference that you are quoting seems to dividing the stochastic part into autocorrelation error, which is a byproduct of any time series analysis (and is systematic), vs. truly random error which is uncontrollable.
-Ralph Winters | A "systematic" part of a random time series component? | The Burns reference that you are quoting seems to dividing the stochastic part into autocorrelation error, which is a byproduct of any time series analysis (and is systematic), vs. truly random error | A "systematic" part of a random time series component?
The Burns reference that you are quoting seems to divide the stochastic part into autocorrelation error, which is a byproduct of any time series analysis (and is systematic), vs. truly random error, which is uncontrollable.
-Ralph Winters | A "systematic" part of a random time series component?
The Burns reference that you are quoting seems to dividing the stochastic part into autocorrelation error, which is a byproduct of any time series analysis (and is systematic), vs. truly random error |
52,726 | A "systematic" part of a random time series component? | "random" is often used as if it was a real property of the data under study, where it should be replaced with "uncertain". To give an example, if I ask you how much money you earned over the past month, and you don't tell me, it is not "random", but just uncertain. However, treating the uncertainty as if it was random allows you to make some useful conclusions.
The noise is not "random" per se, but given that we usually have limited knowledge of how each particular piece of "noise" is generated, assuming that it is random can be useful.
Now whenever you fit a model to some data, you will have residuals from that model. And if treating the "noise" as if it was random was a good idea, then the residuals from the model should be consistent with whatever definition of "randomness" you have used in fitting the model. If they are not, then basically the time series is telling you that the "randomness" you assumed is not a good description of what is actually happening, and it gives you a clue as to what a better description might be.
For example, if I fit a linear relationship for the systematic part, but it is actually quadratic, then the so-called "random" noise will not look random at all; rather, it will contain the squared component of the systematic part.
To make it even more concrete, suppose that your response $Y$ is a deterministic function of $X$, say $Y=3+2X+X^2$. Now, because you don't know this function you suppose that $Y=\alpha+\beta X + error$, and because you have no reason to doubt the model prior to seeing the data, you assume that the error is just "random noise" (usually $N(0,\sigma^2)$). However, once you actually fit your data and look at the residuals, they will all line up as an exact quadratic function of $X$. Thus there is a systematic component to the "noise" (in fact the "noise" is entirely systematic). This is basically "Nature" telling "You" that your model is wrong, and gives a clue as to how it could be improved.
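A quick R illustration of this paragraph (my own simulation, not part of the original answer):
x <- seq(-3, 3, length.out = 100)
y <- 3 + 2 * x + x^2                       # purely deterministic response
fit <- lm(y ~ x)                           # misspecified linear model
plot(x, resid(fit))                        # the residuals trace out an exact parabola in x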
The same kind of thing is happening in the time series. You could just replace the model above with $Y_{1}=1,Y_{2}=6,Y_{3}=0.5,Y_{4}=10,Y_{5}=3,Y_{6}=10$ and for $t\geq7$ have $Y_t=10+2 Y_{t-1} -5Y_{t-1}^3 + 2Y_{t-5}$ and the same kind of thing would happen.
"random" is often used as if it was a real property of the data under study, where it should be replaced with "uncertain". To give an example, if I ask you what how much money you earned over the past month, and you don't tell me, it is not "random", but just uncertain. However, treating the uncertainty as if it was random allows you to make some useful conclusions.
The noise is not "random" per se, but given that we usually have limited knowledge of how each particular piece of "noise" is generated, assuming that it is random can be useful.
Now whenever you fit a model to some data, you will have residuals from that model. And if treating the "noise" as if it was random was a good idea, then the residuals from the model should be consistent with whatever definition of "randomness" you have used in fitting the model. If they are not, then basically the time series is telling you that the "randomness" you assumed is not a good description of what it actually happening, and it gives you a clue as to what a better description might be.
For example, if I fit a linear relationship for the systematic part, but it is actually quadratic, then the so-called "random" noise will not look random at all, rather it will contain the squared component of the systemtatic part.
To make it even more concrete, suppose that your response $Y$ is a deterministic function of $X$, say $Y=3+2X+X^2$. Now, because you don't know this function you suppose that $Y=\alpha+\beta X + error$, and because you have no reason to doubt the model prior to seeing the data, you assume that the error is just "random noise" (usually $N(0,\sigma^2)$). However, once you actually fit your data and look at the residuals, they will all line up as an exact quadratic function of the residuals. Thus there is a systematic component to the "noise" (in fact the "noise" is entirely systematic). This is basically "Nature" telling "You" that you model is wrong, and gives a clue as to how it could be improved.
The same kind of thing is happening in the time series. you could just replace the model above with $Y_{1}=1,Y_{2}=6,Y_{3}=0.5,Y_{4}=10,Y_{5}=3,Y_{6}=10$ and for $t\geq7$ have $Y_t=10+2 Y_{t-1} -5Y_{t-1}^3 + 2Y_{t-5}$ and the same kind of thing would happen. | A "systematic" part of a random time series component?
"random" is often used as if it was a real property of the data under study, where it should be replaced with "uncertain". To give an example, if I ask you what how much money you earned over the pas |
52,727 | A "systematic" part of a random time series component? | Systematic and unsystematic are rather ambiguous terms. One of the possible explanations is given by @probabilityislogic. Another may be given here. Since the context you gave is time series, I think this might be related to Wold's theorem. Unfortunately the Wikipedia text captures the essence, but does not go into the details of which part is systematic and which is non-systematic.
I did not manage to find an appropriate link to refer to, so I will try to give some explanation based on the book I have. This subject is also discussed in this book. I will not give precise and rigorous definitions, since they involve Hilbert spaces and other graduate mathematics stuff, which I think is not really necessary to get the point across.
Each covariance-stationary process $\{X_t,t\in \mathbb{Z}\}$ can be uniquely decomposed into two stationary processes: $X_t=M_t+N_t$, a singular $M_t$ and a regular $N_t$.
Singular and regular processes are defined via their prediction properties. In stationary process theory the prediction of the process $X_t$ at time $t$ is formed from the linear span of its history $(X_s,s<t)$. Singular processes are processes for which the prediction error:
$$E(\hat{X}_t-X_t)^2$$
is zero. Such processes sometimes are called deterministic, and in your context can be also called systematic. The most simple example of such process is $X_t=\eta$ for all $t$ and $\eta$ some random variable. Then the linear prediction of $X_t$ based on its history will always be $\eta$. The error of such prediction as defined above would be zero.
Regular stationary processes, on the other hand, cannot be predicted without error from their history. It can be shown that the stationary process $N_t$ is regular if and only if it admits an $MA(\infty)$ decomposition. This means that there exists a white-noise sequence $(\varepsilon_t)$ such that
$$N_t=\sum_{n=0}^{\infty}c_n\varepsilon_{t-n}.$$
where the coefficients $c_n$ are such that the equality holds. These processes are sometimes called non-deterministic, or probably non-systematic in your case.
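As a toy illustration (my own, not from the cited books), a series with both a singular and a regular part can be simulated directly in R:
set.seed(123)
eta <- rnorm(1)                            # singular part M_t = eta: the same draw at every t
N <- arima.sim(list(ma = 0.7), n = 300)    # regular part: an MA(1) process
X <- eta + N                               # X_t = M_t + N_t
# M_t is predicted from the past without error (it never changes), while N_t always leaves
# a one-step-ahead prediction error whose variance is that of the white noise.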
Systematic and unsystematic are rather ambiguous terms. One of the possible explanations is given by @probabilityislogic. Another may be given here. Since the context you gave is time series, I think this might be related to Wold's theorem. Unfortunately wikipedia text captures the essence, but does not go into the details of which part is systematic and non systematic.
I did not manage to find appropriate link to refer to, so I will try give some explanation based on the book I have. This subject is also discussed in this book. I will not give precise and rigorous definitions, since they involve Hilbert spaces and other graduate mathematics stuff, which I think is not really necessary to get the point across.
Each covariance-stationary process $\{X_t,t\in \mathbb{Z}\}$ can be uniquely decomposed into two stationary proceses: $X_t=M_t+N_t$, singular $M_t$ and regular $N_t$.
Singular and regular processes are defined via their prediction properties. In stationary process theory the prediction of process $X_t$ at time $t$ is formed from linear span of its history $(X_s,s<t)$. Singular processes are processes for which the prediction error:
$$E(\hat{X}_t-X_t)^2$$
is zero. Such processes sometimes are called deterministic, and in your context can be also called systematic. The most simple example of such process is $X_t=\eta$ for all $t$ and $\eta$ some random variable. Then the linear prediction of $X_t$ based on its history will always be $\eta$. The error of such prediction as defined above would be zero.
Regular stationary processes on the other hand cannot be predicted without error from their history. It can be shown that the stationary process $N_t$ is regular if and only if it admits $MA(\infty)$ decomposition. This means that there exists white-noise sequence $(\varepsilon_t)$ such that
$$N_t=\sum_{t=0}^{\infty}c_n\varepsilon_{t-n}.$$
where coefficients $c_n$ are such, that the equality holds. These processes sometimes are called non-deterministic, or probably non-systematic in your case. | A "systematic" part of a random time series component?
Systematic and unsystematic are rather ambiguous terms. One of the possible explanations is given by @probabilityislogic. Another may be given here. Since the context you gave is time series, I think |
52,728 | Testing if a coin is fair | It's neither because the alternative to being fair is that the coin favors heads or tails.
You are free to invent any test you like. For example, I could (idiosyncratically) decide the coin is unfair if and only if the number of heads is either 6 or 15 (the "critical region"), because this event occurs with only 5% chance when the coin is fair. The key question is how well does a test perform. The Neyman-Pearson lemma shows that this particular one I just invented is a poor test. A good test is one whose critical region not only is unlikely when the null is true, but is also highly likely when the null is false.
There is no one best critical region for this kind of two-sided test, but a reasonable compromise is to adopt a procedure that will detect deviations from fairness in both directions. That suggests a critical region that contains the most extreme possibilities: a bunch near 0 and a bunch near 20. A good choice at the 5% level is to consider any outcome of 15 or more or 5 or less to be significant.
Let us then adopt the best symmetric test for a fair coin. This means we want the critical region to include $20-i$ heads whenever it includes $i$ heads (that is, $20-i$ tails). This treats heads and tails on an equal footing. It is only in the context of a particular test (like this one) that a p-value has any meaning. The p-value corresponding to an outcome of 14 is, by definition, the smallest significance of any such test that includes 14 in its critical region. By symmetry it must include 20 - 14 = 6 and by the Neyman-Pearson lemma it must include all values larger than 14 and all values less than 6. The chance of this under the null is 11.53%. This chance increases uniformly as the chance of heads deviates more and more from 1/2 in either direction. | Testing if a coin is fair | It's neither because the alternative to being fair is that the coin favors heads or tails.
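The 11.53% figure is easy to check in R: it is $P(X \le 6) + P(X \ge 14)$ for $X \sim \text{Binomial}(20, 1/2)$.
sum(dbinom(c(0:6, 14:20), size = 20, prob = 0.5))   # approximately 0.1153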
You are free to invent any test you like. For example, I could (idiosyncratically) decide the coin is unfair | Testing if a coin is fair
It's neither because the alternative to being fair is that the coin favors heads or tails.
You are free to invent any test you like. For example, I could (idiosyncratically) decide the coin is unfair if and only if the number of heads is either 6 or 15 (the "critical region"), because this event occurs with only 5% chance when the coin is fair. The key question is how well does a test perform. The Neyman-Pearson lemma shows that this particular one I just invented is a poor test. A good test is one whose critical region not only is unlikely when the null is true, but is also highly likely when the null is false.
There is no one best critical region for this kind of two-sided test, but a reasonable compromise is to adopt a procedure that will detect deviations from fairness in both directions. That suggests a critical region that contains the most extreme possibilities: a bunch near 0 and a bunch near 20. A good choice at the 5% level is to consider any outcome of 15 or more or 5 or less to be significant.
Let us then adopt the best symmetric test for a fair coin. This means we want the critical region to include $20-i$ heads whenever it includes $i$ heads (that is, $20-i$ tails). This treats heads and tails on an equal footing. It is only in the context of a particular test (like this one) that a p-value has any meaning. The p-value corresponding to an outcome of 14 is, by definition, the smallest significance of any such test that includes 14 in its critical region. By symmetry it must include 20 - 14 = 6 and by the Neyman-Pearson lemma it must include all values larger than 14 and all values less than 6. The chance of this under the null is 11.53%. This chance increases uniformly as the chance of heads deviates more and more from 1/2 in either direction. | Testing if a coin is fair
It's neither because the alternative to being fair is that the coin favors heads or tails.
You are free to invent any test you like. For example, I could (idiosyncratically) decide the coin is unfair |
52,729 | References for use of symplectic geometry in statistics? | I know nothing whatsoever about symplectic geometry, but a bit of googling brought up a 1997 article in the Journal of Statistical Planning & Inference by Barndorff-Nielsen & Jupp, which contains this quote:
Some other links between statistics and symplectic geometry have been discussed
by Friedrich and Nakamura. Friedrich (1991) established some connections between
expected (Fisher) information and symplectic structures. However, his approach and
results are quite different from those considered here. Nakamura (1993, 1994) has
shown that certain parametric statistical models in which the parameter space M is an
even-dimensional vector space (and so has the symplectic structure of the cotangent
space of a vector space) give rise to completely integrable Hamiltonian systems on M.
The cited refs are:
Friedrich, T., 1991. Die Fisher-Information und symplektische Strukturen. Math. Nachr. 153: 273-296.
Nakamura, Y., 1993. Completely integrable gradient systems on the manifolds of Gaussian and multinomial
distributions. Japan. J. Ind. Appl. Math. 10: 179-189.
Nakamura, Y., 1994. Gradient systems associated with probability distributions. Japan. J. Ind. Appl. Math. 11: 21-30.
The article's Introduction says B-N & others have used differential geometry as an approach to statistical asymptotics. Symplectic geometry is a branch of differential geometry (according to Wikipedia). A Google Books search finds several books about the application of differential geometry to statistics and related fields such as econometrics. | References for use of symplectic geometry in statistics? | I know nothing whatsoever about symplectic geometry, but a bit of googling brought up a 1997 article in the Journal of Statistical Planning & Inference by Barndorff-Nielsen & Jupp, which contains this | References for use of symplectic geometry in statistics?
I know nothing whatsoever about symplectic geometry, but a bit of googling brought up a 1997 article in the Journal of Statistical Planning & Inference by Barndorff-Nielsen & Jupp, which contains this quote:
Some other links between statistics and symplectic geometry have been discussed
by Friedrich and Nakamura. Friedrich (1991) established some connections between
expected (Fisher) information and symplectic structures. However, his approach and
results are quite different from those considered here. Nakamura (1993, 1994) has
shown that certain parametric statistical models in which the parameter space M is an
even-dimensional vector space (and so has the symplectic structure of the cotangent
space of a vector space) give rise to completely integrable Hamiltonian systems on M.
The cited refs are:
Friedrich, T., 1991. Die Fisher-lnformation und symplectische Strukturen. Math. Nachr. 153: 273-296.
Nakamura, Y., 1993. Completely integrable gradient systems on the manifolds of Gaussian and multinomial
distributions. Japan. J. Ind. Appl. Math. 10: 179-189.
Nakamura, Y., 1994. Gradient systems associated with probability distributions. Japan. J. Ind. Appl. Math. 11: 21-30.
The article's Introduction says B-N & others have used differential geometry as an approach to statistical asymptotics. Symplectic geometry is a branch of differential geometry (according to Wikipedia). A Google Books search finds several books about the application of differential geometry to statistics and related fields such as econometrics. | References for use of symplectic geometry in statistics?
I know nothing whatsoever about symplectic geometry, but a bit of googling brought up a 1997 article in the Journal of Statistical Planning & Inference by Barndorff-Nielsen & Jupp, which contains this |
52,730 | References for use of symplectic geometry in statistics? | A direct connection would be unexpected: the two fields appear to have little in common. For example, a modern introduction to symplectic geometry published by the American Mathematical Society appears to make no mention of mathematical statistics at all.
At best it seems any connection would come through mathematical physics. A symplectic geometry on phase space naturally arises in the Hamiltonian formulation of classical mechanics and that in turn can be used to explore global properties of physical systems. The study of periodic and near-periodic orbits becomes somewhat statistical (e.g., ergodic theorems). When applied to a system with many degrees of freedom it would conceivably relate some aspects of symplectic geometry to thermodynamics, which is inherently a statistical theory | References for use of symplectic geometry in statistics? | A direct connection would be unexpected: the two fields appear to have little in common. For example, a modern introduction to symplectic geometry published by the American Mathematical Society appea | References for use of symplectic geometry in statistics?
A direct connection would be unexpected: the two fields appear to have little in common. For example, a modern introduction to symplectic geometry published by the American Mathematical Society appears to make no mention of mathematical statistics at all.
At best it seems any connection would come through mathematical physics. A symplectic geometry on phase space naturally arises in the Hamiltonian formulation of classical mechanics and that in turn can be used to explore global properties of physical systems. The study of periodic and near-periodic orbits becomes somewhat statistical (e.g., ergodic theorems). When applied to a system with many degrees of freedom it would conceivably relate some aspects of symplectic geometry to thermodynamics, which is inherently a statistical theory | References for use of symplectic geometry in statistics?
A direct connection would be unexpected: the two fields appear to have little in common. For example, a modern introduction to symplectic geometry published by the American Mathematical Society appea |
52,731 | References for use of symplectic geometry in statistics? | Symplectic model of Statistical Physics and Information Geometry is given by Souriau model of "Lie groups Thermodynamics":
Lie Group Cohomology and (Multi)Symplectic Integrators: New Geometric Tools for Lie Group Machine Learning Based on Souriau Geometric Statistical Mechanics,
Lie Group Statistics and Lie Group Machine Learning Based on Souriau Lie Groups Thermodynamics & Koszul-Souriau-Fisher Metric: New Entropy Definition as Generalized Casimir Invariant Function in Coadjoint Representation,
Souriau-Casimir Lie Groups Thermodynamics & Machine Learning: a slide presentation (https://franknielsen.github.io/SPIG-LesHouches2020/Barbaresco-SPILG2020.pdf),
and the YouTube version.
Lie Group Cohomology and (Multi)Symplectic Integrators: New Geometric Tools f | References for use of symplectic geometry in statistics?
Symplectic model of Statistical Physics and Information Geometry is given by Souriau model of "Lie groups Thermodynamics":
Lie Group Cohomology and (Multi)Symplectic Integrators: New Geometric Tools for Lie Group Machine Learning Based on Souriau Geometric Statistical Mechanics,
Lie Group Statistics and Lie Group Machine Learning Based on Souriau Lie Groups Thermodynamics & Koszul-Souriau-Fisher Metric: New Entropy Definition as Generalized Casimir Invariant Function in Coadjoint Representation,
(Souriau-Casimir
Lie Groups Thermodynamics
& Machine Learning: A slide presentation)(https://franknielsen.github.io/SPIG-LesHouches2020/Barbaresco-SPILG2020.pdf),
the youtube version | References for use of symplectic geometry in statistics?
Symplectic model of Statistical Physics and Information Geometry is given by Souriau model of "Lie groups Thermodynamics":
Lie Group Cohomology and (Multi)Symplectic Integrators: New Geometric Tools f |
52,732 | How to use Kernel Density Estimation for Prediction? | You can use conditional kernel density estimation to obtain the density of sales at time $t+h$ conditional on the values of sales at times $t, t-1, t-2, \dots$ This gives you a density forecast rather than a point forecast. The problem is that the conditioning is difficult in a density setting when the number of conditioning variables is more than 2. See this paper for a discussion of the basic idea.
An alternative procedure that imposes more assumptions (but allows more conditioning variables) is to fit an additive autoregression such as described in Chen and Tsay (1993) and then use kde on the residuals to obtain the forecast densities.
However, I suspect that both of these are more complicated than what you really want. I suggest you read a textbook on demand forecasting such as Levenbach and Cleary (2006). | How to use Kernel Density Estimation for Prediction? | You can use conditional kernel density estimation to obtain the density of sales at time $t+h$ conditional on the values of sales at times $t, t-1, t-2, \dots$ This gives you a density forecast rather | How to use Kernel Density Estimation for Prediction?
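A rough base-R sketch of the second idea (fit a simple autoregression for the mean, then apply a KDE to the residuals); the simulated data and all names here are my own and are not taken from the cited papers:
set.seed(7)
sales <- as.numeric(arima.sim(list(ar = 0.6), n = 120)) + 10   # fake sales series
fit <- ar(sales, order.max = 5)                        # autoregression for the mean
point <- as.numeric(predict(fit, n.ahead = 1)$pred)    # point forecast for t+1
dens <- density(na.omit(fit$resid))                    # KDE of the in-sample residuals
plot(dens$x + point, dens$y, type = "l", xlab = "sales at t+1", ylab = "forecast density")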
You can use conditional kernel density estimation to obtain the density of sales at time $t+h$ conditional on the values of sales at times $t, t-1, t-2, \dots$ This gives you a density forecast rather than a point forecast. The problem is that the conditioning is difficult in a density setting when the number of conditioning variables is more than 2. See this paper for a discussion of the basic idea.
An alternative procedure that imposes more assumptions (but allows more conditioning variables) is to fit an additive autoregression such as described in Chen and Tsay (1993) and then use kde on the residuals to obtain the forecast densities.
However, I suspect that both of these are more complicated than what you really want. I suggest you read a textbook on demand forecasting such as Levenbach and Cleary (2006). | How to use Kernel Density Estimation for Prediction?
You can use conditional kernel density estimation to obtain the density of sales at time $t+h$ conditional on the values of sales at times $t, t-1, t-2, \dots$ This gives you a density forecast rather |
52,733 | How to use Kernel Density Estimation for Prediction? | I would have thought that KDE bears little if any relationship to predicting future sales based on past sales. Sounds more like time series analysis to me, though that's really not my area. | How to use Kernel Density Estimation for Prediction? | I would have thought that KDE bears little if any relationship to predicting future sales based on past sales. Sounds more like time series analysis to me, though that's really not my area. | How to use Kernel Density Estimation for Prediction?
I would have thought that KDE bears little if any relationship to predicting future sales based on past sales. Sounds more like time series analysis to me, though that's really not my area. | How to use Kernel Density Estimation for Prediction?
I would have thought that KDE bears little if any relationship to predicting future sales based on past sales. Sounds more like time series analysis to me, though that's really not my area.
52,734 | Libraries for forest and funnel plots | Well, I use graphviz, which has Java bindings (Grappa).
Although the dot language (graphviz's syntax) is simple, I prefer to use graphviz as a library through the excellent and production-stable Python bindings, pygraphviz, and networkx.
Here's the code for a simple 'funnel diagram' using those tools; it's not the most elaborate diagram, but it is complete--it initializes the graph object, creates all of the necessary components, styles them, renders the graph, and writes it to file.
import networkx as NX
import pygraphviz as PV
G = PV.AGraph(strict=False, directed=True) # initialize graph object
# create graph components:
node_list = ["Step1", "Step2", "Step3", "Step4"]
edge_list = [("Step1", "Step2"), ("Step2", "Step3"), ("Step3", "Step4")]
G.add_nodes_from(node_list)
G.add_edge("Step1", "Step2")
G.add_edge("Step2", "Step3")
G.add_edge("Step3", "Step4")
# style them:
nak = "fontname fontsize fontcolor shape style fill color size".split()
nav = "Arial 11 white invtrapezium filled cornflowerblue cornflowerblue 1.4".split()
nas = dict(zip(nak, nav))
for k, v in nas.iteritems() :
    G.node_attr[k] = v
eak = "fontname fontsize fontcolor dir arrowhead arrowsize arrowtail".split()
eav = "Arial 10 red4 forward normal 0.8 inv".split()
eas = dict(zip(eak, eav))
for k, v in eas.iteritems() :
    G.edge_attr[k] = v
n1 = G.get_node("Step1")
n1.attr['fontsize'] = '11'
n1.attr['fontcolor'] = 'red4'
n1.attr['label'] = '1411'
n1.attr['shape'] = 'rectangle'
n1.attr['width'] = '1.4'
n1.attr['height'] = '0.05'
n1.attr['color'] = 'firebrick4'
n4 = G.get_node("Step4")
n4.attr['shape'] = 'rectangle'
# it's simple to scale graph features to indicate 'flow' conditions, e.g., scale
# each container size based on how many items each holds in a given time snapshot:
# (instead of setting node attribute ('width') to a static quantity, you would
# just bind 'n1.attr['width']' to a variable such as 'total_from_container_1'
n1 = G.get_node("Step2")
n1.attr['width'] = '2.4'
# likewise, you can do the same with edgewidth (i.e., make the arrow thicker
# to indicate higher 'flow rate')
e1 = G.get_edge("Step1", "Step2")
e1.attr['label'] = ' 1411'
e1.attr['penwidth'] = 2.6
# and you can easily add labels to the nodes and edges to indicate e.g., quantities:
e1 = G.get_edge("Step2", "Step3")
e1.attr['label'] = ' 392'
G.write("conv_fnl.dot") # save the dot file
G.draw("conv_fnl.png") # save the rendered diagram
[Rendered funnel diagram: http://a.imageshack.us/img148/390/convfunnel.png] | Libraries for forest and funnel plots | Well, I use graphviz, which has Java bindings (Grappa).
Although the dot language (graphviz's syntax) is simple, i prefer to use graphviz as a library through the excellent and production-stable pytho | Libraries for forest and funnel plots
Well, i use graphviz, which has Java bindings (Grappa).
Although the dot language (graphviz's syntax) is simple, i prefer to use graphviz as a library through the excellent and production-stable python bindings, pygraphviz, and networkx.
Here's the code for a simple 'funnel diagram' using those tools; it's not the most elaborate diagram, but it is complete--it initializes the graph object, creates all of the necessary components, styles them, renders the graph, and writes it to file.
import networkx as NX
import pygraphviz as PV
G = PV.AGraph(strict=False, directed=True) # initialize graph object
# create graph components:
node_list = ["Step1", "Step2", "Step3", "Step4"]
edge_list = [("Step1, Step2"), ("Step2", "Step3"), ("Step3", "Step4")]
G.add_nodes_from(node_list)
G.add_edge("Step1", "Step2")
G.add_edge("Step2", "Step3")
G.add_edge("Step3", "Step4")
# style them:
nak = "fontname fontsize fontcolor shape style fill color size".split()
nav = "Arial 11 white invtrapezium filled cornflowerblue cornflowerblue 1.4".split()
nas = dict(zip(nak, nav))
for k, v in nas.iteritems() :
G.node_attr[k] = v
eak = "fontname fontsize fontcolor dir arrowhead arrowsize arrowtail".split()
eav = "Arial 10 red4 forward normal 0.8 inv".split()
eas = dict(zip(eak, eav))
for k, v in eas.iteritems() :
G.edge_attr[k] = v
n1 = G.get_node("Step1")
n1.attr['fontsize'] = '11'
n1.attr['fontcolor'] = 'red4'
n1.attr['label'] = '1411'
n1.attr['shape'] = 'rectangle'
n1.attr['width'] = '1.4'
n1.attr['height'] = '0.05'
n1.attr['color'] = 'firebrick4'
n4 = G.get_node("Step4")
n4.attr['shape'] = 'rectangle'
# it's simple to scale graph features to indicate 'flow' conditions, e.g., scale
# each container size based on how many items each holds in a given time snapshot:
# (instead of setting node attribute ('width') to a static quantity, you would
# just bind 'n1.attr['width']' to a variable such as 'total_from_container_1'
n1 = G.get_node("Step2")
n1.attr['width'] = '2.4'
# likewise, you can do the same with edgewidth (i.e., make the arrow thicker
# to indicate higher 'flow rate')
e1 = G.get_edge("Step1", "Step2")
e1.attr['label'] = ' 1411'
e1.attr['penwidth'] = 2.6
# and you can easily add labels to the nodes and edges to indicate e.g., quantities:
e1 = G.get_edge("Step2", "Step3")
e1.attr['label'] = ' 392'
G.write("conv_fnl.dot") # save the dot file
G.draw("conv_fnl.png") # save the rendered diagram
alt text http://a.imageshack.us/img148/390/convfunnel.png | Libraries for forest and funnel plots
Well, i use graphviz, which has Java bindings (Grappa).
Although the dot language (graphviz's syntax) is simple, i prefer to use graphviz as a library through the excellent and production-stable pytho |
52,735 | Libraries for forest and funnel plots | The rmeta package in R can produce forest and funnel plots.
http://cran.r-project.org/web/packages/rmeta/index.html | Libraries for forest and funnel plots | The rmeta package in R can produce forest and funnel plots.
http://cran.r-project.org/web/packages/rmeta/index.html | Libraries for forest and funnel plots
The rmeta package in R can produce forest and funnel plots.
http://cran.r-project.org/web/packages/rmeta/index.html | Libraries for forest and funnel plots
The rmeta package in R can produce forest and funnel plots.
http://cran.r-project.org/web/packages/rmeta/index.html |
52,736 | Libraries for forest and funnel plots | In addition to the rmeta package there is also the meta package in R, which produce publication quality plots. | Libraries for forest and funnel plots | In addition to the rmeta package there is also the meta package in R, which produce publication quality plots. | Libraries for forest and funnel plots
In addition to the rmeta package there is also the meta package in R, which produces publication-quality plots.
In addition to the rmeta package there is also the meta package in R, which produces publication-quality plots.
52,737 | Where is a good place to find survey results? | The best place to find survey data related to the social sciences is the ICPSR data clearinghouse: http://www.icpsr.umich.edu/icpsrweb/ICPSR/access/index.jsp
Also, the 'survey' tag on Infochimps has many interesting and free data sets: http://infochimps.org/tags/survey | Where is a good place to find survey results? | The best place to find survey data related to the social sciences is the ICPSR data clearinghouse: http://www.icpsr.umich.edu/icpsrweb/ICPSR/access/index.jsp
Also, the 'survey' tag on Infochimps has m | Where is a good place to find survey results?
The best place to find survey data related to the social sciences is the ICPSR data clearinghouse: http://www.icpsr.umich.edu/icpsrweb/ICPSR/access/index.jsp
Also, the 'survey' tag on Infochimps has many interesting and free data sets: http://infochimps.org/tags/survey | Where is a good place to find survey results?
The best place to find survey data related to the social sciences is the ICPSR data clearinghouse: http://www.icpsr.umich.edu/icpsrweb/ICPSR/access/index.jsp
Also, the 'survey' tag on Infochimps has m |
52,738 | Where is a good place to find survey results? | government websites usually .... I use the RITA a lot | Where is a good place to find survey results? | government websites usually .... I use the RITA a lot | Where is a good place to find survey results?
government websites usually .... I use the RITA a lot | Where is a good place to find survey results?
government websites usually .... I use the RITA a lot |
52,739 | When dealing with data imbalance, shouldn't we never compare models based on validation loss, or at least weight it? | You should use a loss that accurately reflects the "real world loss" you are trying to minimize by using your model (in the context of subsequent decisions). Then the "problem" disappears, or more precisely, never is a problem.
Suppose you have a rare disease, with an incidence of one in a hundred, but which is fatal. If you use a loss that does not account for the difference in consequences or costs, like accuracy, your model will be tempted to label all instances as negative. However, once you do include a much larger loss if an instance is incorrectly labeled "healthy" (a false negative) than for a false positive etc., you are actually comparing apples to apples, and the rarity of the target class in the validation sample is outweighed by the severity of the costs we incur on these cases by misclassifying them. (Of course, your dataset needs to be large enough so you actually do have some instances of the target class in the validation sample.)
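A toy R sketch of this idea (the prevalence, costs, threshold, and score below are illustrative assumptions, not real numbers):
set.seed(1)
n <- 10000
disease <- rbinom(n, 1, 0.01)                      # 1-in-100 prevalence
score <- 0.2 * disease + rnorm(n, sd = 0.1)        # an imperfect risk score
cost <- function(pred, truth, c_fn = 100, c_fp = 1)
  sum(c_fn * (truth == 1 & pred == 0) + c_fp * (truth == 0 & pred == 1))
cost(rep(0, n), disease)                           # "label everyone healthy": every case is missed
cost(as.integer(score > 0.1), disease)             # typically far cheaper despite many false positives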
You may find this thread interesting: Are unbalanced datasets problematic, and (how) does oversampling (purport to) help? | When dealing with data imbalance, shouldn't we never compare models based on validation loss, or at | You should use a loss that accurately reflects the "real world loss" you are trying to minimize by using your model (in the context of subsequent decisions). Then the "problem" disappears, or more pre | When dealing with data imbalance, shouldn't we never compare models based on validation loss, or at least weight it?
You should use a loss that accurately reflects the "real world loss" you are trying to minimize by using your model (in the context of subsequent decisions). Then the "problem" disappears, or more precisely, never is a problem.
Suppose you have a rare disease, with an incidence of one in a hundred, but which is fatal. If you use a loss that does not account for the difference in consequences or costs, like accuracy, your model will be tempted to label all instances as negative. However, once you do include a much larger loss if an instance is incorrectly labeled "healthy" (a false negative) than for a false positive etc., you are actually comparing apples to apples, and the rarity of the target class in the validation sample is outweighed by the severity of the costs we incur on these cases by misclassifying them. (Of course, your dataset needs to be large enough so you actually do have some instances of the target class in the validation sample.)
You may find this thread interesting: Are unbalanced datasets problematic, and (how) does oversampling (purport to) help? | When dealing with data imbalance, shouldn't we never compare models based on validation loss, or at
You should use a loss that accurately reflects the "real world loss" you are trying to minimize by using your model (in the context of subsequent decisions). Then the "problem" disappears, or more pre |
52,740 | When dealing with data imbalance, shouldn't we never compare models based on validation loss, or at least weight it? | Answering you with a question: if not validation loss then what? Certainly, the training metrics won't be any better here. The desirable scenario is that your validation set resembles the real-world data that you will see at prediction time. In such a case, if the real-world data is equally imbalanced, the performance on the validation set would be similar to the performance at prediction time.
If the proportion of the minority group in the validation set is different from that in the real-world data, you can use a weighted loss, as you noticed. However, there are many myths about and ways of handling imbalanced data, so you can read about them in more detail in other questions tagged as unbalanced-classes.
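As a minimal sketch (the helper below is my own, and the 1%/99% split stands in for an assumed real-world prevalence), a weighted validation log-loss can be computed in R as:
weighted_logloss <- function(p, y, w1, w0) {
  w <- ifelse(y == 1, w1, w0)
  -sum(w * (y * log(p) + (1 - y) * log(1 - p))) / sum(w)
}
# e.g. weighted_logloss(p_hat, y_val, w1 = 0.01 / mean(y_val), w0 = 0.99 / mean(1 - y_val))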
Finally, I'm not sure if this is what you mean, but a completely different problem is if what you aim for is having a model that is equally good for all the groups. This is a slightly different problem, since as Simpson's paradox shows, such a model does not need to necessarily work best for all the data, so you would need to decide if you care more about overall performance or the within-group performances. Again, this could be achieved by picking a loss function that reflects the problem you are trying to solve. | When dealing with data imbalance, shouldn't we never compare models based on validation loss, or at | Answering you with a question: if not validation loss then what? Certainly, the training metrics won't be any better here. The desirable scenario is that your validation set resembles the real-world d | When dealing with data imbalance, shouldn't we never compare models based on validation loss, or at least weight it?
Answering you with a question: if not validation loss then what? Certainly, the training metrics won't be any better here. The desirable scenario is that your validation set resembles the real-world data that you will see in prediction time. In such a case, if the real-world data is equally imbalanced, the performance on the validation set would be similar to the prediction time.
If the proportion of the minority group in the validation set is different than in the real-world data, you can use weighted loss, as you noticed. However there are many myths and ways of handling imbalanced data, so you can read in more detail about them in other questions tagged as unbalanced-classes.
Finally, I'm not sure if this is what you mean, but a completely different problem is if what you aim for is having a model that is equally good for all the groups. This is a slightly different problem, since as Simpson's paradox shows, such a model does not need to necessarily work best for all the data, so you would need to decide if you care more about overall performance or the within-group performances. Again, this could be achieved by picking a loss function that reflects the problem you are trying to solve. | When dealing with data imbalance, shouldn't we never compare models based on validation loss, or at
Answering you with a question: if not validation loss then what? Certainly, the training metrics won't be any better here. The desirable scenario is that your validation set resembles the real-world d |
52,741 | Error in Gaussian Process Implementation | The problem is the ill conditioning of your kernel matrix. These are the singular values of your kernel matrix k:
As you can see, many of them are numerically zero. This leads to nonsense when you compute np.linalg.inv.
You have two options. The most common is to simply add a scaled identity matrix to your kernel matrix with some small scaling value, like $10^{-6}$:
K_inv = np.linalg.inv(k + 1e-6 * np.eye(k.shape[0]))
This leads to correct predictions:
Another option is to notice that each time you explicitly compute a matrix inverse, Householder rolls around in his grave. We can allow him to rest easier by avoiding explicit inverse computation, and instead computing linear system solves. Like so:
def gp(x_observed, y_observed, x_new):
    k = kernel_func(x_observed, x_observed)
    K_star_star = kernel_func(x_new, x_new)
    K_star = kernel_func(x_observed, x_new)
    mu = K_star.T @ np.linalg.solve(k, y_observed)
    sigma = K_star_star - K_star.T @ np.linalg.solve(k, K_star)
    return mu, sigma
This also leads to good predictions: | Error in Gaussian Process Implementation | The problem is the ill conditioning of your kernel matrix. These are the singular values of your kernel matrix k:
As you can see, many of them are numerically zero. This leads to nonsense when you co | Error in Gaussian Process Implementation
The problem is the ill conditioning of your kernel matrix. These are the singular values of your kernel matrix k:
As you can see, many of them are numerically zero. This leads to nonsense when you compute np.linalg.inv.
You have two options. The most common is to simply add a scaled identity matrix to your kernel matrix with some small scaling value, like $10^{-6}$:
K_inv = np.linalg.inv(k + 1e-6 * np.eye(k.shape[0]))
This leads to correct predictions:
Another option is to notice that each time you explicitly compute a matrix inverse, Householder rolls around in his grave. We can allow him to rest easier by avoiding explicit inverse computation, and instead computing linear system solves. Like so:
def gp(x_observed, y_observed, x_new):
    k = kernel_func(x_observed, x_observed)
    K_star_star = kernel_func(x_new, x_new)
    K_star = kernel_func(x_observed, x_new)
    mu = K_star.T @ np.linalg.solve(k, y_observed)
    sigma = K_star_star - K_star.T @ np.linalg.solve(k, K_star)
    return mu, sigma
This also leads to good predictions: | Error in Gaussian Process Implementation
The problem is the ill conditioning of your kernel matrix. These are the singular values of your kernel matrix k:
As you can see, many of them are numerically zero. This leads to nonsense when you co |
52,742 | Error in Gaussian Process Implementation | @John Madden gave a good answer pointing to the root cause of the problem, but adding to it, you should not invert that matrix directly in the first place. Matrix inversion is generally inefficient and not recommended for all kinds of applications. This also applies to Gaussian processes. An efficient algorithm is given by Rasmussen (2006) as Algorithm 2.1 on p 19:
$$\begin{align}
L &= \operatorname{cholesky}(K + \sigma^2 I) \\
\alpha &= L^\top \backslash (L \backslash y) \\
\mu &= K_{*}^\top \alpha \\
v &= L \backslash K_{*} \\
\Sigma^2 &= K_{**} - v^\top v \\
\end{align}$$
where $K = k(X, X)$, $K_{*} = k(X, X_*)$ and $K_{**} = k(X_*, X_*)$. | Error in Gaussian Process Implementation | @John Madden gave a good answer pointing to the root cause of the problem, but adding to it, you should not invert that matrix directly in the first place. Matrix inversion is generally inefficient an | Error in Gaussian Process Implementation
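A direct R transcription of the algorithm (my own naming; note that R's chol() returns the upper-triangular factor U, so t(U) plays the role of L above):
gp_posterior <- function(K, K_star, K_star_star, y, sigma2) {
  U <- chol(K + sigma2 * diag(nrow(K)))
  alpha <- backsolve(U, forwardsolve(t(U), y))     # alpha = L' \ (L \ y)
  mu <- t(K_star) %*% alpha
  v <- forwardsolve(t(U), K_star)                  # v = L \ K_star
  Sigma <- K_star_star - t(v) %*% v
  list(mean = mu, cov = Sigma)
}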
@John Madden gave a good answer pointing to the root cause of the problem, but adding to it, you should not invert that matrix directly in the first place. Matrix inversion is generally inefficient and not recommended for all kinds of applications. This also applies to Gaussian processes. An efficient algorithm is given by Rasmussen (2006) as Algorithm 2.1 on p 19:
$$\begin{align}
L &= \operatorname{cholesky}(K + \sigma^2 I) \\
\alpha &= L^\top \backslash (L \backslash y) \\
\mu &= K_{*}^\top \alpha \\
v &= L \backslash K_{*} \\
\Sigma^2 &= K_{**} - v^\top v \\
\end{align}$$
where $K = k(X, X)$, $K_{*} = k(X, X_*)$ and $K_{**} = k(X_*, X_*)$. | Error in Gaussian Process Implementation
@John Madden gave a good answer pointing to the root cause of the problem, but adding to it, you should not invert that matrix directly in the first place. Matrix inversion is generally inefficient an |
52,743 | Poisson regression intercept downward bias when true intercepts are small | The score function is exactly unbiased
$$E_{\beta_0}[\sum_i x_i(y_i-\mu_i)]=0$$
In your case that simplifies to
$$E_{\beta_0}\Big[\sum_i (y_i-\exp\beta_0)\Big]=0$$
The parameter estimate is a non-linear function of the score, so that tells us it won't be exactly unbiased.
Can we work out the direction of the bias? Well, the mean of $Y$ is $\exp \beta_0$, so $\beta_0=\log EY$ and $\hat\beta=\log \bar Y$. The logarithm function is concave, and $E[\bar Y]=E[Y]=\exp\beta_0$ so we can use Jensen's inequality to see that the bias is downward. (Or draw a picture, like the one here only the other way up) | Poisson regression intercept downward bias when true intercepts are small | The score function is exactly unbiased
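A quick R simulation of this downward bias for a small true intercept (my own sketch; samples with no events at all, where the MLE is minus infinity, are simply dropped):
set.seed(1)
beta0 <- -3; n <- 100
est <- replicate(10000, {
  y <- rpois(n, exp(beta0))
  if (sum(y) == 0) NA_real_ else log(mean(y))      # intercept-only Poisson MLE is log(ybar)
})
mean(est, na.rm = TRUE) - beta0                    # clearly negative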
$$E_{\beta_0}[\sum_i x_i(y_i-\mu_i)]=0$$
In your case that simplifies to
$$E_{\beta_0}[\sum y_i-\exp\beta_0]=0$$
The parameter estimate is a non-linear function | Poisson regression intercept downward bias when true intercepts are small
The score function is exactly unbiased
$$E_{\beta_0}[\sum_i x_i(y_i-\mu_i)]=0$$
In your case that simplifies to
$$E_{\beta_0}[\sum y_i-\exp\beta_0]=0$$
The parameter estimate is a non-linear function of the score, so that tells us it won't be exactly unbiased.
Can we work out the direction of the bias? Well, the mean of $Y$ is $\exp \beta_0$, so $\beta_0=\log EY$ and $\hat\beta=\log \bar Y$. The logarithm function is concave, and $E[\bar Y]=E[Y]=\exp\beta_0$ so we can use Jensen's inequality to see that the bias is downward. (Or draw a picture, like the one here only the other way up) | Poisson regression intercept downward bias when true intercepts are small
The score function is exactly unbiased
$$E_{\beta_0}[\sum_i x_i(y_i-\mu_i)]=0$$
In your case that simplifies to
$$E_{\beta_0}[\sum y_i-\exp\beta_0]=0$$
The parameter estimate is a non-linear function |
52,744 | What exactly needs to be independent in GLMs? | What is actually required is conditional independence of the response variable. Conditional on the regressors, that is. A Poisson regression model - for independent data - is no different from an ordinary least squares model in this assumption except that the OLS conveniently expresses the random error as a separate parameter.
In a Poisson regression model, the specific form of the conditional response is debatable due to disagreements in the fields of probability and statistics. One popular option would be $Y/\hat{Y}$, which are still random variables albeit not exactly Poisson distributed - the value is considered an ancillary statistic like a residual, which does not depend on the estimated model parameters. An interesting thing to observe here is that, if the linear model is misspecified (such as omitted variable bias), this may induce a kind of "dependence" on the residue of the fitted component of the model that is not correctly captured.
Independence has a very specific probabilistic meaning, and most attempts to diagnose dependence with diagnostic tests are futile. This is mostly complicated by the important math fact that independence implies covariance is zero, but zero covariance does not imply independence.
Poisson regression and OLS are special cases of "generalized linear models" or GLMs, so we can conveniently deal with the independence of observations in GLMs. The classic OLS residual plot, which we use to detect heteroscedasticity, is effective for visualizing a covariance structure, but additional assumptions are needed to declare independence. In a GLM such as a Poisson, a non-trivial mean-variance relationship is an expected feature of the model; a standard residual plot would be useless. So we instead consider the Pearson residuals versus fitted values as a diagnostic. In R, a simple Poisson GLM can be simulated and checked with x <- seq(-3, 3, by=0.1); y <- rpois(length(x), exp(-3 + x)); f <- glm(y ~ x, family=poisson); plot(fitted(f), residuals(f, type="pearson")). The resulting graph shows a tapering curve of residuals and, arguably, a funnel shape, with a LOESS smoother showing a mostly constant and 0-expectation mean residual trend.
As an example, we may describe the distribution of the $x$ as the design of the study. While many examples here and in textbooks deal with $X$ as random, that's merely a convenience and not that reflective of reality. The design of experiment is to assess viral load of infected mouse models treated with antivirals at a sequence of dose concentrations, say, control, $X = (0 \text{ control}, 10, 50, 200, 1,000,$ and $5,000)$ mg/kg. For an effective ART, the sequence of viral loads is expected to be descending because of a dose-response relationship. The outcome might look $Y = (10^5, 10^5, 10^4, 10^3, LLOD, LLOD)$. This response vector is not unconditionally independent, there is a strong "autoregressive" trend induced by the design. Trivially, when the fitted effect is estimated through the regressions, the conditional response is completely mutually independent.
A more involved, real-life example is detailed in Agresti's Categorical Data Analysis, second edition, in Chapter 3 on Poisson regression. This deals with estimating the number of "satellite" crabs in a horseshoe crab nest, a sort of interesting polyamory. Data analyses can be found here.
What is actually required is conditional independence of the response variable. Conditional on the regressors, that is. A Poisson regression model - for independent data - is no different from an ordinary least squares model in this assumption except that the OLS conveniently expresses the random error as a separate parameter.
In a Poisson regression model, the specific form of the conditional response is debatable due to disagreements in the fields of probability and statistics. One popular option would be $Y/\hat{Y}$, which are still random variables albeit not exactly Poisson distributed - the value is considered an ancillary statistic like a residual, which does not depend on the estimated model parameters. An interesting thing to observe here is that, if the linear model is misspecified (such as omitted variable bias), this may induce a kind of "dependence" on the residue of the fitted component of the model that is not correctly captured.
Independence has a very specific probabilistic meaning, and most attempts to diagnose dependence with diagnostic tests are futile. This is mostly complicated by the important mathfact that independence implies covariance is zero, but zero covariance does not imply independence.
Poisson regression and OLS are special cases of "generalized linear models" or GLMs so we can conveniently deal with the independence of observations in GLMs. The classic OLS residual plot, which we use to detect heteroscedasticity, is effective to visualize a covariance structure, but additional assumptions are needed to declare independence. In a GLM such a Poisson, a non-trivial mean-variance relationship is an expected feature of the model; a standard residual plot would be useless. So we rather consider the Pearson residuals versus fitted as a diagnostic test. In R, a simple poisson GLM(x <- seq(-3, 3, by=0.1); y<-rpois(length(x), exp(-3 + x)); f <- glm(y ~ x, family=poisson). The resulting graph shows a tapering curve of residuals and, arguably a funnel shape, with a LOESS smoother showing a mostly constant and 0-expectation mean residual trend.
As an example, we may describe the distribution of the $x$ as the design of the study. While many examples here and in textbooks deal with $X$ as random, that's merely a convenience and not that reflective of reality. The design of experiment is to assess viral load of infected mouse models treated with antivirals at a sequence of dose concentrations, say, control, $X = (0 \text{ control}, 10, 50, 200, 1,000,$ and $5,000)$ mg/kg. For an effective ART, the sequence of viral loads is expected to be descending because of a dose-response relationship. The outcome might look $Y = (10^5, 10^5, 10^4, 10^3, LLOD, LLOD)$. This response vector is not unconditionally independent, there is a strong "autoregressive" trend induced by the design. Trivially, when the fitted effect is estimated through the regressions, the conditional response is completely mutually independent.
A more involved but real life example is detailed in Agresti's categorical data analysis second edition in Chapter 3 covering poisson regression. This deals with the issue of estimating the number "satellite crabs" in a horseshoe crab nest, a sort of interesting polyamory. Data analyses can be found here. | What exactly needs to be independent in GLMs?
What is actually required is conditional independence of the response variable. Conditional on the regressors, that is. A Poisson regression model - for independent data - is no different from an ordi |
52,745 | What exactly needs to be independent in GLMs? | A general form of expressing a model is
$$Y_i = f(\textbf{X}_i,\boldsymbol\beta,\epsilon_i)$$
The function $f$ expresses the random variable $Y_i$ in terms of a random latent variable $\epsilon_i$, a fixed/known regressor variable $\textbf{X}_i$, and some distribution parameters $\boldsymbol\beta$.
Note:
Here the subscript $i$ refers to the observation within the sample. In your question you have a subscript relating to the elements in the vector $\textbf{X}^{T} = (X_1, X_2, \ldots, X_p)$. You could also write it with two subscripts $\textbf{X}_i^{T} = (X_{1i}, X_{2i}, \ldots, X_{pi})$ where $\textbf{X}_i$ is a matrix and the row index relates to the observation and the column index relates to the regressor/feature.
It is the random part $\epsilon_i$ that is assumed to be independent. (often, more complex models can assume some dependency between the different $\epsilon_i$)
If you perform an experiment then the regressor variables $\textbf{X}_i$ can be dependent. For example you might repeat a measurement with the same values for several $\textbf{X}_i$. But what needs to be independent is the random part $\epsilon_i$.
Example:
Say we have
$$Y_i = Q(\mu = a + b x_i, p= \epsilon_i)$$
where $\epsilon_i$ is a uniformly distributed variable and $Q(\mu,p)$ is the quantile function of the Poisson distribution, with $\mu$ the mean and $p$ the quantile level.
Then some simulated data, with parameters $a = 10$ and $b=1$, could look like
x epsilon y
[1,] 10 0.26550866 17
[2,] 10 0.37212390 18
[3,] 10 0.57285336 21
[4,] 10 0.90820779 26
[5,] 20 0.20168193 25
[6,] 20 0.89838968 37
[7,] 20 0.94467527 39
[8,] 20 0.66079779 32
[9,] 30 0.62911404 42
[10,] 30 0.06178627 31
[11,] 30 0.20597457 35
[12,] 30 0.17655675 34
The R code to compute the above numbers is:
set.seed(1)
a = 10
b = 1
x = c(10,10,10,10,20,20,20,20,30,30,30,30)
epsilon = runif(12) # generate the noise part based on a uniform distribution
y = qpois(epsilon,a + b * x) # transform to the y variable using the quantile function
plot(x,y)
cbind(x,epsilon,y) | What exactly needs to be independent in GLMs? | A general form of expressing a model is
$$Y_i = f(\textbf{X}_i,\boldsymbol\beta,\epsilon_i)$$
The function $f$ expresses the random variable $Y_i$ in terms of a random latent variable $\epsilon_i$, a | What exactly needs to be independent in GLMs?
A general form of expressing a model is
$$Y_i = f(\textbf{X}_i,\boldsymbol\beta,\epsilon_i)$$
The function $f$ expresses the random variable $Y_i$ in terms of a random latent variable $\epsilon_i$, a fixed/known regressor variable $\textbf{X}_i$, and some distribution parameters $\boldsymbol\beta$.
Note:
Here the subscript $i$ refers to the observation within the sample. In your question you have a subscript relating to the elements in the vector $\textbf{X}^{T} = (X_1, X_2, \ldots, X_p)$. You could also write it with two subscripts $\textbf{X}_i^{T} = (X_{1i}, X_{2i}, \ldots, X_{pi})$ where $\textbf{X}_i$ is a matrix and the row index relates to the observation and the column index relates to the regressor/feature.
It is the random part $\epsilon_i$ that is assumed to be independent. (often, more complex models can assume some dependency between the different $\epsilon_i$)
If you perform an experiment then the regressor variables $\textbf{X}_i$ can be dependent. For example you might repeat a measurement with the same values for several $\textbf{X}_i$. But what needs to be independent is the random part $\epsilon_i$.
Example:
Say we have
$$Y_i = Q(\mu = a + b x_i, p= \epsilon_i)$$
where $\epsilon_i$ is a uniformly distributed variable and $Q(\mu,p)$ is the quantile function of the Poisson distribution, with $\mu$ the mean and $p$ the quantile level.
Then some simulated data, with parameters $a = 10$ and $b=1$, could look like
x epsilon y
[1,] 10 0.26550866 17
[2,] 10 0.37212390 18
[3,] 10 0.57285336 21
[4,] 10 0.90820779 26
[5,] 20 0.20168193 25
[6,] 20 0.89838968 37
[7,] 20 0.94467527 39
[8,] 20 0.66079779 32
[9,] 30 0.62911404 42
[10,] 30 0.06178627 31
[11,] 30 0.20597457 35
[12,] 30 0.17655675 34
The R code to compute the above numbers is:
set.seed(1)
a = 10
b = 1
x = c(10,10,10,10,20,20,20,20,30,30,30,30)
epsilon = runif(12) # generate the noise part based on a uniform distribution
y = qpois(epsilon,a + b * x) # transform to the y variable using the quantile function
plot(x,y)
cbind(x,epsilon,y) | What exactly needs to be independent in GLMs?
A general form of expressing a model is
$$Y_i = f(\textbf{X}_i,\boldsymbol\beta,\epsilon_i)$$
The function $f$ expresses the random variable $Y_i$ in terms of a random latent variable $\epsilon_i$, a |
52,746 | Boxplot | 5-Number-Summary | To clarify your doubt, consider the following example using the standard definition of the boxplot.
Suppose we have the following observations $x = (-40,0, 2, 3, 4,10, 40)$. The median is 3, the first quartile is $Q_1 = 1$, and the third quartile is $Q_3 = 7$, thus $\text{IQR} = 6$. Let $u = Q_3+1.5\times \text{IQR} =16$ and $l = Q_1-1.5\times \text{IQR}=-8$.
The upper whisker would then be
$$\max_{x_i\leq u} x,$$
which equals 10. The lower whisker would be
$$\min_{x_i\geq l} x,$$
which equals 0.
Therefore, observations -40 and 40 fall outside the whiskers, and are thus "outlying" observations.
The conclusion is thus: the maximum and the minimum observed values may or may not correspond to the whiskers, depending on the distribution of observations.
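If you want to verify these numbers yourself, a minimal R sketch with the same seven observations reproduces the quartiles, the IQR, and the flagged points (this is just one convenient way to do it):
x <- c(-40, 0, 2, 3, 4, 10, 40)
quantile(x)             # default method gives Q1 = 1 and Q3 = 7
IQR(x)                  # 6
boxplot.stats(x)$stats  # lower whisker, lower hinge, median, upper hinge, upper whisker: 0, 1, 3, 7, 10
boxplot.stats(x)$out    # points beyond the whiskers: -40 and 40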
Note: There are many ways to compute sample quantiles. In this example, I calculated them in R by the quantile function and using the default method. | Boxplot | 5-Number-Summary | To clarify your doubt, consider the following example using the standard definition of the boxplot.
Suppose we have the following observations $x = (-40,0, 2, 3, 4,10, 40)$. The median is 3, the first | Boxplot | 5-Number-Summary
To clarify your doubt, consider the following example using the standard definition of the boxplot.
Suppose we have the following observations $x = (-40,0, 2, 3, 4,10, 40)$. The median is 3, the first quartile is $Q_1 = 1$, and the third quartile is $Q_3 = 7$, thus $\text{IQR} = 6$. Let $u = Q_3+1.5\times \text{IQR} =16$ and $l = Q_1-1.5\times \text{IQR}=-8$.
The upper whisker would then be
$$\max_{x_i\leq u} x,$$
which equals 10. The lower whisker would be
$$\min_{x_i\geq l} x,$$
which equals 0.
Therefore, observations -40 and 40 fall outside the whiskers, and are thus "outlying" observations.
The conclusion is thus: the maximum and the minimum observed values may or may not correspond to the whiskers, depending on the distribution of observations.
Note: There are many ways to compute sample quantiles. In this example, I calculated them in R by the quantile function and using the default method. | Boxplot | 5-Number-Summary
To clarify your doubt, consider the following example using the standard definition of the boxplot.
Suppose we have the following observations $x = (-40,0, 2, 3, 4,10, 40)$. The median is 3, the first |
52,747 | Reject null hypothesis and alternative hypothesis simultaneously? | To your first question, a chi-square goodness of fit test only tests if the frequency distribution is different from your expectation. In this case, it is only testing one categorical factor. A chi-square test of independence tries to test if there is a relationship between multiple categorical factors. So in your case, you would be using the chi-square test of independence since you are using a 2x2 contingency table. That distinction can be seen in this useful link.
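To illustrate the distinction in R (the 2x2 counts below are made up purely for illustration, not taken from the question):
tab <- matrix(c(30, 20, 10, 40), nrow = 2,
              dimnames = list(group = c("A", "B"), outcome = c("yes", "no")))
chisq.test(tab)                                  # test of independence: is outcome related to group?
chisq.test(c(30, 20, 10, 40), p = rep(1/4, 4))   # goodness of fit: do counts match specified proportions?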
A chi-square test does not test the alternative hypothesis. It only tests the null hypothesis, which is that there is no relationship between your factors. This subject can be a bit of a rabbit hole, but for the sake of a chi-square test, just know that rejecting the null does not mean you have proven the alternative hypothesis.
The p value is important to a degree to have (you often must report it in academic journals as a requirement to their manuscript rules), but I think that you would also want to obtain an effect size to approximate the magnitude of the effect, such as Yule's Q coefficient. You should also meet the general assumptions required for a chi-square test as well. | Reject null hypothesis and alternative hypothesis simultaneously? | To your first question, a chi-square goodness of fit test only tests if the frequency distribution is different from your expectation. In this case, it is only testing one categorical factor. A chi-sq | Reject null hypothesis and alternative hypothesis simultaneously?
To your first question, a chi-square goodness of fit test only tests if the frequency distribution is different from your expectation. In this case, it is only testing one categorical factor. A chi-square test of independence tries to test if there is a relationship between multiple categorical factors. So in your case, you would be using the chi-square test of independence since you are using a 2x2 contingency table. That distinction can be seen in this useful link.
A chi-square test does not test the alternative hypothesis. It only tests the null hypothesis, which is that there is no relationship between your factors. This subject can be a bit of a rabbit hole, but for the sake of a chi-square test, just know that rejecting the null does not mean you have proven the alternative hypothesis.
The p value is important to a degree to have (you often must report it in academic journals as a requirement to their manuscript rules), but I think that you would also want to obtain an effect size to approximate the magnitude of the effect, such as Yule's Q coefficient. You should also meet the general assumptions required for a chi-square test as well. | Reject null hypothesis and alternative hypothesis simultaneously?
To your first question, a chi-square goodness of fit test only tests if the frequency distribution is different from your expectation. In this case, it is only testing one categorical factor. A chi-sq |
52,748 | Reject null hypothesis and alternative hypothesis simultaneously? | It appears that you have not formulated your null and alternative hypotheses in a way that is consistent with what you want to test. Your null hypothesis should be that the frequency of the word in corpus B is greater than or equal to the frequency of the word in corpus A.
Given that, once we see that the observed frequency of the word in corpus B is, in fact, greater than in corpus A, we already know that we aren't going to reject the null hypothesis.
An appropriate test for this one-sided hypothesis would be based on comparing the observed frequencies. In the real world, words aren't independent of each other, so any test based on the independence assumption will be seriously wrong. Still, for illustrative purposes, we carry on. In this case, given the large sample size and our assumption of independence, a Normal approximation for the distribution of the observed frequencies will work well.
Our test statistic $\tau$ is:
$$\tau = {\hat{p}_A-\hat{p}_B \over \sqrt{\hat{p}_0(1-\hat{p}_0) \cdot \left({1 \over N_A}+{1\over N_B}\right)}}$$
where $\hat{p}_A, \hat{p}_B$ are the observed percentages of the time the word occurs in corpus A and B respectively, and $\hat{p}_0$ is the observed percentage in the combined sample.
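As a rough sketch of the arithmetic (the counts below are invented for illustration; they are not the poster's corpora and will not reproduce the $\tau$ quoted in the next sentence):
xA <- 120; nA <- 1e5          # hypothetical: word count and total tokens in corpus A
xB <- 260; nB <- 1e5          # hypothetical: word count and total tokens in corpus B
pA <- xA / nA; pB <- xB / nB
p0 <- (xA + xB) / (nA + nB)   # pooled proportion
tau <- (pA - pB) / sqrt(p0 * (1 - p0) * (1/nA + 1/nB))
pnorm(tau, lower.tail = FALSE)  # one-sided p-value for Ha: p_A > p_B; near 1 when tau is very negative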
Our null hypothesis is that $p_A \leq p_B$. The calculations result in $\tau = -5.99$. The associated p-value is obviously close enough to one that it's not worth calculating. Consequently, we fail to reject the null hypothesis. | Reject null hypothesis and alternative hypothesis simultaneously? | It appears that you have not formulated your null and alternative hypotheses in a way that is consistent with what you want to test. Your null hypothesis should be that the frequency of the word in c | Reject null hypothesis and alternative hypothesis simultaneously?
It appears that you have not formulated your null and alternative hypotheses in a way that is consistent with what you want to test. Your null hypothesis should be that the frequency of the word in corpus B is greater than or equal to the frequency of the word in corpus A.
Given that, once we see that the observed frequency of the word in corpus B is, in fact, greater than in corpus A, we already know that we aren't going to reject the null hypothesis.
An appropriate test for this one-sided hypothesis would be based on comparing the observed frequencies. In the real world, words aren't independent of each other, so any test based on the independence assumption will be seriously wrong. Still, for illustrative purposes, we carry on. In this case, given the large sample size and our assumption of independence, a Normal approximation for the distribution of the observed frequencies will work well.
Our test statistic $\tau$ is:
$$\tau = {\hat{p}_A-\hat{p}_B \over \sqrt{\hat{p}_0(1-\hat{p}_0) \cdot \left({1 \over N_A}+{1\over N_B}\right)}}$$
where $\hat{p}_A, \hat{p}_B$ are the observed percentages of the time the word occurs in corpus A and B respectively, and $\hat{p}_0$ is the observed percentage in the combined sample.
Our null hypothesis is that $p_A \leq p_B$. The calculations result in $\tau = -5.99$. The associated p-value is obviously close enough to one that it's not worth calculating. Consequently, we fail to reject the null hypothesis. | Reject null hypothesis and alternative hypothesis simultaneously?
It appears that you have not formulated your null and alternative hypotheses in a way that is consistent with what you want to test. Your null hypothesis should be that the frequency of the word in c |
52,749 | Can we always write a random variable as conditional expectation plus independent error? | Suppose
$$
Y=X^2+u
$$
where $u|X\sim(0,X^2)$ has conditional heteroskedasticity. Then,
$$
\epsilon=Y-E(Y|X)=X^2+u-E(Y|X)=u,
$$
which has conditional mean zero but is not independent of $X$, as its second moment depends on $X$. | Can we always write a random variable as conditional expectation plus independent error? | Suppose
$$
Y=X^2+u
$$
where $u|X\sim(0,X^2)$ has conditional heteroskedasticity. Then,
$$
\epsilon=Y-E(Y|X)=X^2+u-E(Y|X)=u,
$$
which has conditional mean zero but is not independent of $X$, as its sec | Can we always write a random variable as conditional expectation plus independent error?
Suppose
$$
Y=X^2+u
$$
where $u|X\sim(0,X^2)$ has conditional heteroskedasticity. Then,
$$
\epsilon=Y-E(Y|X)=X^2+u-E(Y|X)=u,
$$
which has conditional mean zero but is not independent of $X$, as its second moment depends on $X$. | Can we always write a random variable as conditional expectation plus independent error?
Suppose
$$
Y=X^2+u
$$
where $u|X\sim(0,X^2)$ has conditional heteroskedasticity. Then,
$$
\epsilon=Y-E(Y|X)=X^2+u-E(Y|X)=u,
$$
which has conditional mean zero but is not independent of $X$, as its sec |
52,750 | What is a good technique for testing whether data is Rayleigh distributed? | Literally nothing you do with a sample will show you that the population distribution is Rayleigh (there's an infinite number of distributions that are not-Rayleigh, but nevertheless closer to your data than the Rayleigh is), but that's okay because you can bet the population distribution probably isn't exactly Rayleigh; even when you have a strong theoretical reason to think it should be Rayleigh, various things (like measurement error, for one example) will mean you don't quite have it in the actual data-generating process.
The best you'd normally hope for is that the Rayleigh may be a suitable/useful approximation.
George Box wrote "Remember that all models are wrong; the practical question is how wrong do they have to be to not be useful".
I think that summarizes it pretty well.
What a test could show you is that the data aren't very consistent with the population distribution being Rayleigh. However, failure to reject isn't necessarily helpful (since if your power was low you may have missed a substantial deviation from a Rayleigh), and on the other hand, rejection doesn't mean that a Rayleigh isn't a useful model (e.g. with large samples you may reject even though the Rayleigh is an excellent and useful approximation).
One easy way to check for whether a Rayleigh-distribution is a good approximation is to square the values and check for an exponential distribution - there are many tests and several possible diagnostic displays for that. If you must test, that's probably the easiest way to go about it -- square and test for exponential but I suggest diagnostic displays will serve you better than tests in most situations.
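A minimal sketch of that check in R, on simulated data (any exponential Q-Q display or test could be substituted here):
x <- rweibull(500, shape = 2)          # simulated Rayleigh-type data (Weibull with shape 2)
y <- x^2                               # if x is Rayleigh, y should be exponential
plot(qexp(ppoints(length(y))), sort(y),
     xlab = "exponential quantiles", ylab = "sorted squared data")
abline(0, mean(y))                     # rough reference line through the origin with slope = estimated mean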
If you must use a goodness of fit test, it looks like the Cramer-von Mises and Anderson-Darling tests (the particular versions adjusted for the estimation of the parameter) are good omnibus tests for the exponential. See the power study in Chapter 10 of D'Agostino and Stephens' Goodness of Fit Techniques.
(i) Note that taking the log of an exponential would yield a shifted version of a fully specified distribution, so a Q-Q plot of the log of the square (against quantiles for the negative of a Gumbel) should yield a plot with a slope of 1 and an intercept that is related to the scale parameter of the Rayleigh.
(ii) Alternatively, the Rayleigh is a Weibull with shape parameter 2, so a Weibull plot would also work (plot log data values against $\log(-\log(1-p_i))$ for $p_i = \frac{i-\alpha}{n+1-2\alpha}$ for some $\alpha$ between $0$ and $1$. In this case, $\alpha=0.3$ is common, though $\alpha=\frac{3}{8}$ which is common in normal Q-Q plots would work just about as well (in the plots below the large-n default in R of $\alpha=\frac12$ was used). For it to be Rayleigh you'd want to see a straight line with slope $\frac12$ (given the log-data is on the y-axis). The lower tail can wobble about a lot even if the data were exactly Rayleigh, though; the same problem also occurs with the plot in (i); they only differ in slope.
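Here is a sketch of that Weibull plot for simulated Rayleigh data (using R's default plotting positions rather than $\alpha = 0.3$):
x <- rweibull(500, shape = 2)                 # Rayleigh = Weibull with shape parameter 2
p <- ppoints(length(x))                       # plotting positions
plot(log(-log(1 - p)), sort(log(x)),
     xlab = "log(-log(1 - p))", ylab = "log data")
abline(lm(sort(log(x)) ~ log(-log(1 - p))))   # fitted line; the slope should be close to 1/2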
Another plot that should work pretty well (and suffers less from the wiggly lower tail that you get in the Weibull plot) is to take the 1/1.8 power of the data (the 1.8th root$^\dagger$) and do a normal Q-Q plot; that should look very close to straight. I should work out an approximate intercept and slope as functions of the Rayleigh scale parameter, but I have not done so.
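And the corresponding normal Q-Q version, again sketched on simulated data:
x <- rweibull(500, shape = 2)            # simulated Rayleigh data
qqnorm(x^(1/1.8)); qqline(x^(1/1.8))     # the 1.8th root should look close to straight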
Here's the mentioned plots, for simulated standard Rayleigh data.
For the data in your question we can look at these plots:
I didn't put on the two lines here because the curvature is strong enough that it's not worth worrying about the slope of the line. The left tail is shorter and the right tail is longer than you'd expect with any Weibull,
so worrying about whether it might be the specific Weibull that is the Rayleigh would be a waste of time.
$\dagger$ I should offer some justification of that 1.8th root lest it seem I just plucked it out of thin air, or used the data to arrive at it. With Weibull distributions, a shape parameter of around $3.6$ is very close to symmetric and reasonably close to normal (the parameter we might regard transition point from left skew through sort of symmetricalish to right skew depends on how you choose to measure skewness - there's an interval of more or less plausible values that are close to symmetry and not so clearly skew one way; 3.6 is a nice roundish number in this interval). You can convert between Weibulls by taking powers. e.g. to convert an exponential (shape 1) to a Weibull with shape 3.6, you take the 3.6th root (power 1/3.6)$^\ddagger$. Similarly to convert a Rayleigh (shape 2) to a Weibull with shape 3.6 you take the 1.8th root (power 1/1.8). Simple as that!
$\ddagger$ you might then wonder -- since the exponential is also a gamma distribution, why would we not use cube roots (power $1/3$ rather than $1/3.6$), as the Wilson-Hilferty transformation would suggest. The answer to that is for gamma with large shape parameter, the Wilson-Hilferty is indeed excellent at achieving near-symmetry and approximate normality, but with small shape parameters it's too weak to attain near-symmetry, and the result is still clearly right-skew. By the time the gamma shape gets down to $1$, the power needs to be somewhere about $1/3.6$ to attain near symmetry (a fact I discovered by trial and error before realizing that it was otherwise obvious that it must be about this because of the exponential also being a special case of the Weibull and I knew this number already). At somewhat smaller gamma even stronger transformations are needed, though as we progress through a sequence of ever-smaller gamma shape parameters, power transformations soon don't work all that well at symmetrizing it. | What is a good technique for testing whether data is Rayleigh distributed? | Literally nothing you do with a sample will show you that the population distribution is Rayleigh (there's an infinite number of distributions that are not-Rayleigh, but nevertheless closer to your da | What is a good technique for testing whether data is Rayleigh distributed?
Literally nothing you do with a sample will show you that the population distribution is Rayleigh (there's an infinite number of distributions that are not-Rayleigh, but nevertheless closer to your data than the Rayleigh is), but that's okay because you can bet the population distribution probably isn't exactly Rayleigh; even when you have a strong theoretical reason to think it should be Rayleigh, various things (like measurement error, for one example) will mean you don't quite have it in the actual data-generating process.
The best you'd normally hope for is that the Rayleigh may be a suitable/useful approximation.
George Box wrote "Remember that all models are wrong; the practical question is how wrong do they have to be to not be useful".
I think that summarizes it pretty well.
What a test could show you is that the data aren't very consistent with the population distribution being Rayleigh. However, failure to reject isn't necessarily helpful (since if your power was low you may have missed a substantial deviation from a Rayleigh), and on the other hand, rejection doesn't mean that a Rayleigh isn't a useful model (e.g. with large samples you may reject even though the Rayleigh is an excellent and useful approximation).
One easy way to check for whether a Rayleigh-distribution is a good approximation is to square the values and check for an exponential distribution - there are many tests and several possible diagnostic displays for that. If you must test, that's probably the easiest way to go about it -- square and test for exponential but I suggest diagnostic displays will serve you better than tests in most situations.
If you must use a goodness of fit test, it looks like the Cramer-von Mises and Anderson-Darling tests (the particular versions adjusted for the estimation of the parameter) are good omnibus tests for the exponential. See the power study in Chapter 10 of D'Agostino and Stephens' Goodness of Fit Techniques.
(i) Note that taking the log of an exponential would yield a shifted version of a fully specified distribution, so a Q-Q plot of the log of the square (against quantiles for the negative of a Gumbel) should yield a plot with a slope of 1 and an intercept that is related to the scale parameter of the Rayleigh.
(ii) Alternatively, the Rayleigh is a Weibull with shape parameter 2, so a Weibull plot would also work (plot log data values against $\log(-\log(1-p_i))$ for $p_i = \frac{i-\alpha}{n+1-2\alpha}$ for some $\alpha$ between $0$ and $1$. In this case, $\alpha=0.3$ is common, though $\alpha=\frac{3}{8}$ which is common in normal Q-Q plots would work just about as well (in the plots below the large-n default in R of $\alpha=\frac12$ was used). For it to be Rayleigh you'd want to see a straight line with slope $\frac12$ (given the log-data is on the y-axis). The lower tail can wobble about a lot even if the data were exactly Rayleigh, though; the same problem also occurs with the plot in (i); they only differ in slope.
Another plot that should work pretty well (and suffers less from the wiggly lower tail that you get in the Weibull plot) is to take the 1/1.8 power of the data (the 1.8th root$^\dagger$) and do a normal Q-Q plot; that should look very close to straight. I should work out an approximate intercept and slope as functions of the Rayleigh scale parameter, but I have not done so.
Here's the mentioned plots, for simulated standard Rayleigh data.
For the data in your question we can look at these plots:
I didn't put on the two lines here because the curvature is strong enough that it's not worth worrying about the slope of the line. The left tail is shorter and the right tail is longer than you'd expect with any Weibull,
so worrying about whether it might be the specific Weibull that is the Rayleigh would be a waste of time.
$\dagger$ I should offer some justification of that 1.8th root lest it seem I just plucked it out of thin air, or used the data to arrive at it. With Weibull distributions, a shape parameter of around $3.6$ is very close to symmetric and reasonably close to normal (the parameter we might regard transition point from left skew through sort of symmetricalish to right skew depends on how you choose to measure skewness - there's an interval of more or less plausible values that are close to symmetry and not so clearly skew one way; 3.6 is a nice roundish number in this interval). You can convert between Weibulls by taking powers. e.g. to convert an exponential (shape 1) to a Weibull with shape 3.6, you take the 3.6th root (power 1/3.6)$^\ddagger$. Similarly to convert a Rayleigh (shape 2) to a Weibull with shape 3.6 you take the 1.8th root (power 1/1.8). Simple as that!
$\ddagger$ you might then wonder -- since the exponential is also a gamma distribution, why would we not use cube roots (power $1/3$ rather than $1/3.6$), as the Wilson-Hilferty transformation would suggest. The answer to that is for gamma with large shape parameter, the Wilson-Hilferty is indeed excellent at achieving near-symmetry and approximate normality, but with small shape parameters it's too weak to attain near-symmetry, and the result is still clearly right-skew. By the time the gamma shape gets down to $1$, the power needs to be somewhere about $1/3.6$ to attain near symmetry (a fact I discovered by trial and error before realizing that it was otherwise obvious that it must be about this because of the exponential also being a special case of the Weibull and I knew this number already). At somewhat smaller gamma even stronger transformations are needed, though as we progress through a sequence of ever-smaller gamma shape parameters, power transformations soon don't work all that well at symmetrizing it. | What is a good technique for testing whether data is Rayleigh distributed?
Literally nothing you do with a sample will show you that the population distribution is Rayleigh (there's an infinite number of distributions that are not-Rayleigh, but nevertheless closer to your da |
52,751 | Do I need to test for autocorrelation or normality assumption if I am running the regression with standard errors? | With so many observations, tests for normality or autocorrelation will most likely end up giving extremely low $p$-values, suggesting to reject the null.
Using robust standard errors is fine and perfectly acceptable. Perhaps you may consider doing a bit of model selection (say, lasso, stepwise, subset regression, etc.) if you are uncertain as to which variables are relevant and which are not. | Do I need to test for autocorrelation or normality assumption if I am running the regression with st | With so many observations, tests for normality or autocorrelation will most likely end up giving extremely low $p$-values, suggesting to reject the null.
Using robust standard errors is fine and perfe | Do I need to test for autocorrelation or normality assumption if I am running the regression with standard errors?
With so many observations, tests for normality or autocorrelation will most likely end up giving extremely low $p$-values, suggesting to reject the null.
Using robust standard errors is fine and perfectly acceptable. Perhaps you may consider doing a bit of model selection (say, lasso, stepwise, subset regression, etc.) if you are uncertain as to which variables are relevant and which are not. | Do I need to test for autocorrelation or normality assumption if I am running the regression with st
With so many observations, tests for normality or autocorrelation will most likely end up giving extremely low $p$-values, suggesting to reject the null.
Using robust standard errors is fine and perfe |
52,752 | Do I need to test for autocorrelation or normality assumption if I am running the regression with standard errors? | As @utobi correctly notes in another answer, with such a large data set almost any test of a violation of model assumptions will tend to produce "statistically significant" results that might be practically unimportant. You need to apply your understanding of the subject matter carefully.
A big question is how much your robust standard errors differ from the usual OLS standard errors. If there is a big difference, it suggests that there is a problem with specification of your model (e.g., in the functional form of the regression, or assumptions about distributions and variance functions) that needs to be addressed.
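As a sketch of that diagnostic idea on simulated data (this assumes the sandwich and lmtest packages are installed; the model and numbers are only illustrative):
library(sandwich); library(lmtest)
set.seed(1)
x <- runif(500)
y <- 1 + 2 * x + rnorm(500, sd = 0.5 + 2 * x)    # deliberately heteroskedastic errors
fit <- lm(y ~ x)
summary(fit)$coefficients                        # classical OLS standard errors
coeftest(fit, vcov = vcovHC(fit, type = "HC3"))  # robust (sandwich) standard errors; compare the two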
King and Roberts discuss this in How Robust Standard Errors Expose Methodological Problems They Do Not Fix, and What to Do about It.* They show several examples, and provide code that includes a test they propose for evaluating the difference between standard and robust standard errors.
As they describe in Section 2, the simplest robust standard error estimator is based on an assumption that there is no autocorrelation. So if your data might involve autocorrelation and you didn't use a robust estimator designed to handle that (see Section 2.4 of King and Roberts), then your robust errors are already problematic.
It certainly is OK to report robust standard errors; as King and Roberts note, a great number of articles in political science and in other fields do so. But it's still important to evaluate and explain why they were needed. From Section 7 of King and Roberts:
Scholarly work that includes robust standard errors that differ from classical standard errors requires considerable scrutiny. At best their estimators are inefficient, but in all likelihood estimators from their model of at least some quantities are biased. The bigger the difference robust standard errors make, the stronger the evidence for misspecification. To be clear, merely choosing to report only classical standard errors is not the solution here, as our last empirical example illustrates. And reporting only robust standard errors, without classical standard errors, or only classical without robust standard errors, is similarly unhelpful.
Robust standard errors should be treated not as a way to avoid reviewer criticism or as a magical cure-all. They are neither. They should instead be used for their fundamental contribution—-as an excellent model diagnostic procedure.
*Final published version: Political Analysis (2015) 23:159–179 doi:10.1093/pan/mpu015 | Do I need to test for autocorrelation or normality assumption if I am running the regression with st | As @utobi correctly notes in another answer, with such a large data set almost any test of a violation of model assumptions will tend to produce "statistically significant" results that might be pract | Do I need to test for autocorrelation or normality assumption if I am running the regression with standard errors?
As @utobi correctly notes in another answer, with such a large data set almost any test of a violation of model assumptions will tend to produce "statistically significant" results that might be practically unimportant. You need to apply your understanding of the subject matter carefully.
A big question is how much your robust standard errors differ from the usual OLS standard errors. If there is a big difference, it suggests that there is a problem with specification of your model (e.g., in the functional form of the regression, or assumptions about distributions and variance functions) that needs to be addressed.
King and Roberts discuss this in How Robust Standard Errors Expose Methodological Problems They Do Not Fix, and What to Do about It.* They show several examples, and provide code that includes a test they propose for evaluating the difference between standard and robust standard errors.
As they describe in Section 2, the simplest robust standard error estimator is based on an assumption that there is no autocorrelation. So if your data might involve autocorrelation and you didn't use a robust estimator designed to handle that (see Section 2.4 of King and Roberts), then your robust errors are already problematic.
It certainly is OK to report robust standard errors; as King and Roberts note, a great number of articles in political science and in other fields do so. But it's still important to evaluate and explain why they were needed. From Section 7 of King and Roberts:
Scholarly work that includes robust standard errors that differ from classical standard errors requires considerable scrutiny. At best their estimators are inefficient, but in all likelihood estimators from their model of at least some quantities are biased. The bigger the difference robust standard errors make, the stronger the evidence for misspecification. To be clear, merely choosing to report only classical standard errors is not the solution here, as our last empirical example illustrates. And reporting only robust standard errors, without classical standard errors, or only classical without robust standard errors, is similarly unhelpful.
Robust standard errors should be treated not as a way to avoid reviewer criticism or as a magical cure-all. They are neither. They should instead be used for their fundamental contribution—-as an excellent model diagnostic procedure.
*Final published version: Political Analysis (2015) 23:159–179 doi:10.1093/pan/mpu015 | Do I need to test for autocorrelation or normality assumption if I am running the regression with st
As @utobi correctly notes in another answer, with such a large data set almost any test of a violation of model assumptions will tend to produce "statistically significant" results that might be pract |
52,753 | Degrees of freedom of a coin toss | Degrees of freedom apply to a parametrisation of a model, not to the observed outcome. We model a single coin toss by a Bernoulli($p$) distribution, which only has a single parameter, namely the $p$. "Heads" is an outcome, not a parameter, so it neither has degrees of freedom nor is to be counted in order to know the degrees of freedom. If we want to estimate the parameter $p$ from data, say $n$ coin tosses, we usually assume the tosses to be i.i.d. (independently identically distributed) according to a Bernoulli($p$)-distribution, so there still is $p$ as the only parameter, as long as $n$ is known and treated as fixed (which usually is the case). The standard estimator is the relative frequency of heads (assuming "1" corresponds to "heads"), so we need to know this in order to estimate $p$, but it's an outcome, not a parameter to be estimated, so again it doesn't count on top of the $p$ for determining the number of parameters. You need to understand that there is an essential difference between the parameter $p$ to be estimated and the observed relative frequency used to estimate $p$. | Degrees of freedom of a coin toss | Degrees of freedom apply to a parametrisation of a model, not to the observed outcome. We model a single coin toss by a Bernoulli($p$) distribution, which only has a single parameter, namely the $p$. | Degrees of freedom of a coin toss
Degrees of freedom apply to a parametrisation of a model, not to the observed outcome. We model a single coin toss by a Bernoulli($p$) distribution, which only has a single parameter, namely the $p$. "Heads" is an outcome, not a parameter, so it neither has degrees of freedom nor is to be counted in order to know the degrees of freedom. If we want to estimate the parameter $p$ from data, say $n$ coin tosses, we usually assume the tosses to be i.i.d. (independently identically distributed) according to a Bernoulli($p$)-distribution, so there still is $p$ as the only parameter, as long as $n$ is known and treated as fixed (which usually is the case). The standard estimator is the relative frequency of heads (assuming "1" corresponds to "heads"), so we need to know this in order to estimate $p$, but it's an outcome, not a parameter to be estimated, so again it doesn't count on top of the $p$ for determining the number of parameters. You need to understand that there is an essential difference between the parameter $p$ to be estimated and the observed relative frequency used to estimate $p$. | Degrees of freedom of a coin toss
Degrees of freedom apply to a parametrisation of a model, not to the observed outcome. We model a single coin toss by a Bernoulli($p$) distribution, which only has a single parameter, namely the $p$. |
52,754 | What is "power cut"? | This is an interesting observation: the paragraph talks about experiments and how to analyze them if stopped early. So it's not a stretch to connect "power cut" with "statistical power", a concept relevant to planning scientific experiments. However, In All Likelihood takes an informal approach*, so the odds are (pun intended) that "power" refers to electricity and "power cut" means a power outage.
٭ From the description on Goodreads: "The book generally takes an informal approach, where most important results are established using heuristic arguments and motivated with realistic examples." | What is "power cut"? | This is an interesting observation: the paragraph talks about experiments and how to analyze them if stopped early. So it's not a stretch to connect "power cut" with "statistical power", a concept rel | What is "power cut"?
This is an interesting observation: the paragraph talks about experiments and how to analyze them if stopped early. So it's not a stretch to connect "power cut" with "statistical power", a concept relevant to planning scientific experiments. However, In All Likelihood takes an informal approach*, so the odds are (pun intended) that "power" refers to electricity and "power cut" means a power outage.
٭ From the description on Goodreads: "The book generally takes an informal approach, where most important results are established using heuristic arguments and motivated with realistic examples." | What is "power cut"?
This is an interesting observation: the paragraph talks about experiments and how to analyze them if stopped early. So it's not a stretch to connect "power cut" with "statistical power", a concept rel |
52,755 | What is a random variable in ADAM optimizer? | Converting my comment into an answer.
The sentence right below your screenshot in the paper is the answer.
The stochasticity might come from the evaluation at random subsamples (minibatches) of datapoints, or arise from inherent function noise. | What is a random variable in ADAM optimizer? | Converting my comment into an answer.
The sentence right below your screenshot in the paper is the answer.
The stochasticity might come from the evaluation at random subsamples (minibatches) of datap | What is a random variable in ADAM optimizer?
Converting my comment into an answer.
The sentence right below your screenshot in the paper is the answer.
The stochasticity might come from the evaluation at random subsamples (minibatches) of datapoints, or arise from inherent function noise. | What is a random variable in ADAM optimizer?
Converting my comment into an answer.
The sentence right below your screenshot in the paper is the answer.
The stochasticity might come from the evaluation at random subsamples (minibatches) of datap |
52,756 | Linear regression's (OLS) coefficient interpretation with heteroscedasticity | Heteroscedasticity makes it so that the OLS estimator is not the best linear unbiased estimator of the regression slopes and makes it so that the usual standard errors (and the quantities based on them, such as p-values and confidence intervals) are incorrect. It doesn't affect the interpretation of the regression coefficients, which depends not on the estimation procedure or assumptions but on the structure of the model. That is, if you specify
$$E[Y|X] = \beta_0 + \beta_1 x_1 + ... + \beta_k x_k$$
the usual regression model, it doesn't matter how you estimate the coefficients or whether the assumptions are valid. The model itself tells you that for two observations that differ in $x_1$ by one unit but are the same on the other predictors, the expected difference in their outcomes is $\beta_1$. You might use a terrible method for estimating the coefficients, but that doesn't change their interpretation.
If you have heteroscedasticity, just use a robust standard error and carry on. | Linear regression's (OLS) coefficient interpretation with heteroscedasticity | Heteroscedasticity makes it so that the OLS estimator is not the best linear unbiased estimator of the regression slopes and makes it so that the usual standard errors (and the quantities based on the | Linear regression's (OLS) coefficient interpretation with heteroscedasticity
Heteroscedasticity makes it so that the OLS estimator is not the best linear unbiased estimator of the regression slopes and makes it so that the usual standard errors (and the quantities based on them, such as p-values and confidence intervals) are incorrect. It doesn't affect the interpretation of the regression coefficients, which depends not on the estimation procedure or assumptions but on the structure of the model. That is, if you specify
$$E[Y|X] = \beta_0 + \beta_1 x_1 + ... + \beta_k x_k$$
the usual regression model, it doesn't matter how you estimate the coefficients or whether the assumptions are valid. The model itself tells you that for two observations that differ in $x_1$ by one unit but are the same on the other predictors, the expected difference in their outcomes is $\beta_1$. You might use a terrible method for estimating the coefficients, but that doesn't change their interpretation.
If you have heteroscedasticity, just use a robust standard error and carry on. | Linear regression's (OLS) coefficient interpretation with heteroscedasticity
Heteroscedasticity makes it so that the OLS estimator is not the best linear unbiased estimator of the regression slopes and makes it so that the usual standard errors (and the quantities based on the |
52,757 | Is it normal to have thousands of df in a logistic regression model? | The discrepancy between DF for different estimates likely means that these are the results of a mixed model. There were probably a bit more than 25.69 participants in the study (leading to 25.69 DF for fluency), but these people probably had over 100 measurements each, or over 2125 total (leading to 2125 DF on other coefficients). Because there were only around 30ish measurements of whether individuals were fluent or not (one for each person in the study), and there were over 2125 measurements of features of speech (one for each speech sample collected), more DF can be used for the features of speech. | Is it normal to have thousands of df in a logistic regression model? | The discrepancy between DF for different estimates likely means that these are the results of a mixed model. There were probably a bit more than 25.69 participants in the study (leading to 25.69 DF fo | Is it normal to have thousands of df in a logistic regression model?
The discrepancy between DF for different estimates likely means that these are the results of a mixed model. There were probably a bit more than 25.69 participants in the study (leading to 25.69 DF for fluency), but these people probably had over 100 measurements each, or over 2125 total (leading to 2125 DF on other coefficients). Because there were only around 30ish measurements of whether individuals were fluent or not (one for each person in the study), and there were over 2125 measurements of features of speech (one for each speech sample collected), more DF can be used for the features of speech. | Is it normal to have thousands of df in a logistic regression model?
The discrepancy between DF for different estimates likely means that these are the results of a mixed model. There were probably a bit more than 25.69 participants in the study (leading to 25.69 DF fo |
52,758 | Show that, for any real numbers a and b such that m ≤ a ≤ b or m ≥ a ≥ b, E|Y − a| ≤ E|Y − b| ,where Y be a random variable with finite expectation | Intuition
As explained at Expectation of a function of a random variable from CDF, an integration by parts shows that when a random variable $X$ has a (cumulative) distribution function $F,$ the expectation of $|X-a|$ is the sum of the shaded areas shown:
The left hand region is the area under $F$ to the left of $a$ while the right hand region is the area above $F$ to the right of $a:$ that is, it's the area under $1-F$ to the right of $a.$ This is an extremely useful picture to have in mind when thinking about expectations. It can make seemingly complicated relationships intuitively obvious.
The height of $F$ is at least $1/2$ at the median $m,$ as shown by the dotted lines.
When $a \ge m$ is increased to $b,$ these areas change: the left hand area grows while the right hand area shrinks. The resulting region is shown here in blue.
The old area was $I+II+IV$ while the new one is $II + III + IV.$ Their difference therefore is $III - I.$ (This is the integral of $F - (1 - F) = 1 - 2F$ between $a$ and $b.$) Because $III$ includes the entire rectangle between $a$ and $b$ below the height $1/2$ and $I$ lies within the rectangle between $a$ and $b$ above the height $1/2,$ the difference $III - I$ cannot be negative (and actually is positive in the figure). Thus, as $a\ge m$ increases, the expectation of $|X-a|$ cannot shrink -- it can only grow.
A similar geometric argument can be used to analyze any quantile, not just the median, by means of a suitable weighting of the upper and lower areas. See https://stats.stackexchange.com/a/252043/919 for some analysis of this.
Formal Solution
Fix an arbitrary (but finite) $b \gt m,$ suppose $a$ is any number for which $m \le a \le b,$ and consider the difference
$$h(a) = E\left[|Y-b| - |Y-a|\right] = E\left[|Y-b|\right] - E\left[|Y-a|\right].$$
We aim to show $h(a) \ge 0.$ We will do this by computing the expectation and finding a simple lower bound for it. The only technique we need is a straightforward integration by parts.
Writing $F$ for the distribution of $Y$ and integrating in the sense of Lebesgue-Stieltjes, compute this expectation by breaking the integral into three regions bounded by $a$ and $b$ so that the absolute values can be expressed more simply within each region (look closely at how the signs in the integrands vary in the third line):
$$\begin{aligned}
h(a) &= \int \left(|x-b| - |x-a|\right)\,\mathrm{d}F(x) \\
&= \left(\int_{-\infty}^a + \int_a^b + \int_b^\infty\right) \left(|x-b| - |x-a|\right)\,\mathrm{d}F(x) \\
&= \int_{-\infty}^a(b-x)-(a-x)\,\mathrm{d}F(x) + \int_a^b(b-x)-(x-a) \,\mathrm{d}F(x) \\&\quad+ \int_b^\infty(x-b)-(x-a)\,\mathrm{d}F(x) \\
&= (b-a)\int_{-\infty}^a\,\mathrm{d}F(x) + \int_a^b(a+b-2x) \,\mathrm{d}F(x) + (a-b)\int_b^\infty\,\mathrm{d}F(x). \\
\end{aligned}$$
Evaluate the middle integral by parts:
$$\begin{aligned}
\int_a^b(a+b-2x) \,\mathrm{d}F(x) &= (a+b-2x)F(x)\bigg|^b_a + 2\int_a^b F(x)\,\mathrm{d}x\\
&= (a-b)F(b) - (b-a)F(a)+ 2\int_a^b F(x)\,\mathrm{d}x \\
&= (a-b)(F(b) + F(a))+ 2\int_a^b F(x)\,\mathrm{d}x.
\end{aligned}$$
Plug this into the previous expression for $h(a)$ and note that because $a \ge m,$ $F(x) \ge 1/2$ throughout this integral:
$$\begin{aligned}
h(a) & = (b-a)F(a) + \left[(a-b)(F(b)+F(a)) + 2\int_a^b F(x)\,\mathrm{d}x\right] + (a-b)(1 - F(b)) \\
&= a-b + 2\int_a^b F(x)\,\mathrm{d}x\\
&= 2\int_a^b \left(F(x) - \frac{1}{2}\right)\,\mathrm{d}x \\
&\ge 2\int_a^b \left(\frac{1}{2}-\frac{1}{2}\right)\,\mathrm{d}x \\
&=0,
\end{aligned}$$
QED.
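As a quick numerical sanity check of the result (purely illustrative, not part of the proof):
set.seed(1)
y <- rexp(1e5)                       # any distribution with finite expectation will do
m <- median(y)
a <- seq(m, m + 3, length.out = 25)  # values at or beyond the median
ma <- sapply(a, function(aa) mean(abs(y - aa)))   # empirical E|Y - a|
all(diff(ma) >= 0)                   # TRUE: the mean absolute deviation is non-decreasing beyond the median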
The demonstration for the other case $b \le a \le m$ follows by applying this result to the random variable $-Y$ and the values $-m\le-a\le-b$ (because $-m$ is a median of $-Y.$) | Show that, for any real numbers a and b such that m ≤ a ≤ b or m ≥ a ≥ b, E|Y − a| ≤ E|Y − b| ,where | Intuition
As explained at Expectation of a function of a random variable from CDF, an integration by parts shows that when a random variable $X$ has a (cumulative) distribution function $F,$ the expec | Show that, for any real numbers a and b such that m ≤ a ≤ b or m ≥ a ≥ b, E|Y − a| ≤ E|Y − b| ,where Y be a random variable with finite expectation
Intuition
As explained at Expectation of a function of a random variable from CDF, an integration by parts shows that when a random variable $X$ has a (cumulative) distribution function $F,$ the expectation of $|X-a|$ is the sum of the shaded areas shown:
The left hand region is the area under $F$ to the left of $a$ while the right hand region is the area above $F$ to the right of $a:$ that is, it's the area under $1-F$ to the right of $a.$ This is an extremely useful picture to have in mind when thinking about expectations. It can make seemingly complicated relationships intuitively obvious.
The height of $F$ is at least $1/2$ at the median $m,$ as shown by the dotted lines.
When $a \ge m$ is increased to $b,$ these areas change: the left hand area grows while the right hand area shrinks. The resulting region is shown here in blue.
The old area was $I+II+IV$ while the new one is $II + III + IV.$ Their difference therefore is $III - I.$ (This is the integral of $F - (1 - F) = 1 - 2F$ between $a$ and $b.$) Because $III$ includes the entire rectangle between $a$ and $b$ below the height $1/2$ and $I$ lies within the rectangle between $a$ and $b$ above the height $1/2,$ the difference $III - I$ cannot be negative (and actually is positive in the figure). Thus, as $a\ge m$ increases, the expectation of $|X-a|$ cannot shrink -- it can only grow.
A similar geometric argument can be used to analyze any quantile, not just the median, by means of a suitable weighting of the upper and lower areas. See https://stats.stackexchange.com/a/252043/919 for some analysis of this.
Formal Solution
Fix an arbitrary (but finite) $b \gt m,$ suppose $a$ is any number for which $m \le a \le b,$ and consider the difference
$$h(a) = E\left[|Y-b| - |Y-a|\right] = E\left[|Y-b|\right] - E\left[|Y-a|\right].$$
We aim to show $h(a) \ge 0.$ We will do this by computing the expectation and finding a simple lower bound for it. The only technique we need is a straightforward integration by parts.
Writing $F$ for the distribution of $Y$ and integrating in the sense of Lebesgue-Stieltjes, compute this expectation by breaking the integral into three regions bounded by $a$ and $b$ so that the absolute values can be expressed more simply within each region (look closely at how the signs in the integrands vary in the third line):
$$\begin{aligned}
h(a) &= \int \left(|x-b| - |x-a|\right)\,\mathrm{d}F(x) \\
&= \left(\int_{-\infty}^a + \int_a^b + \int_b^\infty\right) \left(|x-b| - |x-a|\right)\,\mathrm{d}F(x) \\
&= \int_{-\infty}^a(b-x)-(a-x)\,\mathrm{d}F(x) + \int_a^b(b-x)-(x-a) \,\mathrm{d}F(x) \\&\quad+ \int_b^\infty(x-b)-(x-a)\,\mathrm{d}F(x) \\
&= (b-a)\int_{-\infty}^a\,\mathrm{d}F(x) + \int_a^b(a+b-2x) \,\mathrm{d}F(x) + (a-b)\int_b^\infty\,\mathrm{d}F(x). \\
\end{aligned}$$
Evaluate the middle integral by parts:
$$\begin{aligned}
\int_a^b(a+b-2x) \,\mathrm{d}F(x) &= (a+b-2x)F(x)\bigg|^b_a + 2\int_a^b F(x)\,\mathrm{d}x\\
&= (a-b)F(b) - (b-a)F(a)+ 2\int_a^b F(x)\,\mathrm{d}x \\
&= (a-b)(F(b) + F(a))+ 2\int_a^b F(x)\,\mathrm{d}x.
\end{aligned}$$
Plug this into the previous expression for $h(a)$ and note that because $a \ge m,$ $F(x) \ge 1/2$ throughout this integral:
$$\begin{aligned}
h(a) & = (b-a)F(a) + \left[(a-b)(F(b)+F(a)) + 2\int_a^b F(x)\,\mathrm{d}x\right] + (a-b)(1 - F(b)) \\
&= a-b + 2\int_a^b F(x)\,\mathrm{d}x\\
&= 2\int_a^b \left(F(x) - \frac{1}{2}\right)\,\mathrm{d}x \\
&\ge 2\int_a^b \left(\frac{1}{2}-\frac{1}{2}\right)\,\mathrm{d}x \\
&=0,
\end{aligned}$$
QED.
The demonstration for the other case $b \le a \le m$ follows by applying this result to the random variable $-Y$ and the values $-m\le-a\le-b$ (because $-m$ is a median of $-Y.$) | Show that, for any real numbers a and b such that m ≤ a ≤ b or m ≥ a ≥ b, E|Y − a| ≤ E|Y − b| ,where
Intuition
As explained at Expectation of a function of a random variable from CDF, an integration by parts shows that when a random variable $X$ has a (cumulative) distribution function $F,$ the expec |
52,759 | Computing p-value vs. constructing confidence interval from sample for proportions | My confusion is, are these two methods really equivalent?
No the methods are indeed not equivalent.
Note that there are also many different ways to construct the confidence intervals (and different ways to express hypothesis tests). The use of the parameter estimate $\hat{p}$ in the expression $N(\hat{p}, \hat{p}(1-\hat{p})/n)$ is a simplification and does not give an exact interval. The justification is that $\hat{p}(1-\hat{p})$ and $p(1-p)$ do not differ much when the sample size is large enough.
See more on the Wikipedia page about different ways to construct confidence intervals for the binomial proportion.
The interval based on $N(\hat{p}, \hat{p}(1-\hat{p})/n)$ corresponds to the Wald interval. (the hypothesis test that corresponds to this interval is the Wald test)
The expression $N(0.5, (0.5*0.5)/n)$ is more related to the Wilson score interval.
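A quick sketch comparing the two intervals in R (x and n below are arbitrary illustrative values):
x <- 40; n <- 100
phat <- x / n
phat + c(-1, 1) * qnorm(0.975) * sqrt(phat * (1 - phat) / n)  # Wald interval
prop.test(x, n, correct = FALSE)$conf.int                     # Wilson score interval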
See also Confidence interval / p-value duality: don't they use different distributions? in which the examples in the answer by Demetri Pananos and in some of the comments relate to the binomial proportion. | Computing p-value vs. constructing confidence interval from sample for proportions | My confusion is, are these two methods really equivalent?
No the methods are indeed not equivalent.
Note that there are also many different ways to construct the confidence intervals (and different w | Computing p-value vs. constructing confidence interval from sample for proportions
My confusion is, are these two methods really equivalent?
No the methods are indeed not equivalent.
Note that there are also many different ways to construct the confidence intervals (and different ways to express hypothesis tests). The use of the parameter estimate $\hat{p}$ in the expression $N(\hat{p}, \hat{p}(1-\hat{p})/n)$ is a simplification and does not give an exact interval. The justification is that $\hat{p}(1-\hat{p})$ and $p(1-p)$ do not differ much when the sample size is large enough.
See more on the Wikipedia page about different ways to construct confidence intervals for the binomial proportion.
The interval based on $N(\hat{p}, \hat{p}(1-\hat{p})/n)$ corresponds to the Wald interval. (the hypothesis test that corresponds to this interval is the Wald test)
The expression $N(0.5, (0.5*0.5)/n)$ is more related to the Wilson score interval.
See also Confidence interval / p-value duality: don't they use different distributions? in which the examples in the answer by Demetri Pananos and in some of the comments relate to the binomial proportion. | Computing p-value vs. constructing confidence interval from sample for proportions
My confusion is, are these two methods really equivalent?
No the methods are indeed not equivalent.
Note that there are also many different ways to construct the confidence intervals (and different w |
52,760 | Computing p-value vs. constructing confidence interval from sample for proportions | A confidence interval based on normal approximation for the Bernoulli where $\hat p(1−\hat p)$ (by the way $\hat p=\bar X_n$) is plugged in for the variance estimator involves two approximations (one by the Central Limit Theorem, the other by variance estimation) and is therefore not equivalent to a test that does not involve variance estimation.
There are confidence intervals for the Bernoulli that don't estimate the variance either though, see https://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval.
I believe that the Wilson score interval explained there will, by checking whether 0.5 is in it, give you a test equivalent to the one you discuss, i.e., with normal approximation but without estimating the variance (assuming that the test is two-sided). | Computing p-value vs. constructing confidence interval from sample for proportions | A confidence interval based on normal approximation for the Bernoulli where $\hat p(1−\hat p)$ (by the way $\hat p=\bar X_n$) is plugged in for the variance estimator involves two approximations (one | Computing p-value vs. constructing confidence interval from sample for proportions
A confidence interval based on normal approximation for the Bernoulli where $\hat p(1−\hat p)$ (by the way $\hat p=\bar X_n$) is plugged in for the variance estimator involves two approximations (one by the Central Limit Theorem, the other by variance estimation) and is therefore not equivalent to a test that does not involve variance estimation.
There are confidence intervals for the Bernoulli that don't estimate the variance either though, see https://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval.
I believe that the Wilson score interval explained there will, by checking whether 0.5 is in it, give you a test equivalent to the one you discuss, i.e., with normal approximation but without estimating the variance (assuming that the test is two-sided). | Computing p-value vs. constructing confidence interval from sample for proportions
A confidence interval based on normal approximation for the Bernoulli where $\hat p(1−\hat p)$ (by the way $\hat p=\bar X_n$) is plugged in for the variance estimator involves two approximations (one |
52,761 | Computing p-value vs. constructing confidence interval from sample for proportions | Suppose you have $n = 100$ independent observations $X_i$
from a Bernoulli distribution with Success probability $p.$
Then $$T_{100} = \sum_{i=1}^{100} X_i \sim\mathsf{Binom}(n=100,p).$$
Suppose you want to test $H_0: p = 0.5$ against $H_a: p \ne 0.5$
In particular, you might observe $T = 38$ Successes in $n = 100$ trials.
Then using binom.test in R, you get the following results:
binom.test(38, 100, .5)
Exact binomial test
data: 38 and 100
number of successes = 38, number of trials = 100,
p-value = 0.02098
alternative hypothesis:
true probability of success is not equal to 0.5
95 percent confidence interval:
0.2847675 0.4825393
sample estimates:
probability of success
0.38
The P-value of this test is $0.02098 < 0.05 = 5\%.$
so you reject $H_0$ in favor of $H_a$ at the 5% level of significance.
The P-value for this 2-sided test can be computed as
$$P(T \le 38) + P(T \ge 62) = 2P(T \le 38) = 0.02097874,$$ where $T \sim \mathsf{Binom}(100, 0.5).$ Computation in R below.
2 * pbinom(38, 100, 0.5)
[1] 0.02097874
The idea is to find the probability of a value as far or farther from
the mean $np = 100(.5) = 50$ as is $38,$ in either direction.
If you want to use critical values, then you would reject if
the observed total $T \le 39$ or $T \ge 61.$ Then the size of
the test is $\alpha = 0.0352.$ If you tried to use critical
values $40$ and $60$ (instead of 39 and 61), then the size of
the test would be $0.057,$ which exceeds 5%. Because of the
discreteness of the binomial distribution, it is not possible
to test at exactly the 5% level.
2*pbinom(39, 100, .5)
[1] 0.0352002
2*pbinom(40, 100, .5)
[1] 0.05688793
Also, the confidence interval $(0.285, 0.483)$ for $p$ shown in the R output has
very nearly the intended 95% coverage probability for all possible
values of $p.$
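As a rough check of that coverage claim, one can simulate (a sketch; the value p = 0.3 and the 10^4 replications are arbitrary choices):
set.seed(1)
p = 0.3; n = 100
cover = replicate(10^4, {
  ci = binom.test(rbinom(1, n, p), n)$conf.int
  ci[1] <= p & p <= ci[2] })
mean(cover)   # typically a bit above 0.95 for this exact interval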
In the figure below, the P-value of the test is the sum
of the heights of the vertical black bars outside the vertical blue lines.
R code for figure:
t = 0:100; PDF = dbinom(t, 100, .5)
hdr = "Null Dist'n BINOM(100, 0.5)"
plot(t, PDF, type="h", lwd = 3, main=hdr)
abline(h=0, col="green2")
abline(v=0, col="green2")
abline(v = c(38.5, 61.5), col="blue")
Notes: (1) Because $n$ is sufficiently large that $T$ is approximately
normal, you could use an approximate normal test. With
such a test you can pretend to test at the 5% level, but
the standardized binomial statistic $Z = \frac{T - np_0}{\sqrt{np_0(1-p_0)}}$ does not take all of the values implied by $|Z| \ge 1.96.$
For $T = 38,$ the normal test statistic
is $Z = -2.4$ and the P-value is about $0.016 < 0.05 = 5\%.$
t = 38; n = 100; p=0.5
z = (t - n*p)/sqrt(n*p*(1-p)); z
[1] -2.4
2 * pnorm(z)
[1] 0.01639507
(2) Various statistical programs do the exact test or the
normal approximation, or both. Also, some give one or more
style of confidence interval. Here is output from a recent
release of Minitab software. The first output shown is essentially
the same as for the exact test in R, but displayed differently:
Test and CI for One Proportion
Test of p = 0.5 vs p ≠ 0.5
Exact
Sample X N Sample p 95% CI P-Value
1 38 100 0.380000 (0.284767, 0.482539) 0.021
Minitab's version of the approximate normal test is shown below; accordingly, it gives a
different P-value $(0.016)$ than for the exact test, but still happens to reject
at the 5% level. Of course, when you have two different tests
it might happen that you reject with one and not with the other. [In particular, if you have $T = 38$ and $n=100,$ then Minitab's exact test will not reject at the 2% level and its approximate normal test will reject at that level.]
Test and CI for One Proportion
Test of p = 0.5 vs p ≠ 0.5
Sample X N Sample p 95% CI Z-Value P-Value
1 38 100 0.380000 (0.284866, 0.475134) -2.40 0.016
(3) The two Minitab printouts give slightly different 95% CIs. @SextusEmpiricius provides a link to a Wikipedia
page that shows several styles of CIs in common usage.
It seems that the second Minitab printout gives the Wald
95% CI: With $\hat p = 38/100,$ it is
$\hat p \pm 1.96\sqrt{\frac{\hat p(1-\hat p)}{n}},$ which
computes to $(0.285,\, 0.475).$
p.hat = t/n
CI = p.hat + qnorm(c(.025,.975))*sqrt(p.hat*(1-p.hat)/n)
CI
[1] 0.284866 0.475134
The Wald interval is an asymptotic interval, which should
be used only for large $n.$ It does not 'invert the test'
and so should not be used as a substitute for the approximate
normal test unless $n$ is very large (my personal rule is
$n \ge 500).$
Various styles of CIs can give remarkably different results
for some sample sizes and totals $T.$ Consequently, if you are
going to use CIs to do tests, you might see frequent
disagreement whether to accept or reject. In particular,
some 'exact' 95% CIs are quite long in order to be sure to give
at least 95% coverage for all possible values of $p.$ These
intervals are less likely to reject $H_0$ at significance level 5%. | Computing p-value vs. constructing confidence interval from sample for proportions | Suppose you have $n = 100$ independent observations $X_i$
from a Bernoulli distribution with Success probability $p.$
Then $$T_{100} = \sum_{i=1}^{100} X_i \sim\mathsf{Binom}(n=100,p).$$
Suppose you w | Computing p-value vs. constructing confidence interval from sample for proportions
Suppose you have $n = 100$ independent observations $X_i$
from a Bernoulli distribution with Success probability $p.$
Then $$T_{100} = \sum_{i=1}^{100} X_i \sim\mathsf{Binom}(n=100,p).$$
Suppose you want to test $H_0: p = 0.5$ against $H_a: p \ne 0.5$
In particular, you might observe $T = 38$ Successes in $n = 100$ trials.
Then using binom.test in R, you get the following results:
binom.test(38, 100, .5)
Exact binomial test
data: 38 and 100
number of successes = 38, number of trials = 100,
p-value = 0.02098
alternative hypothesis:
true probability of success is not equal to 0.5
95 percent confidence interval:
0.2847675 0.4825393
sample estimates:
probability of success
0.38
The P-value of this test is $0.02098 < 0.05 = 5\%.$
so you reject $H_0$ in favor of $H_a$ at the 5% level of significance.
The P-value for this 2-sided test can be computed as
$$P(T \le 38) + P(T \ge 62) = 2P(T \le 38) = 0.02097874,$$ where $T \sim \mathsf{Binom}(100, 0.5).$ Computation in R below.
2 * pbinom(38, 100, 0.5)
[1] 0.02097874
The idea is to find the probability of a value as far or farther from
the mean $np = 100(.5) = 50$ as is $38,$ in either direction.
If you want to use critical values, then you would reject if
the observed total $T \le 39$ or $T \ge 61.$ Then the size of
the test is $\alpha = 0.0352.$ If you tried to use critical
values $40$ and $60$ (instead of 39 and 61), then the size of
the test would be $0.057,$ which exceeds 5%. Because of the
discreteness of the binomial distribution, it is not possible
to test at exactly the 5% level.
2*pbinom(39, 100, .5)
[1] 0.0352002
2*pbinom(40, 100, .5)
[1] 0.05688793
Also, the confidence interval $(0.285, 0.483)$ for $p$ shown in the R output has
very nearly the intended 95% coverage probability for all possible
values of $p.$
In the figure below, the P-value of the test is the sum
of the heights of the vertical black bars outside the vertical blue lines.
R code for figure:
t = 0:100; PDF = dbinom(t, 100, .5)
hdr = "Null Dist'n BINOM(100, 0.5)"
plot(t, PDF, type="h", lwd = 3, main=hdr)
abline(h=0, col="green2")
abline(v=0, col="green2")
abline(v = c(38.5, 61.5), col="blue")
Notes: (1) Because $n$ is sufficiently large that $T$ is approximately
normal, you could use an approximate normal test. With
such a test you can pretend to test at the 5% level, but
the standardized binomial statistic $Z = \frac{T - np_0}{\sqrt{np_0(1-p_0)}}$ does not take all of the values implied by $|Z| \ge 1.96.$
For $T = 38,$ the normal test statistic
is $Z = -2.4$ and the P-value is about $0.016 < 0.05 = 5\%.$
t = 38; n = 100; p=0.5
z = (t - n*p)/sqrt(n*p*(1-p)); z
[1] -2.4
2 * pnorm(z)
[1] 0.01639507
(2) Various statistical programs do the exact test or the
normal approximation, or both. Also, some give one or more
style of confidence interval. Here is output from a recent
release of Minitab software. The first output shown is essentially
the same as for the exact test in R, but displayed differently:
Test and CI for One Proportion
Test of p = 0.5 vs p ≠ 0.5
Exact
Sample X N Sample p 95% CI P-Value
1 38 100 0.380000 (0.284767, 0.482539) 0.021
Minitab's version of the approximate normal test is shown below; accordingly, it gives a
different P-value $(0.016)$ than for the exact test, but still happens to reject
at the 5% level. Of course, when you have two different tests
it might happen that you reject with one and not with the other. [In particular, if you have $T = 38$ and $n=100,$ then Minitab's exact test will not reject at the 2% level and its approximate normal test will reject at that level.]
Test and CI for One Proportion
Test of p = 0.5 vs p ≠ 0.5
Sample X N Sample p 95% CI Z-Value P-Value
1 38 100 0.380000 (0.284866, 0.475134) -2.40 0.016
(3) The two Minitab printouts give slightly different 95% CIs. @SextusEmpiricius provides a link to a Wikipedia
page that shows several styles of CIs in common usage.
It seems that the second Minitab printout gives the Wald
95% CI: With $\hat p = 38/100,$ it is
$\hat p \pm 1.96\sqrt{\frac{\hat p(1-\hat p)}{n}},$ which
computes to $(0.285,\, 0.475).$
p.hat = t/n
CI = p.hat + qnorm(c(.025,.975))*sqrt(p.hat*(1-p.hat)/n)
CI
[1] 0.284866 0.475134
The Wald interval is an asymptotic interval, which should
be used only for large $n.$ It does not 'invert the test'
and so should not be used as a substitute for the approximate
normal test unless $n$ is very large (my personal rule is
$n \ge 500).$
Various styles of CIs can give remarkably different results
for some sample sizes and totals $T.$ Consequently, if you are
going to use CIs to do tests, you might see frequent
disagreement whether to accept or reject. In particular,
some 'exact' 95% CIs are quite long in order to be sure to give
at least 95% coverage for all possible values of $p.$ These
intervals are less likely to reject $H_0$ at significance level 5%. | Computing p-value vs. constructing confidence interval from sample for proportions
Suppose you have $n = 100$ independent observations $X_i$
from a Bernoulli distribution with Success probability $p.$
Then $$T_{100} = \sum_{i=1}^{100} X_i \sim\mathsf{Binom}(n=100,p).$$
Suppose you w |
52,762 | help with formula to calculate Bayesian ranking of M-star reviews | This is sloppy writing, and the author should be embarrassed :)
$N$ is the total number of ratings, and $S$ is the sum of "scores". Scores can be 1 or 0 (as in a binary voting system), or fractional (as in a star-rating system). I made a poor choice of variable names, and should have said:
An M-star rating system can be seen as a more continuous version of the preceding, and we can set $m$ stars rewarded as equivalent to a score of $\frac{m}{M}$
In your example: observing a rating 4 and a rating 5 (assuming that $M=5$ stars is the best possible score). Then $N=2$, and $S = \frac{4}{5} + \frac{5}{5}$. | help with formula to calculate Bayesian ranking of M-star reviews | This is sloppy writing, and the author should be embarrassed :)
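A tiny R illustration of that bookkeeping (the two ratings are the ones from your example):
ratings <- c(4, 5)      # observed star ratings
M <- 5                  # maximum possible stars
N <- length(ratings)    # N = 2
S <- sum(ratings / M)   # S = 4/5 + 5/5 = 1.8
c(N = N, S = S)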
$N$ is the total number of ratings, and $S$ is the sum of "scores". Scores can be 1 or 0 (as in a binary voting system), or fractional ( | help with formula to calculate Bayesian ranking of M-star reviews
This is sloppy writing, and the author should be embarrassed :)
$N$ is the total number of ratings, and $S$ is the sum of "scores". Scores can be 1 or 0 (as in a binary voting system), or fractional (as in a star-rating system). I made a poor choice of variable names, and should have said:
An M-star rating system can be seen as a more continuous version of the preceding, and we can set $m$ stars rewarded as equivalent to a score of $\frac{m}{M}$
In your example: observing a rating 4 and a rating 5 (assuming that $M=5$ stars is the best possible score). Then $N=2$, and $S = \frac{4}{5} + \frac{5}{5}$. | help with formula to calculate Bayesian ranking of M-star reviews
This is sloppy writing, and the author should be embarrassed :)
$N$ is the total number of ratings, and $S$ is the sum of "scores". Scores can be 1 or 0 (as in a binary voting system), or fractional ( |
52,763 | Ridge regression subtlety on intercept | I will give you an unrigorous but intuitive reason as to why the intercept is not penalized. When we estimate a penalized model, we usually scale and centre the predictors. This means that the intercept is estimated to be the mean of the outcome variable.
Note that the mean of the outcome variable is the simplest prediction we could make (aside from predicting a random number unrelated to the outcome, in which case why use data at all, right?). Aside from the simplicity, the sample mean is also the minimizer of squared loss when we don't consider any other variables.
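A quick numerical check of this intuition (a sketch using the glmnet package; the simulated data and the penalty value are arbitrary):
library(glmnet)
set.seed(1)
n <- 100
X <- scale(matrix(rnorm(n * 5), n))          # centred and scaled predictors
y <- 2 + X[, 1] + rnorm(n)
fit <- glmnet(X, y, alpha = 0, lambda = 10)  # ridge fit with a heavy penalty
coef(fit)["(Intercept)", ]                   # matches ...
mean(y)                                      # ... the sample mean of the outcome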
Penalizing the intercept would mean that we would bias our model's predictions away from the sample mean in extreme cases when all model parameters are shrunk towards 0. This would result in poorer predictions than we could make otherwise, or put another way, that we could actually minimize squared error further. | Ridge regression subtlety on intercept | I will give you an unrigorous but intuitive reason as to why the intercept is not penalized. When we estimate a penalized model, we usually scale and centre the predictors. This means that the inter | Ridge regression subtlety on intercept
I will give you an unrigorous but intuitive reason as to why the intercept is not penalized. When we estimate a penalized model, we usually scale and centre the predictors. This means that the intercept is estimated to be the mean of the outcome variable.
Note that the mean of the outcome variable is the simplest prediction we could make (aside from predicting a random number unrelated to the outcome, in which case why use data at all, right?). Aside from the simplicity, the sample mean is also the minimizer of squared loss when we don't consider any other variables.
Penalizing the intercept would mean that we would bias our model's predictions away from the sample mean in extreme cases when all model parameters are shrunk towards 0. This would result in poorer predictions than we could make otherwise, or put another way, that we could actually minimize squared error further. | Ridge regression subtlety on intercept
I will give you an unrigorous but intuitive reason as to why the intercept is not penalized. When we estimate a penalized model, we usually scale and centre the predictors. This means that the inter |
52,764 | Log-linear and GLM (Poisson) regression | The term "log-linear" isn't uniquely defined. Even Wikipedia doesn't seem to come to internal agreement. Its entry on log-linear analysis has to do with modeling counts in contingency tables, while its log-linear model entry describes your approach to modeling. I try to avoid that terminology and just say what's being modeled. In your case, it's ordinary linear regression with a log-transformed outcome.
There's nothing wrong with log transformation of a continuous, strictly positive outcome. If the residuals from your resulting linear regression model are well behaved, it's probably the simplest way to go. A drawback is that you are modeling mean values on the log scale, which isn't how people typically think about means.
You are correct that a Poisson GLM isn't appropriate for continuous non-count data, but other types of GLM can use a log link and might work for your data. This page suggests other GLM approaches, like Gaussian (possibly inverse) or gamma with log links, which might work better and more readily give predictions on an untransformed mean scale. | Log-linear and GLM (Poisson) regression | The term "log-linear" isn't uniquely defined. Even Wikipedia doesn't seem to come to internal agreement. Its entry on log-linear analysis has to do with modeling counts in contingency tables, while it | Log-linear and GLM (Poisson) regression
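A minimal sketch of the two routes side by side (toy simulated data; the gamma GLM with a log link is just one of the options mentioned above):
set.seed(1)
x <- runif(200)
y <- exp(1 + 2 * x + rnorm(200, sd = 0.4))           # continuous, strictly positive outcome
fit.lm  <- lm(log(y) ~ x)                            # ordinary regression on log(y)
fit.glm <- glm(y ~ x, family = Gamma(link = "log"))  # GLM modelling the mean on the original scale
coef(fit.lm); coef(fit.glm)                          # both slopes act multiplicatively on y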
The term "log-linear" isn't uniquely defined. Even Wikipedia doesn't seem to come to internal agreement. Its entry on log-linear analysis has to do with modeling counts in contingency tables, while its log-linear model entry describes your approach to modeling. I try to avoid that terminology and just say what's being modeled. In your case, it's ordinary linear regression with a log-transformed outcome.
There's nothing wrong with log transformation of a continuous, strictly positive outcome. If the residuals from your resulting linear regression model are well behaved, it's probably the simplest way to go. A drawback is that you are modeling mean values on the log scale, which isn't how people typically think about means.
You are correct that a Poisson GLM isn't appropriate for continuous non-count data, but other types of GLM can use a log link and might work for your data. This page suggests other GLM approaches, like Gaussian (possibly inverse) or gamma with log links, which might work better and more readily give predictions on an untransformed mean scale. | Log-linear and GLM (Poisson) regression
The term "log-linear" isn't uniquely defined. Even Wikipedia doesn't seem to come to internal agreement. Its entry on log-linear analysis has to do with modeling counts in contingency tables, while it |
52,765 | Log-linear and GLM (Poisson) regression | Let $y$ be your outcome (accounting amount) and let $x_1, x_2, x_3$ be your three predictors (for one individual). Then your approach is modeling
$$\log y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_3 + \varepsilon$$
Taking the exponential of both sides gives
$$y = \exp \left( \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_3 + \varepsilon \right)$$
The mean of $y$ conditional on $x_1$, $x_2$ and $x_3$ is
$$E(y | x_1, x_2, x_3) = \exp \left( \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_3\right) E(\exp (\varepsilon ) )$$
Now imagine that $x_1$ is replaced with $x_1 + 1$. Then
$$E(y | x_1 + 1 , x_2, x_3) = \exp(\beta_1) E(y | x_1, x_2, x_3)$$
In other words, a unit change in the predictor changes the mean by a multiplicative factor.
The point is that the log transform, while better in terms of ensuring normality of residuals, changes the interpretation of the model coefficients. To directly answer your question, it is a perfectly acceptable approach but you should think about what you really want to model.
For the second question: there are several extensions to a Poisson GLM that account for overdispersed data (where the conditional variance is greater than the conditional mean). For example, you could use a negative binomial GLM (glm.nb) in R or a quasi-likelihood approach | Log-linear and GLM (Poisson) regression | Let $y$ be your outcome (accounting amount) and let $x_1, x_2, x_3$ be your three predictors (for one individual). Then your approach is modeling
$$\log y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta | Log-linear and GLM (Poisson) regression
Let $y$ be your outcome (accounting amount) and let $x_1, x_2, x_3$ be your three predictors (for one individual). Then your approach is modeling
$$\log y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_3 + \varepsilon$$
Taking the exponential of both sides gives
$$y = \exp \left( \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_3 + \varepsilon \right)$$
The mean of $y$ conditional on $x_1$, $x_2$ and $x_3$ is
$$E(y | x_1, x_2, x_3) = \exp \left( \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_3\right) E(\exp (\varepsilon ) )$$
Now imagine that $x_1$ is replaced with $x_1 + 1$. Then
$$E(y | x_1 + 1 , x_2, x_3) = \exp(\beta_1) E(y | x_1, x_2, x_3)$$
In other words, a unit change in the predictor changes the mean by a multiplicative factor.
The point is that the log transform, while better in terms of ensuring normality of residuals, changes the interpretation of the model coefficients. To directly answer your question, it is a perfectly acceptable approach but you should think about what you really want to model.
For the second question: there are several extensions to a Poisson GLM that account for overdispersed data (where the conditional variance is greater than the conditional mean). For example, you could use a negative binomial GLM (glm.nb) in R or a quasi-likelihood approach | Log-linear and GLM (Poisson) regression
Let $y$ be your outcome (accounting amount) and let $x_1, x_2, x_3$ be your three predictors (for one individual). Then your approach is modeling
$$\log y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta |
52,766 | When do I need something "fancier" than multiple regression? | For my money, if your goal is to understand the relationship between your predictors and the outcome, multiple regression is absolutely fine here, BUT you need to worry a bit about multiple comparisons.
You have lots of predictors. Even if none of your predictors are really related to the outcome, just by chance you would expect ~5% of them to come out as significantly associated with it in your data, using multiple regression or any maximum-likelihood method.
LASSO and related methods deal with this by finding a set of regression weights that fit your data reasonably well, while trying to keep the weights small, and having as many weights of $0$ as possible. This is great for prediction, but, for reasons I won't cover here, you can't really interpret the weights estimated using LASSO, particularly when your predictors are correlated.
An alternative way to deal with this is just to use multiple regression, identify all of the significant predictors ($p < .05$), and then, to control your false discovery rate, use something like the Benjamini-Hochberg procedure to throw out the predictors whose effects are too weak.
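A minimal R sketch of that second approach (the data frame dat, holding the outcome y and the predictor columns, is hypothetical):
fit  <- lm(y ~ ., data = dat)                      # full multiple regression
p    <- summary(fit)$coefficients[-1, "Pr(>|t|)"]  # p-values, intercept dropped
keep <- p.adjust(p, method = "BH") < 0.05          # Benjamini-Hochberg FDR control
names(p)[keep]                                     # predictors that survive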
PS: It's worth mentioning that depending on how many observations and how many features you have, there comes a point where multiple regression no longer works, even with correction for multiple comparisons. I think $\frac{1000}{30} \approx 30$ observations per predictor is probably still fine. | When do I need something "fancier" than multiple regression? | For my money, if your goal is to understand the relationship between your predictors and the outcome, multiple regression is absolutely fine here, BUT you need to worry a bit about multiple comparison | When do I need something "fancier" than multiple regression?
For my money, if your goal is to understand the relationship between your predictors and the outcome, multiple regression is absolutely fine here, BUT you need to worry a bit about multiple comparisons.
You have lots of predictors. Even if none of your predictors are really related to the outcome, just by chance you would expect ~5% of them to come out as significantly associated with it in your data, using multiple regression or any maximum-likelihood method.
LASSO and related methods deal with this by finding a set of regression weights that fit your data reasonably well, while trying to keep the weights small, and having as many weights of $0$ as possible. This is great for prediction, but, for reasons I won't cover here, you can't really interpret the weights estimated using LASSO, particularly when your predictors are correlated.
An alternative way to deal with this is just to use multiple regression, identify all of the significant predictors ($p < .05$), and then, to control your false discovery rate, use something like the Benjamini-Hochberg procedure to throw out the predictors whose effects are too weak.
PS: It's worth mentioning that depending on how many observations and how many features you have, there comes a point where multiple regression no longer works, even with correction for multiple comparisons. I think $\frac{1000}{30} \approx 30$ observations per predictor is probably still fine. | When do I need something "fancier" than multiple regression?
For my money, if your goal is to understand the relationship between your predictors and the outcome, multiple regression is absolutely fine here, BUT you need to worry a bit about multiple comparison |
52,767 | When do I need something "fancier" than multiple regression? | To @Eoin's point, the apparent variable importance is a very unstable quantity when the sample size is not in the millions. What exposes the difficulty of the task and provides actionable information is to use the bootstrap to get confidence intervals on importance ranks of all the predictors simultaneously. The more predictors you have the more difficult it is to select strong predictors from them, and the wider will be the rank confidence intervals. Likewise when you have collinearity. The ranks of importance can be computed on any measure including univariate correlations, partial $R^2$ in a multiple regression model, $\chi^2$ when using maximum likelihood, etc. An example with R code may be found in Section 5.4 of the RMS course notes. | When do I need something "fancier" than multiple regression? | To @Eoin's point, the apparent variable importance is a very unstable quantity when the sample size is not in the millions. What exposes the difficulty of the task and provides actionable information | When do I need something "fancier" than multiple regression?
To @Eoin's point, the apparent variable importance is a very unstable quantity when the sample size is not in the millions. What exposes the difficulty of the task and provides actionable information is to use the bootstrap to get confidence intervals on importance ranks of all the predictors simultaneously. The more predictors you have the more difficult it is to select strong predictors from them, and the wider will be the rank confidence intervals. Likewise when you have collinearity. The ranks of importance can be computed on any measure including univariate correlations, partial $R^2$ in a multiple regression model, $\chi^2$ when using maximum likelihood, etc. An example with R code may be found in Section 5.4 of the RMS course notes. | When do I need something "fancier" than multiple regression?
To @Eoin's point, the apparent variable importance is a very unstable quantity when the sample size is not in the millions. What exposes the difficulty of the task and provides actionable information |
52,768 | When do I need something "fancier" than multiple regression? | It's always a matter of trying to balance predictive ability and interpretation. You can try to use LASSO or other shrinkage methods if you would like to emphasize prediction a bit more than multiple regression.
This may improve predictive ability while preserving some level of interpretability. If you transformed your data, I believe there are interpretability issues with shrinkage methods. Of course, if you want to perform inference on the parameters (t-test/F-test), you will not be able to do this with shrinkage methods (as far as I know).
You can still do some variable selection analysis even if collinearity is NOT an issue. You may find, for example, that a smaller model may have a better/equivalent AIC or Adjusted $R^2$ than the full model. | When do I need something "fancier" than multiple regression? | It's always a balance trying to balance predictive ability and interpretation. You can try to use LASSO or other shrinkage methods if you would like to emphasize prediction a bit more than multiple re | When do I need something "fancier" than multiple regression?
It's always a matter of trying to balance predictive ability and interpretation. You can try to use LASSO or other shrinkage methods if you would like to emphasize prediction a bit more than multiple regression.
This may improve predictive ability while preserving some level of interpretability. If you transformed your data, I believe there are interpretability issues with shrinkage methods. Of course, if you want to perform inference on the parameters (t-test/F-test), you will not be able to do this with shrinkage methods (as far as I know).
You can still do some variable selection analysis even if collinearity is NOT an issue. You may find, for example, that a smaller model may have a better/equivalent AIC or Adjusted $R^2$ than the full model. | When do I need something "fancier" than multiple regression?
It's always a balance trying to balance predictive ability and interpretation. You can try to use LASSO or other shrinkage methods if you would like to emphasize prediction a bit more than multiple re |
52,769 | When do I need something "fancier" than multiple regression? | Something fancier than a univariable or multiple regression model is needed when there is a very clear, very complicated non-linear relationship between the endpoint and covariates that cannot be addressed using a routine link function or transformed covariate. If your paneled scatter plots show clouds of data points with suggestive trends then anything fancier could be considered overfitting.
Based on your labeling of the dependent and independent variables, your model will tell you if any DV's (high or low) are more common in words with certain properties. Your description had this reversed, suggesting reversal of dependent and independent variables.
Yes, with a large enough sample some of your predictors are bound to be statistically significant, and that is a good thing. In fact, it is a great thing. Don't be disappointed with small p-values. However, just because you have the power to detect an effect does not imply it is a meaningful effect to discuss and report.
As decaf and Eoin have suggested, you can use LASSO and FDR methods among others to weed out independent variables. My preference is to use the p-value as a continuous index for the weight of the evidence by running univariable models and sorting the predictors by ascending p-value, as well as by the magnitude of the estimated effect size. Those predictors with the smallest p-values do the best job explaining your endpoint for your particular data set and stand the best chance of re-demonstrating their effects in a repeated experiment. If this "short list" is still a bit too long you can choose to highlight only the top, say, $k$ independent variables on the list. If we think in terms of Neyman-Pearson error rates instead of evidential p-values, we can talk in terms of per-comparison error rates so that no adjustments to the p-values are necessary, acknowledging that family-wise error rates may very well be higher. The key is to interpret the results as tentative evidence, not irrefutable proof. Those independent variables with the largest effect sizes are the most interesting to discuss and report, even if the p-value does not reach a conventional cutoff like 0.05.
The independent variables that were promising in univariable models can be explored together in a multivariable model to see if the conditional effects persist in smaller subgroups. I suggest creating paneled and grouped plots to explore possible interaction effects. However, this can become very nebulous very quickly. For that reason I prefer to keep emphasis on the univariable analyses that describe the higher-level conditional effect for each independent variable. This may seem elementary, but it may be naive to slice and dice the data by a dozen or more independent variables simultaneously and discuss a miniscule effect in an obscure subgroup especially when the results are tentative evidence and not irrefutable proof. It may be worthwhile to define a subpopulation and fit a simpler model to the data available on this subpopulation, rather than trying to interpret a multivariable regression model. Such a stratified model may provide a better fit to the data and the reduced sample size will produce more conservative inference.
No matter what modeling technique you choose the only true validation is to repeat the experiment and see if the same model does well at explaining the dependent variable. If the experiment can be replicated many times then what was tentative evidence, taken together, becomes irrefutable evidence. | When do I need something "fancier" than multiple regression? | Something fancier than a univariable or multiple regression model is needed when there is a very clear, very complicated non-linear relationship between the endpoint and covariates that cannot be addr | When do I need something "fancier" than multiple regression?
Something fancier than a univariable or multiple regression model is needed when there is a very clear, very complicated non-linear relationship between the endpoint and covariates that cannot be addressed using a routine link function or transformed covariate. If your paneled scatter plots show clouds of data points with suggestive trends then anything fancier could be considered overfitting.
Based on your labeling of the dependent and independent variables, your model will tell you if any DV's (high or low) are more common in words with certain properties. Your description had this reversed, suggesting reversal of dependent and independent variables.
Yes, with a large enough sample some of your predictors are bound to be statistically significant, and that is a good thing. In fact, it is a great thing. Don't be disappointed with small p-values. However, just because you have the power to detect an effect does not imply it is a meaningful effect to discuss and report.
As decaf and Eoin have suggested, you can use LASSO and FDR methods among others to weed out independent variables. My preference is to use the p-value as a continuous index for the weight of the evidence by running univariable models and sorting the predictors by ascending p-value, as well as by the magnitude of the estimated effect size. Those predictors with the smallest p-values do the best job explaining your endpoint for your particular data set and stand the best chance of re-demonstrating their effects in a repeated experiment. If this "short list" is still a bit too long you can choose to highlight only the top, say, $k$ independent variables on the list. If we think in terms of Neyman-Pearson error rates instead of evidential p-values, we can talk in terms of per-comparison error rates so that no adjustments to the p-values are necessary, acknowledging that family-wise error rates may very well be higher. The key is to interpret the results as tentative evidence, not irrefutable proof. Those independent variables with the largest effect sizes are the most interesting to discuss and report, even if the p-value does not reach a conventional cutoff like 0.05.
The independent variables that were promising in univariable models can be explored together in a multivariable model to see if the conditional effects persist in smaller subgroups. I suggest creating paneled and grouped plots to explore possible interaction effects. However, this can become very nebulous very quickly. For that reason I prefer to keep emphasis on the univariable analyses that describe the higher-level conditional effect for each independent variable. This may seem elementary, but it may be naive to slice and dice the data by a dozen or more independent variables simultaneously and discuss a miniscule effect in an obscure subgroup especially when the results are tentative evidence and not irrefutable proof. It may be worthwhile to define a subpopulation and fit a simpler model to the data available on this subpopulation, rather than trying to interpret a multivariable regression model. Such a stratified model may provide a better fit to the data and the reduced sample size will produce more conservative inference.
No matter what modeling technique you choose the only true validation is to repeat the experiment and see if the same model does well at explaining the dependent variable. If the experiment can be replicated many times then what was tentative evidence, taken together, becomes irrefutable evidence. | When do I need something "fancier" than multiple regression?
Something fancier than a univariable or multiple regression model is needed when there is a very clear, very complicated non-linear relationship between the endpoint and covariates that cannot be addr |
52,770 | Schoenfeld residuals - Plain English explanation, please! | What's plotted starts with a variance-weighted transformation of the Schoenfeld residuals for a covariate, into what are called "scaled Schoenfeld residuals." Those are then added to the corresponding time-invariant coefficient estimate from the Cox model under the proportional hazards (PH) assumption and smoothed. The result is a plot of an estimate of the regression coefficient for the covariate over time. If the plot is reasonably flat, the PH assumption holds. Take this one step at a time.
Risk-weighted covariate averages and covariances
You start by determining, for each event time, the risk-weighted averages of covariate values and the corresponding risk-weighted covariance among covariate values over all individuals at risk at that time. That's essentially a part of the model-fitting process, anyway. The risks used for the weighting are simply the corresponding hazard ratios for the individuals at that time, the exponentiated linear-predictor values from the model.
Schoenfeld residuals
The Schoenfeld residuals are calculated for all covariates for each individual experiencing an event at a given time. Those are the differences between that individual's covariate values at the event time and the corresponding risk-weighted average of covariate values among all those then at risk. The word "residual" thus makes sense, as it's the difference between an observed covariate value and what you might have expected based on all those at risk at that time.
Scaled Schoenfeld residuals
The Schoenfeld residuals are then scaled inversely with respect to their (co)variances. The scaled values at an event time for an individual come from pre-multiplying the vector of original Schoenfeld residuals by the inverse of the corresponding risk-weighted covariate covariance matrix at that time. You can think of this as down-weighting Schoenfeld residuals whose values are uncertain because of high variance.
The "plot of scaled Schoenfeld residuals"
Although the plot you show is generally called a "plot of scaled Schoenfeld residuals," that's not quite right.
The importance of the scaled Schoenfeld residuals comes from their associations with the time-dependence of a Cox regression coefficient. If $s_{k,j}^*$ is a scaled Schoenfeld residual for covariate $j$ at time $t_k$ and the estimated time-fixed Cox regression coefficient under PH is $\hat \beta_j$, then the expected value of $s_{k,j}^*$ is approximately the deviation of the actual coefficient value at time $t_k$, $\beta_j(t_k)$, from the PH-based estimate:
$$E(s_{k,j}^*) + \hat \beta_j \approx \beta_j(t_k).$$
That was shown by Grambsch and Therneau in 1994. The y-axis values of the plot for covariate $j$ are the sums of the scaled Schoenfeld residuals with the corresponding PH estimate $\hat \beta_j$.
Simple answer to the question
The smoothed plot is thus an estimate of the time dependence of the coefficient for the covariate $j$, $\beta_j(t_k)$. In your case, the plot indicates that your biomarker is most strongly associated with outcome at early times, dropping off to almost no association beyond a time value of 50-60.
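In R's survival package this is exactly what plotting a cox.zph object shows; a minimal sketch (using the package's lung example data rather than your biomarker data):
library(survival)
fit <- coxph(Surv(time, status) ~ age + sex, data = lung)
zph <- cox.zph(fit, transform = "km")   # scaled Schoenfeld residual test of PH
zph                                     # per-covariate and global tests
plot(zph[1])                            # smoothed estimate of beta(t) for age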
The above is pretty much based on Therneau and Grambsch. Section 6.2 presents the plotting of scaled Schoenfeld residuals, with ordinary Schoenfeld residuals described in Section 4.6 and the formulas for risk-weighted covariate means and covariances in Section 3.1 (equations 3.5 and 3.7, respectively). | Schoenfeld residuals - Plain English explanation, please! | What's plotted starts with a variance-weighted transformation of the Schoenfeld residuals for a covariate, into what are called "scaled Schoenfeld residuals." Those are then added to the corresponding | Schoenfeld residuals - Plain English explanation, please!
What's plotted starts with a variance-weighted transformation of the Schoenfeld residuals for a covariate, into what are called "scaled Schoenfeld residuals." Those are then added to the corresponding time-invariant coefficient estimate from the Cox model under the proportional hazards (PH) assumption and smoothed. The result is a plot of an estimate of the regression coefficient for the covariate over time. If the plot is reasonably flat, the PH assumption holds. Take this one step at a time.
Risk-weighted covariate averages and covariances
You start by determining, for each event time, the risk-weighted averages of covariate values and the corresponding risk-weighted covariance among covariate values over all individuals at risk at that time. That's essentially a part of the model-fitting process, anyway. The risks used for the weighting are simply the corresponding hazard ratios for the individuals at that time, the exponentiated linear-predictor values from the model.
Schoenfeld residuals
The Schoenfeld residuals are calculated for all covariates for each individual experiencing an event at a given time. Those are the differences between that individual's covariate values at the event time and the corresponding risk-weighted average of covariate values among all those then at risk. The word "residual" thus makes sense, as it's the difference between an observed covariate value and what you might have expected based on all those at risk at that time.
Scaled Schoenfeld residuals
The Schoenfeld residuals are then scaled inversely with respect to their (co)variances. The scaled values at an event time for an individual come from pre-multiplying the vector of original Schoenfeld residuals by the inverse of the corresponding risk-weighted covariate covariance matrix at that time. You can think of this as down-weighting Schoenfeld residuals whose values are uncertain because of high variance.
The "plot of scaled Schoenfeld residuals"
Although the plot you show is generally called a "plot of scaled Schoenfeld residuals," that's not quite right.
The importance of the scaled Schoenfeld residuals comes from their associations with the time-dependence of a Cox regression coefficient. If $s_{k,j}^*$ is a scaled Schoenfeld residual for covariate $j$ at time $t_k$ and the estimated time-fixed Cox regression coefficient under PH is $\hat \beta_j$, then the expected value of $s_{k,j}^*$ is approximately the deviation of the actual coefficient value at time $t_k$, $\beta_j(t_k)$, from the PH-based estimate:
$$E(s_{k,j}^*) + \hat \beta_j \approx \beta_j(t_k).$$
That was shown by Grambsch and Therneau in 1994. The y-axis values of the plot for covariate $j$ are the sums of the scaled Schoenfeld residuals with the corresponding PH estimate $\hat \beta_j$.
Simple answer to the question
The smoothed plot is thus an estimate of the time dependence of the coefficient for the covariate $j$, $\beta_j(t_k)$. In your case, the plot indicates that your biomarker is most strongly associated with outcome at early times, dropping off to almost no association beyond a time value of 50-60.
The above is pretty much based on Therneau and Grambsch. Section 6.2 presents the plotting of scaled Schoenfeld residuals, with ordinary Schoenfeld residuals described in Section 4.6 and the formulas for risk-weighted covariate means and covariances in Section 3.1 (equations 3.5 and 3.7, respectively). | Schoenfeld residuals - Plain English explanation, please!
What's plotted starts with a variance-weighted transformation of the Schoenfeld residuals for a covariate, into what are called "scaled Schoenfeld residuals." Those are then added to the corresponding |
52,771 | What are relatively simple simulations that succeed with an irrational probability? | I think the details may be somewhat different
for each 'desired irrational constant', but
here is a strategy that may work for many such
constants.
Here is a simple algorithm that estimates $\pi/4,$ the area in the unit square beneath
the quarter unit-circle with center at the origin.
set.seed(2021)
x = runif(10^6); y = runif(10^6)
mean(y <= sqrt(1-x^2))
[1] 0.785459
pi/4
[1] 0.7853982
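For example (an added sketch in the same style), $\log 2$ is the area under $1/(1+x)$ on the unit interval:
set.seed(2021)
x = runif(10^6); y = runif(10^6)
mean(y <= 1/(1 + x))   # each point is a Bernoulli trial with success probability log(2)
log(2)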
Can you find a function in a suitable square or rectangle that
bounds an area equal to each of your desired irrational constants (or a simple function thereof)?
Figure with only 50,000 points in the unit square for clarity.
B = 50000; x = runif(B); y = runif(B)
plot(x, y, pch=".")
blue = (y <= sqrt(1-x^2))
points(x[blue], y[blue], col="blue", pch=".") | What are relatively simple simulations that succeed with an irrational probability? | I think the details may be somewhat different
for each 'desired irrational constant', but
here is a strategy that may work for many such
constants.
Here is a simple algorithm that estimates $\pi/4,$ t | What are relatively simple simulations that succeed with an irrational probability?
I think the details may be somewhat different
for each 'desired irrational constant', but
here is a strategy that may work for many such
constants.
Here is a simple algorithm that estimates $\pi/4,$ the area in the unit square beneath
the quarter unit-circle with center at the origin.
set.seed(2021)
x = runif(10^6); y = runif(10^6)
mean(y <= sqrt(1-x^2))
[1] 0.785459
pi/4
[1] 0.7853982
Can you find a function in a suitable square or rectangle that
bounds an area equal to each of your desired irrational constants (or a simple function thereof)?
Figure with only 50,000 points in the unit square for clarity.
B = 50000; x = runif(B); y = runif(B)
plot(x, y, pch=".")
blue = (y <= sqrt(1-x^2))
points(x[blue], y[blue], col="blue", pch=".") | What are relatively simple simulations that succeed with an irrational probability?
I think the details may be somewhat different
for each 'desired irrational constant', but
here is a strategy that may work for many such
constants.
Here is a simple algorithm that estimates $\pi/4,$ t |
52,772 | What are relatively simple simulations that succeed with an irrational probability? | There is a universal algorithm. It doesn't matter whether the probability is irrational or not.
It suffices to implement a procedure to output either $0$ or $1$ that will (a) almost surely terminate and (b) output $1$ with a probability $\phi,$ where $\phi$ is any number in the interval $[0,1]$ (rational or irrational). The following description relies on an arbitrarily long sequence of iid Bernoulli$(1/2)$ variables $X_1,X_2,X_3,\ldots.$
Procedure f(phi):
for i in 1, 2, 3, ...
if phi >= 1/2 then
if X[i]==0 return(1) else return(f(2*phi-1))
else
if X[i]==0 return(0) else return(f(2*phi))
It is manifestly simple, using only (a) comparison to $1/2,$ (b) multiplication by $2,$ and (c) subtraction of $1.$
This algorithm randomly walks the binary tree determined by binary expansions of real numbers in the interval $[0,1].$ It outputs $1$ as soon as it enters a branch all of whose ultimate values will be less than $\phi$ and it outputs $0$ as soon as it enters any branch all of whose ultimate values will be $\phi$ or greater.
You can easily establish that the chance of outputting $1$ is no less than any finite binary number less than $\phi$ and is no greater than any finite binary number greater than $\phi,$ demonstrating $f$ implements a Bernoulli$(\phi)$ variable.
It is also straightforward to show that on any call, $f$ has a $1/2$ chance of terminating, whence (a) it will terminate almost surely (b) with an expected number of calls equal to $1+1/2+1/2^2+\cdots = 2.$
Here is an R implementation. sample.int(2,1) implements the sequence of $X_i:$ it returns 1 and 2 with equal probabilities.
f <- function(phi) {
X <- sample.int(2, 1)
if(isTRUE(phi >= 1/2)) {
if (isTRUE(X == 1)) return(1) else return(f(2*phi - 1))
} else {
if (isTRUE(X == 1)) return(0) else return(f(2*phi))
}
}
I applied this two thousand times to each of 128 randomly-generated floating point numbers in $[0,1],$ keeping track of the calls to $f$ and comparing the mean value (which estimates $\phi$) to $\phi$ itself with a Z score. This required generating a quarter million Bernoulli$(\phi)$ values (for various $\phi$). On a single core it took 2.5 seconds, showing it is practicable and reasonably efficient.
These graphics summarize the results.
Most Z scores are between $-2$ and $2,$ as expected of a correct procedure.
All averages are close to $2,$ as claimed, and do not depend on the value of $\phi,$ as indicated by the near-horizontal Loess smooth.
It is rare, in any of these simulations, for any call to $f$ to nest more deeply than $15$ in the recursion stack. In other words, there is essentially no risk that any one call to $f$ will take an inordinately long time. (This can be proven by examining the geometric distribution of the number of calls to $f.$)
R code
This is the full (reproducible) simulation study.
#
# The algorithm. It requires 0 <= phi <= 1.
#
f <- function(phi) {
COUNT <<- COUNT+1 # For the study only--not an essential part of `f`
X <- sample.int(2, 1)
if(isTRUE(phi >= 1/2)) {
if (isTRUE(X == 1)) return(1) else return(f(2*phi - 1))
} else {
if (isTRUE(X == 1)) return(0) else return(f(2*phi))
}
}
#
# Simulation study.
#
set.seed(17)
replications <- 2e3
COUNT <- 0
system.time({
results <- sapply(seq_len(128), function(i) {
phi <- runif(1) # A (uniformly) random probability to study
COUNT <<- 0 # Total number of calls to `f`
MAX <- 0 # Largest number of calls for any one value of `phi`
sim <- replicate(replications, {
count <- COUNT
x <- f(phi)
if (COUNT - count > MAX) MAX <<- COUNT - count
x
})
m <- mean(sim) # The simulation estimate of `phi`
se <- sqrt(var(sim) / length(sim)) # Its standard error
c(Value=phi, Estimate=m, SE=se, Z=(m-phi)/se,
Calls=COUNT, Max=MAX, Replications=length(sim), Expectation=COUNT/length(sim))
})
})
#
# Plots.
#
X <- as.data.frame(t(results))
sub <- paste(replications, "replications")
ggplot(X, aes(Value, Z)) + geom_point(alpha=1/2) + ggtitle("Z Scores", sub)
ggplot(X, aes(Value, Max)) + geom_point(alpha=1/2) + ggtitle("Most Calls to f", sub)
ggplot(X, aes(Value, Expectation)) + geom_point(alpha=1/2) + geom_smooth(span=1) +
ggtitle("Average Calls to f", sub) | What are relatively simple simulations that succeed with an irrational probability? | There is a universal algorithm. It doesn't matter whether the probability is irrational or not.
It suffices to implement a procedure to output either $0$ or $1$ that will (a) almost surely terminate a | What are relatively simple simulations that succeed with an irrational probability?
There is a universal algorithm. It doesn't matter whether the probability is irrational or not.
It suffices to implement a procedure to output either $0$ or $1$ that will (a) almost surely terminate and (b) output $1$ with a probability $\phi,$ where $\phi$ is any number in the interval $[0,1]$ (rational or irrational). The following description relies on an arbitrarily long sequence of iid Bernoulli$(1/2)$ variables $X_1,X_2,X_3,\ldots.$
Procedure f(phi):
for i in 1, 2, 3, ...
if phi >= 1/2 then
if X[i]==0 return(1) else return(f(2*phi-1))
else
if X[i]==0 return(0) else return(f(2*phi))
It is manifestly simple, using only (a) comparison to $1/2,$ (b) multiplication by $2,$ and (c) subtraction of $1.$
This algorithm randomly walks the binary tree determined by binary expansions of real numbers in the interval $[0,1].$ It outputs $1$ as soon as it enters a branch all of whose ultimate values will be less than $\phi$ and it outputs $0$ as soon as it enters any branch all of whose ultimate values will be $\phi$ or greater.
You can easily establish that the chance of outputting $1$ is no less than any finite binary number less than $\phi$ and is no greater than any finite binary number greater than $\phi,$ demonstrating $f$ implements a Bernoulli$(\phi)$ variable.
It is also straightforward to show that on any call, $f$ has a $1/2$ chance of terminating, whence (a) it will terminate almost surely (b) with an expected number of calls equal to $1+1/2+1/2^2+\cdots = 2.$
Here is an R implementation. sample.int(2,1) implements the sequence of $X_i:$ it returns 1 and 2 with equal probabilities.
f <- function(phi) {
X <- sample.int(2, 1)
if(isTRUE(phi >= 1/2)) {
if (isTRUE(X == 1)) return(1) else return(f(2*phi - 1))
} else {
if (isTRUE(X == 1)) return(0) else return(f(2*phi))
}
}
I applied this two thousand times to each of 128 randomly-generated floating point numbers in $[0,1],$ keeping track of the calls to $f$ and comparing the mean value (which estimates $\phi$) to $\phi$ itself with a Z score. This required generating a quarter million Bernoulli$(\phi)$ values (for various $\phi$). On a single core it took 2.5 seconds, showing it is practicable and reasonably efficient.
These graphics summarize the results.
Most Z scores are between $-2$ and $2,$ as expected of a correct procedure.
All averages are close to $2,$ as claimed, and do not depend on the value of $\phi,$ as indicated by the near-horizontal Loess smooth.
It is rare, in any of these simulations, for any call to $f$ to nest more deeply than $15$ in the recursion stack. In other words, there is essentially no risk that any one call to $f$ will take an inordinately long time. (This can be proven by examining the geometric distribution of the number of calls to $f.$)
R code
This is the full (reproducible) simulation study.
#
# The algorithm. It requires 0 <= phi <= 1.
#
f <- function(phi) {
COUNT <<- COUNT+1 # For the study only--not an essential part of `f`
X <- sample.int(2, 1)
if(isTRUE(phi >= 1/2)) {
if (isTRUE(X == 1)) return(1) else return(f(2*phi - 1))
} else {
if (isTRUE(X == 1)) return(0) else return(f(2*phi))
}
}
#
# Simulation study.
#
set.seed(17)
replications <- 2e3
COUNT <- 0
system.time({
results <- sapply(seq_len(128), function(i) {
phi <- runif(1) # A (uniformly) random probability to study
COUNT <<- 0 # Total number of calls to `f`
MAX <- 0 # Largest number of calls for any one value of `phi`
sim <- replicate(replications, {
count <- COUNT
x <- f(phi)
if (COUNT - count > MAX) MAX <<- COUNT - count
x
})
m <- mean(sim) # The simulation estimate of `phi`
se <- sqrt(var(sim) / length(sim)) # Its standard error
c(Value=phi, Estimate=m, SE=se, Z=(m-phi)/se,
Calls=COUNT, Max=MAX, Replications=length(sim), Expectation=COUNT/length(sim))
})
})
#
# Plots.
#
X <- as.data.frame(t(results))
sub <- paste(replications, "replications")
ggplot(X, aes(Value, Z)) + geom_point(alpha=1/2) + ggtitle("Z Scores", sub)
ggplot(X, aes(Value, Max)) + geom_point(alpha=1/2) + ggtitle("Most Calls to f", sub)
ggplot(X, aes(Value, Expectation)) + geom_point(alpha=1/2) + geom_smooth(span=1) +
ggtitle("Average Calls to f", sub) | What are relatively simple simulations that succeed with an irrational probability?
There is a universal algorithm. It doesn't matter whether the probability is irrational or not.
It suffices to implement a procedure to output either $0$ or $1$ that will (a) almost surely terminate a |
52,773 | What can cause exploding statistic values and p-values near zero with the Wilcoxon-Mann-Whitney test? | Nothing out of the ordinary is going on from the sound of it.
In almost all cases, I get huge values for the statistic
Have you looked at the range of possible values for the statistic?
For the usual form of the U-statistic, it can take values between $0$ and $mn$ where $m$ and $n$ are the two sample sizes.
If you divide the statistic by $mn$, you get $\frac{U}{mn}$, which is the proportion (rather than the count) of cases in which a value from one sample exceeds a value from the other, which takes values between $0$ and $1$. The null case corresponds to an expected proportion of $\frac12$ (with standard error $\sqrt{\frac{m+n+1}{12mn}}$).
Alternatively, you could look at a z-score, which you may find somewhat more intuitive than the raw test statistic.
Is it possible to get p-values around zero with these tests and what does that mean?
Certainly. Unless your sample sizes are very small, extremely small p-values are possible.
For a one-tailed test the p-value may be as small as $\frac{m!\, n!}{(m+n)!}$ and twice that for a two-tailed test. For example with small sample sizes of $m=n=10$, you could see a two-tailed p-value as small as $1/92378$ or about $0.000011$ and the smallest available p-values decrease very rapidly as sample sizes increase. Doubling both sample sizes to $m=n=20$ reduces the smallest possible p-value by a factor of about $746000$, to $1.45\times 10^{-11}$.
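Those minimum p-values are easy to verify, since $\frac{m!\,n!}{(m+n)!} = 1/\binom{m+n}{n}$; a quick check in R:
# Smallest attainable two-sided p-value: 2 * m! n! / (m+n)! = 2 / choose(m+n, n).
min_p <- function(m, n) 2 / choose(m + n, n)
min_p(10, 10)                   # 2/184756 = 1/92378, about 1.1e-05
min_p(20, 20)                   # about 1.45e-11
min_p(10, 10) / min_p(20, 20)   # reduction factor, roughly 746000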
Is there a cause which could explain this behavior?
A small to moderate effect size with large samples or a large effect size with smaller samples can both do it.
52,774 | How to interpret negative values for -2LL, AIC, and BIC? | The bottom line is that (as Jeremy Miles says) the value of the negative log-likelihood doesn't really matter, only differences between the negative log-likelihoods. But you might still wonder why you are getting negative values.
Reproducing an answer of mine from here:
Technically, a probability cannot be >1, so a log-likelihood cannot be >0, so a negative log-likelihood cannot be negative. AIC/BIC etc. are composed of negative log-likelihoods plus positive 'penalty' terms (except in cluster analysis, where people typically flip the sign of the definitions!), so they can't be negative either.
However, there is one common case where we can get negative values for the "negative log-likelihood" function we use (which in these cases isn't exactly a negative log-likelihood). (These in turn can give rise to negative AICs (i.e., $-2\log(L)+2k<0$).)
For continuous response variables, what we are writing down is really a negative log-likelihood density function, rather than a negative log-likelihood function. For example, here's a picture of the normal density with μ=0,σ=0.1.
You can see that the density goes above 1, which means that the log density is >0, which means that the negative log-likelihood density is negative. This will happen any time the likelihood curve is very narrow. (Mathematically, there should be an infinitesimal $dx$ term in our expressions — we typically ignore this because it doesn't affect our inferences.)
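A two-line check in R makes the point concrete (the AIC example uses simulated data, not anything from the question):
dnorm(0, mean = 0, sd = 0.1)        # about 3.99: a density can exceed 1
log(dnorm(0, mean = 0, sd = 0.1))   # > 0, so the negative log-density is negative
set.seed(1)
x <- rnorm(100, sd = 0.05)          # a tightly concentrated continuous response
AIC(lm(x ~ 1))                      # comes out negative for exactly this reason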
Another common scenario (but not one where I have found an example) is that we often drop the normalization constants in likelihood expressions because they're a nuisance and they don't affect inference. If we had a discrete distribution (so that the likelihood was really a probability and had to be <= 1) and the normalization constant was sufficiently small, dropping the normalization constant could in principle give us an expression that was >1. I haven't found an example where this actually happens, though ...
52,775 | How to interpret negative values for -2LL, AIC, and BIC? | Yes.
-2 LL means -2 multiplied by the log likelihood.
AIC, BIC etc are (as far as I know) only interpreted in relation to other values from different models. An AIC of -100 doesn't mean anything on its own. It means something when a different model, using the same data, has an AIC of -90, so the difference is 10. The difference is the interesting thing.
52,776 | What are some examples when the Average Treatment Effect on the Treated/Control (ATT,ATC) is more sought after than the ATE? | I'm writing a paper about this very topic, so I'll just summarize here and update with a link to the paper when it's ready. (Edit: Here is the arxiv version.) In short, the ATE, ATT, and ATC can be described as follows:
The ATE is the average effect of mandating a policy of treatment for everyone vs. mandating a policy of control for everyone
The ATT is the average effect of withholding treatment from those who would normally receive it
The ATC is the average effect of expanding treatment to those who would not normally receive it
It's important to also recognize that these are average effects in the study population, which may be narrowly defined (e.g., to eligible patients or patients with equipoise). These effects only differ from each other when treatment assignment is not random and when the treatment effect differs across individuals based on qualities related to treatment assignment.
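A toy simulation (my own illustration, not part of the cited paper) shows how the three estimands drift apart once the treatment effect varies with a covariate that also drives treatment assignment:
# Hypothetical data-generating process: the effect is 1 + x, and treatment is
# more likely when x is large, so ATT > ATE > ATC.
set.seed(42)
n  <- 1e5
x  <- rnorm(n)                       # effect modifier, also a confounder
a  <- rbinom(n, 1, plogis(x))        # treatment assignment depends on x
y0 <- x
y1 <- x + 1 + x                      # potential outcome under treatment
c(ATE = mean(y1 - y0),
  ATT = mean((y1 - y0)[a == 1]),
  ATC = mean((y1 - y0)[a == 0]))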
The ATE might be useful when evaluating a policy that applies to everyone, e.g., whether medical providers should unilaterally prefer surgery A vs. surgery B for eligible patients. The ATE involves comparing the outcomes of a counterfactual world where everyone receives the treatment to a counterfactual world in which no one does. In my opinion, this is almost never a useful comparison unless the population is very narrowly defined to be a group where such policy would make sense. For example, it wouldn't make sense to ask what the ATE of smoking on lung cancer would be for the entire US population (e.g., based on a national survey) because you would never be interested in comparing a policy where nobody smoked to a policy where everyone smoked.
The ATT might be useful when deciding whether to ban a currently implemented practice or continue an experimental program. The ATT is the effect of withholding the treatment from those who receive it, so it is only concerned with those actually being treated. For example, it might make sense to ask what the ATT of smoking is on lung cancer, because you may be interested in a policy of withholding (i.e., preventing) smoking among those currently smoking. You might also be interested in the ATT of a program that would only be eligible to people like the current participants; this would help decide whether you should continue implementing the program (and its selection policy).
The ATC might be useful when deciding whether to extend a currently implemented program to those not currently receiving it. It is only concerned with those not receiving the treatment. The ATC might be a good estimand when challenging current clinical practice, e.g., when treatment is withheld from some patients who might actually benefit from it. It might be useful for understanding the effect of a campaign on an untapped market, e.g., to examine the effect of seeing an ad for a product on purchasing the product among those who don't typically see the ad.
The ATE is an extremely coarse estimand (unless the population is narrowly defined). It considers a policy that makes no reference to how people normally come to receive or not receive the treatment. The ATT and ATC are slightly more specific because they only consider specific relevant subgroups of units and respect the natural process of treatment assignment.
The ATT and ATC also require fewer (unverifiable) assumptions for identification than the ATE. The ATE requires mean conditional exchangeability for both treatment values, i.e.,
$$
E[Y^1|A=a, X] = E[Y^1| X] \\
E[Y^0|A=a, X] = E[Y^0| X]
$$
for all $a$ in $\{0,1\}$, where $A$ is the treatment, $X$ are the confounders, and $Y^a$ is the potential outcome under treatment $a$, whereas the ATT only requires
$$
E[Y^0|A = 1, X] = E[Y^0|A=0, X]
$$
and the ATC only requires
$$
E[Y^1|A = 0, X] = E[Y^1|A=1, X]
$$
Similarly, the requirements for positivity are slightly more relaxed for the ATT and ATC. For the ATE, $0 < P(A=1|X) < 1$, but the ATT only requires $P(A = 1|X) < 1$ and the ATC requires $P(A = 1|X) > 0$. Put simply, the ATE requires complete overlap in the covariate space for both groups, whereas the ATT only requires overlap in the covariate space of the treated units and the ATC only requires overlap in the covariate space of the controls.
It's important to formulate your research question in a way that actually answers the substantive question you want to answer. The ATE of smoking is uninteresting. The ATT doesn't help you decide whether you should enroll new types of participants in a program. The ATC doesn't help you decide whether a currently implemented treatment is working. The reason this is so important is that different statistical methods target different estimands. Researchers need to decide on the estimand that makes the most sense for them and then use the correct statistical method that corresponds to that estimand.
52,777 | Why use RBF kernel if less is needed? | One way of looking at it is to say that the RBF kernel dynamically scales the feature space with the number of points. As we know from geometry, for $p$ points you can always draw an at most $(p-1)$-dimensional hyperplane through them. That's the inherent dimensionality of the space implied by the RBF kernel. But, as you add more points, the dimensionality of the space rises accordingly. That makes the RBF kernel quite flexible. It gives you linear separability irrespective of the number of points.
Update in response to comment:
I cannot give you a link to a formal proof, but I assume it shouldn't be too hard to construct. We know that:
a kernel is the dot product in a feature space,
$k(x, y) = \| \varphi(x) \| \cdot \| \varphi(y) \| \cdot \cos(\angle(\varphi(x), \varphi(y)))$, and, consequently
$k(x, x) = \| \varphi(x) \|^2$
RBF is a kernel,
for the RBF kernel, $k(x, x) = e^0 = 1$, and
$k(x, \infty) = e^{-\infty} = 0$
Geometrically, the RBF kernel projects the points onto a segment of a hypersphere with a radius of $1$ in a $p$-dimensional space. Points which are close to each other in the input space are mapped onto nearby points in the feature space. Points which are far from each other in the input space are mapped onto (close to) orthogonal points on the hypersphere.
Theoretically, points in the RBF-induced feature space are always linearly separable, irrespective of $\gamma$. It's just a numerical issue that for a small $\gamma$ it could become hard to find the separating hyperplane.
On the other hand, if you choose $\gamma$ very large, you will push all the projections into corners of the hypercube enclosing the hypersphere: $(1, 0, 0, \ldots), (0, 1, 0, \ldots), (0, 0, 1, \ldots)$ etc. This will give you a trivially simple separability on the training set, but very bad generalisation.
Update (graphical example):
To get some intuition, observe this trivially simple, one-dimensional dataset. It is obvious that no linear boundary can separate the two classes, blue and red:
But, the RBF kernel transforms the data into a 3D feature space where they become linearly separable. If we denote $k_{ij} = k(x_i, x_j)$, it is easy to see that the transformation
$$
\begin{array}{rrrrrrr}
\textbf{z}_1 = \varphi(x_1) & = & [ & 1, & 0, & 0 & ]^T \\
\textbf{z}_2 = \varphi(x_2) & = & [ & k_{12}, & z_{22}, & 0 & ]^T \\
\textbf{z}_3 = \varphi(x_3) & = & [ & k_{13}, & (k_{23} - k_{12}k_{13}) / z_{22}, & z_{33} & ]^T \\
\end{array}
$$
reproduces the RBF kernel, $k(x_i, x_j) = \varphi(x_i) \cdot \varphi(x_j)$, where $z_{22} = \sqrt{1 - k_{12}^2}$ and $z_{33} = \sqrt{1 - z_{31}^2 - z_{32}^2}$. The kernel parameter $\gamma$ controls how far the points get in the feature space:
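The claim is easy to check numerically: the explicit $\textbf{z}_i$ above are just a Cholesky factor of the $3\times 3$ Gram matrix, so (for some assumed points and $\gamma$) the finite-dimensional embedding reproduces the kernel exactly:
# Sketch with assumed inputs: three 1-D points and an arbitrary gamma.
x <- c(-1, 0, 1); gamma <- 0.5
K <- exp(-gamma * outer(x, x, function(a, b) (a - b)^2))  # RBF Gram matrix
Z <- t(chol(K))              # rows are z_1, z_2, z_3 from the display above
max(abs(Z %*% t(Z) - K))     # ~ 1e-16: the 3-D embedding reproduces the kernel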
As you can see, as $\gamma \rightarrow 0$, the points get very close to each other. But this is only a part of the problem. If we zoom in for a small $\gamma$, we see that the points still lie on an almost straight line:
True, the line is not exactly straight, but slightly bent, so there exists a plane to separate the two classes, but the margin is very thin and numerically hard to satisfy. You may say that $\gamma$ controls the non-linearity of the transformation: the smaller the $\gamma$, the closer the transformation is to the linear one.
52,778 | Why use RBF kernel if less is needed? | While the points are notionally mapped into an infinite-dimensional space, they will necessarily lie within an at-most $p$-dimensional sub-space (as there are only $p$ points). Note that the (notionally infinite-dimensional) primal weight vector is a linear combination of the images of the support vectors in the feature space,
$\vec{w} = \sum_{i=1}^\ell y_i\alpha_i\phi(\vec{x}_i)$
which means that the vector is also required to lie within that $p$-dimensional sub-space. The additional dimensions are essentially irrelevant and do not affect the resulting model in any way.
This is why the ``kernel trick'' allows us to represent a notionally infinite-dimensional space using only finite dimensional quantities (such as the Gram matrix).
However, there is another reason for using the RBF kernel, which is that the problem may not be linearly separable. Consider the case where the $p=N$ points in the dataset are all co-linear, lying along a straight line in N dimensions. For most labelings of the points, there will be no decision boundary that classifies the data points without error. However, if we take those same points and use an RBF kernel, the points will be mapped onto the positive orthant of an infinite-dimensional unit hyper-sphere (as shown by @IgorF. +1), the points will no longer be co-linear, and any labelling of the points can be linearly separated (provided none of the points of different labels are exact duplicates).
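A minimal sketch of that situation, assuming the e1071 package is available (the data and tuning values are made up):
# Co-linear points with alternating labels: a linear SVM cannot fit them,
# while an RBF SVM can (gamma chosen large enough that the Gram matrix is
# close to the identity).
library(e1071)
x <- matrix(seq_len(20), ncol = 1)
y <- factor(rep(c(0, 1), 10))
fit_lin <- svm(x, y, kernel = "linear")
fit_rbf <- svm(x, y, kernel = "radial", gamma = 10, cost = 100)
mean(predict(fit_lin, x) == y)   # far below 1
mean(predict(fit_rbf, x) == y)   # typically 1: every training point recovered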
52,779 | understanding uniformly distributed success probability | This is a straightforward application of a standard introductory Bayesian example, namely that a beta prior is conjugate for a Bernoulli sampling distribution. Any introductory Bayesian statistics textbook will explain this in detail.
In brief, the idea is that if your prior distribution on a parameter is $\theta\sim B(\alpha,\beta)$, and you observe $n$ Bernoulli trials with success probability $\theta$ and $x$ successes (and $n-x$ failures), then your posterior distribution on $\theta$ is $B(\alpha+x,\beta+n-x)$. (The fact that the posterior is still a Beta is what makes this "conjugate".)
In the present case, your prior is uniform on $[0,1]$, which is just a special case of a Beta, namely $B(1,1)$.
In the "default" model, we have observed $3$ successes and $2$ failures, so the posterior is $B(1+3,1+2)=B(4,3)$. The expectation of this Beta distribution is $\frac{4}{4+3}\approx 57\%$.
In the "proposed" model, we discard the two failures and only consider the $3$ successes. Now the posterior is a $B(1+3,1)=B(4,1)$, with an expectation of $\frac{4}{4+1}=80\%$.
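The two figures are a one-liner to verify in R:
# Posterior mean of a Beta(a + successes, b + failures) posterior; a = b = 1
# is the uniform prior used above.
post_mean <- function(s, f, a = 1, b = 1) (a + s) / (a + s + b + f)
post_mean(3, 2)   # "default" model:  4/7, about 0.571
post_mean(3, 0)   # "proposed" model: 4/5 = 0.80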
52,780 | Incorrect implementation of a t-test | You missed the pairing. Your observations aren’t independent, since they come from the same subjects, just at different times. Taking paired differences and testing those differences is the correct approach. There is greater power (ability to reject a false null) when you do the pairing.
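A hypothetical before/after example makes the power difference visible:
# Simulated subjects with a large between-subject spread and a small true change.
set.seed(3)
subject <- rnorm(25, sd = 2)
before  <- subject + rnorm(25, sd = 0.5)
after   <- subject + 0.4 + rnorm(25, sd = 0.5)
t.test(after, before, paired = TRUE)$p.value  # typically small
t.test(after, before)$p.value                 # typically far from significant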
52,781 | Is there any hypothesis test for two binomial distribution without normal approximation? | You can just use a Fisher Exact Test. Let us know if you have trouble following what it does.
Not super related, but if you're thinking of difference of binomials, it's nice to convince yourself that if $p_1 \neq p_2$, then the difference is not itself a binomial! I think that's kinda fun to think about.
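A quick simulation of that aside (the parameter values are chosen arbitrarily):
# The difference of two independent binomials with different p even takes
# negative values, so it cannot itself be binomial.
set.seed(1)
d <- rbinom(1e5, 10, 0.7) - rbinom(1e5, 10, 0.2)
range(d)          # includes negative values
table(d)[1:5]     # a quick look at the left tail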
52,782 | Is there any hypothesis test for two binomial distribution without normal approximation? | So my question is, is there any statistical test available for given two binomial distributions $A \sim \mathrm{Bin}(n, p_a)$ and $B \sim \mathrm{Bin}(m, p_b)$ where $n$ and $m$ are the sample size of A and B to test if $p_a$ and $p_b$ are different without approximation to normal/Poisson distribution?
One way to do this is with Fisher's exact test. But Fisher's exact test is often not 'so exact', because it conditions on the total number of (marginal) cases, treating them as experimentally fixed numbers, which is often not the case. (The 'exact' in the name of this test refers to the exact computation instead of an approximation; in practice, it is often an exact computation for the wrong, non-exact, question.)
The alternative is using Barnard's test, which considers a range of null hypotheses $A \sim B \sim \mathrm{Bin}(n, p)$ where $p$ is an unknown (nuisance) parameter, and the test is done by selecting the worst case (highest p-value) out of all possible $p$.
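For a concrete (hypothetical) 2x2 table, Fisher's exact test is in base R; Barnard's test is available in contributed packages such as Exact or Barnard:
# Hypothetical counts: 8 of 11 successes in group A versus 2 of 9 in group B.
tab <- matrix(c(8, 2, 3, 7), nrow = 2,
              dimnames = list(group = c("A", "B"),
                              outcome = c("success", "failure")))
fisher.test(tab)$p.value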
52,783 | Proof that the mean of predicted values in OLS regression is equal to the mean of original values? [duplicate] | That is, for the set of predicted values $\{\hat{Y}_1, \hat{Y}_2, ...\}$ and the set of original values $\{Y_1, Y_2, ...\}$, the means of the sets are always equal.
The differences between the predicted values and the original values are the residuals
$$\hat{Y}_i = Y_i + r_i$$
So you can write
$$\begin{aligned}
\frac{1}{n} \left(\hat{Y}_1+ \hat{Y}_2+ ...\right) &= \frac{1}{n} \left(({Y}_1 + r_1)+( {Y}_2+r_2)+ ...\right) \\
&= \frac{1}{n} \left({Y}_1+ {Y}_2+ ...\right)+\frac{1}{n} \left(r_1+ r_2+ ...\right) \\
&= \frac{1}{n} \left({Y}_1+ {Y}_2+ ...\right)
\end{aligned}$$
and the last equality is true if the method has by design the following property $\left(r_1+ r_2+ ...\right) =0$ and that is the case for OLS. But note that this is only the case when the regression has an intercept term (as Christoph Hanck's answer explains). The residual term is perpendicular to the regressors. If the intercept is one of the regressors (or more generally as jld mentioned in the comments, if it's in the column space of the regressors) then the perpendicularity has as consequence that $\left(r_1,r_2,...\right) \cdot \left(1,1,...\right) = \left(r_1+ r_2+ ...\right) =0$
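A two-line numerical check of that property (simulated data):
# OLS residuals sum to zero when the model contains an intercept (or anything
# spanning the constant column), but generally not otherwise.
set.seed(7)
x <- rnorm(30); y <- 1 + 2 * x + rnorm(30)
sum(residuals(lm(y ~ x)))       # essentially 0
sum(residuals(lm(y ~ x - 1)))   # generally not 0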
In simple words you could say that the $\hat{Y}$ are placed equally in between the $Y$, as much above as below, and that is why they have the same mean.
52,784 | Proof that the mean of predicted values in OLS regression is equal to the mean of original values? [duplicate] | In matrix notation, the fitted values can be written as $\hat y=Py$, with the projection matrix $P=X(X'X)^{-1}X'$, which can be verified by plugging in the definition of the OLS estimator into the formula for the fitted values, $\hat y =X\hat\beta$.
Their mean is, with $\iota$ a vector of ones,
$$
\iota'Py/n,
$$
as the inner product with $\iota$ just sums up elements, $\iota'a=\sum_ia_i$.
In general, we have $PX=X$, as can be verified by direct multiplication.
Now, if $X$ contains $\iota$, i.e., if you have a constant in your regression, we have $P\iota=\iota$, as one of the columns of the result $PX=X$.
Hence, by symmetry of $P$ (which, again, can be verified directly),
$$
\iota'Py/n=\iota'y/n,
$$
the mean of $y$. Hence, the statement is true if we have a constant in our regression. It is - see the comment by @jld - however also true if there are columns of $X$ that can be combined into $\iota$. That would for example be the case if we have exhaustive dummy variables but no constant (to avoid the dummy variable trap).
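A direct numerical check of the $P\iota=\iota$ step (simulated regressors, with an intercept column), before the author's own illustration below:
# The projection matrix reproduces any column of X, in particular the constant.
set.seed(1)
X <- cbind(1, rnorm(20))
P <- X %*% solve(t(X) %*% X) %*% t(X)
iota <- rep(1, 20)
max(abs(P %*% iota - iota))   # ~ 1e-15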
A little numerical illustration:
y <- rnorm(20)
x <- rnorm(20)
lm_with_cst <- lm(y~x)
mean(y)
mean(fitted(lm_with_cst))
lm_without_cst <- lm(y~x-1)
mean(fitted(lm_without_cst))
Output:
> mean(y)
[1] 0.04139399
> mean(fitted(lm_with_cst))
[1] 0.04139399
> mean(fitted(lm_without_cst))
[1] 0.05660456
52,785 | Proof that the mean of predicted values in OLS regression is equal to the mean of original values? [duplicate] | It's intuitively clear. If the linear regression model is correct, the residuals are distributed with mean zero, so when you take the average, the residual term drops out and you are left only with the predicted values.
For example, if your model is
$y = c + ax + \epsilon$,
where
$c$ is constant vector,
$a$ is coefficient vector,
$x$ is the feature vector,
$\epsilon$ is the Gaussian residual vector.
When you take the expectation of $y$ for the mean, you get
$E(y) = E(c + ax + \epsilon) = E(c + ax) = E(\hat{y})$
because $E(\epsilon) = 0$, as the mean of the residuals is zero.
52,786 | What are the differences between tests for overidentification in 2SLS | There are a lot of questions here, so I'll first give an overview, and then explain a bit more. You have 4 tests you're asking about: Hausman test, Sargan test, a Wald test of exogeneity, and a Hansen J Test. To fix some notation, let $Z$ be a vector of instruments, and consider $Y = \beta_1 X_1 + \beta_2 X_2 + e$, where $X_1$ are exogenous variables you include in the model, and $X_2$ are endogenous, and you wish to use $Z$ instruments for $X_2$. Sometimes I will use $X = (X_1,X_2)$ as well. In what follows, I will describe each test, and then provide intuition and an approach. I may stray from my notation during the intuition parts, but tried to stick to it during the approach parts.
Before beginning, the TL;DR is that Wald and Hausman test for exogeneity of $X$ (assuming exogeneity of $Z$), and Hansen's J and Sargan test for exogeneity of $Z$ (assuming you have more instruments than endogenous variables). Wald and Hausman are very similar, but Wald is often better than Hausman, and Sargan is a simpler version of Hansen's J used with TSLS (Hansen's J is used with IV-GMM). Since the Hausman and Sargan tests check different things, it makes sense that you get different results.
Here's an explanation of what each test basically does:
Wald test of exogeneity: You assume that the instruments $Z$ satisfy exogeneity, and you test if $X_2$ may actually be exogenous.
Intuition: You have a valid instrument $Z$ (this assumption is key) for some variable $X$, and the first stage basically fits $X = \hat{\alpha} Z + \hat{e}$, and intuitively, in TSLS, we replace $X$ with $\hat{\alpha}Z$ in the second stage, which is the part of $X$ that is predicted by $Z$. Now what's $\hat{e}$? Well it's the part of $X$ that is unexplained by $Z$. If we ran a regression of $Y$ on $\hat{e}$. and find $\hat{e}$ has no effect on $Y$, then the part of $X$ that explains $Y$ is basically accounted for by $\hat{\alpha}Z$, but since $Z$ is exogenous by assumption, then $X$'s effect on $Y$ is a combination of the $Z$ fitted part and the $Z$ unfitted part, but we just found out that the unfitted part does not matter, and so $X$ is actually exogenous for all intents and purposes: the only part of it that matters is the part explained by $Z$, and $Z$ itself is exogenous, so $X$ must be exogenous. In such a case, you don't have to use IV and can just run OLS, which is more efficient.
Approach: we run the regression $Y = \delta_1 X + \delta_2\text{resid}(X_2) + \epsilon$, where $\text{resid}(X_2)$ are the residuals from the first stage regression of $X_2$ on $Z$. Then the exogeneity test is a Wald test that $\delta_2 = 0$ (ie jointly testing that all coefficients in the vector $\delta_2$ are $0$). Rejecting the test means that $X_2$ is not exogenous.
Hausman's test for endogeneity: This test is very similar to the above Wald test, and should be quite similar (I think exactly the same) under homoscedasticity. It is not used because we don't want to necessarily impose such an assumption, and because it involves a generalized inversion of a matrix that is often hard to calculate numerically. So we instead use a Wald test as above.
Intuition: Same as Wald test above.
Approach: First get first stage of TSLS and get the residuals $r$. Then run a regression $Y = \beta X + \delta r$ and test if $\delta = 0$. If significantly different, $(X_1,X_2)$ is not exogenous, and you should use TSLS, otherwise you can use the more efficient OLS. Note that unlike Wald test, the first stage and residuals are for all $(X_1,X_2)$ using $(X_1,Z)$, not just $X_2$.
Hansen's J: If we have more instruments than endogenous variables, i.e. $dim(Z) > dim(X_2)$, then we can test if all instruments are exogenous assuming that at least one of them is exogenous.
Intuition: If $dim(Z) > dim(X_2)$, we have more instruments than we need, and so we can actually use some of them for testing purposes instead of using them for estimation. I'm not super sure how to give an intuitive explanation here, but basically if we have more instruments than needed, TSLS will use all these instruments to build a set of instruments of $dim(X_2)$, and so I can take the residuals of TSLS from using this 'reduced' set of instruments, and then run a regression of these residuals (denoted $r_{TSLS}$) on $Z$. If $dim(Z) = dim(X_2)$, then by construction the coefficient of such a regression will be $0$, i.e. $r_{TSLS} = \hat{\alpha}Z$ will always result in $\hat{\alpha} = 0$, and so we don't learn anything. In contrast, if $dim(Z) > dim(X_2)$, then this need not be the case, but if the instruments truly were exogenous, then it should still be $0$. This is what we are testing here.
Approach: This is really used with IV-GMM, which is not what you are doing, so I don't know how much you want to know about this. As I'll next explain, the Sargan test is basically the simplified version of this test used with TSLS (the analogy is typically as follows: IV is to GMM as Sargan's test is to Hansen's J test).
Sargan: Very similar to Hansen's J. We use it to test exogeneity of instruments assuming at least one is exogenous, when we have more instruments than $X_2$ endogenous variables. It is popular when performing TSLS. Following the comment below, it seems that Hausman's test for overidentification, as defined by the OP in Section 15.5 of Wooldridge's Introductory Econometrics, is also defined as this test.
Intuition: Same as Hansen's J.
Approach: If we assume homoskedasticity, Sargan's test is a special case of Hansen's J test. We first run TSLS with all instruments, and get the residuals, and then regress these on the instruments. The sample size times the $R^2$ of this regression is approximately $\chi^2$-distributed with the number of excess instruments as degrees of freedom.
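A self-contained simulated sketch of the two "Approach" recipes above (one endogenous regressor, two instruments; all names and values here are made up for illustration):
set.seed(1)
n  <- 5000
z1 <- rnorm(n); z2 <- rnorm(n); u <- rnorm(n)
x2 <- z1 + z2 + u + rnorm(n)      # endogenous: shares u with the outcome error
y  <- 1 + x2 + u + rnorm(n)
# Control-function / Wald test of exogeneity: is the first-stage residual relevant?
first <- lm(x2 ~ z1 + z2)
cf    <- lm(y ~ x2 + resid(first))
coef(summary(cf))["resid(first)", ]          # clearly significant => x2 endogenous
# Sargan: regress the TSLS residuals on the instruments; n * R^2 ~ chi^2(1) here.
b_tsls <- coef(lm(y ~ fitted(first)))        # second-stage coefficients
r_tsls <- drop(y - cbind(1, x2) %*% b_tsls)  # residuals use the actual x2
aux    <- lm(r_tsls ~ z1 + z2)
sargan <- n * summary(aux)$r.squared
c(statistic = sargan, p.value = pchisq(sargan, df = 1, lower.tail = FALSE))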
There are a lot of questions here, so I'll first give an overview, and then explain a bit more. You have 4 tests you're asking about: Hausman test, Sargan test, a Wald test of exogeneity, and a Hansen J Test. To fix some notation, let $Z$ be a vector of instruments, and consider $Y = \beta_1 X_1 + \beta_2 X_2 + e$, where $X_1$ are exogenous variables you include in the model, and $X_2$ are endogenous, and you wish to use $Z$ instruments for $X_2$. Sometimes I will use $X = (X_1,X_2)$ as well. In what follows, I will describe each test, and then provide intuition and an approach. I may stray from my notation during the intuition parts, but tried to stick to it during the approach parts.
Before beginning, the TL;DR is that Wald and Hausmann test for exogeneity of $X$ (assuming exogeneity of $Z$), and Hansen's J and Sargan test for exogeneity of $Z$ (assuming you have more instruments than endogenous variables). Wald and Hausmann are very similar, but Wald is often better than Hausmann, and Sargan is a simpler version of Hansen's J used with TSLS (Hansen's J is used with IV-GMM). Since Hausman and the Sargan test different things, it makes sense you get different results.
Here's an explanation of what each test basically does:
Wald test of exogeneity: You assume that the instruments $Z$ satisfy exogeneity, and you test if $X_2$ may actually be exogenous.
Intuition: You have a valid instrument $Z$ (this assumption is key) for some variable $X$, and the first stage basically fits $X = \hat{\alpha} Z + \hat{e}$, and intuitively, in TSLS, we replace $X$ with $\hat{\alpha}Z$ in the second stage, which is the part of $X$ that is predicted by $Z$. Now what's $\hat{e}$? Well it's the part of $X$ that is unexplained by $Z$. If we ran a regression of $Y$ on $\hat{e}$. and find $\hat{e}$ has no effect on $Y$, then the part of $X$ that explains $Y$ is basically accounted for by $\hat{\alpha}Z$, but since $Z$ is exogenous by assumption, then $X$'s effect on $Y$ is a combination of the $Z$ fitted part and the $Z$ unfitted part, but we just found out that the unfitted part does not matter, and so $X$ is actually exogenous for all intents and purposes: the only part of it that matters is the part explained by $Z$, and $Z$ itself is exogenous, so $X$ must be exogenous. In such a case, you don't have to use IV and can just run OLS, which is more efficient.
Approach: we run the regression $Y = \delta_1 X + \delta_2\text{resid}(X_2) + \epsilon$, where $\text{resid}(X_2)$ are the residuals from the first stage regression of $X_2$ on $Z$. Then the exogeneity test is a Wald test that $\delta_2 = 0$ (ie jointly testing that all coefficients in the vector $\delta_2$ are $0$). Rejecting the test means that $X_2$ is not exogenous.
Hausman's test for endogeneity: This test is very similar to the above Wald test, and should be quite similar (I think exactly the same) under homoscedasticity. It is not used because we don't want to necessarily impose such an assumption, and because it involves a generalized inversion of a matrix that is often hard to calculate numerically. So we instead use a Wald test as above.
Intuition: Same as Wald test above.
Approach: First get first stage of TSLS and get the residuals $r$. Then run a regression $Y = \beta X + \delta r$ and test if $\delta = 0$. If significantly different, $(X_1,X_2)$ is not exogenous, and you should use TSLS, otherwise you can use the more efficient OLS. Note that unlike Wald test, the first stage and residuals are for all $(X_1,X_2)$ using $(X_1,Z)$, not just $X_2$.
Hansen's J: If we have more instruments than endogenous variables, i.e. $dim(Z) > dim(X_2)$, then we can test if all instruments are exogenous assuming that at least one of them is exogenous.
Intuition: If $dim(Z) > dim(X_2)$, we have more instruments than we need, and so we can actually use some of them for testing purposes instead of using them for estimation. I'm not super sure how to give an intuitive explanation here, but basically if we have more instruments than needed, TSLS will use all these instruments to build a set of instruments of dimension $dim(X_2)$, and so I can take the residuals of TSLS from using this 'reduced' set of instruments, and then run a regression of these residuals (denoted $r_{TSLS}$) on $Z$. If $dim(Z) = dim(X_2)$, then by construction the coefficient of such a regression will be $0$, i.e. $r_{TSLS} = \hat{\alpha}Z$ will always result in $\hat{\alpha} = 0$, and so we don't learn anything. In contrast, if $dim(Z) > dim(X_2)$, then this need not be the case, but if the instruments truly were exogenous, then it should still be $0$. This is what we are testing here.
Approach: This is really used with IV-GMM, which is not what you are doing, so I don't know how much you want to know about this. As I'll next explain, the Sargan test is basically the simplified version of this test used with TSLS (the analogy is typically as follows: IV is to GMM as Sargan's test is to Hansen's J test).
Sargan: Very similar to Hansen's J. We use it to test exogeneity of the instruments, assuming at least one is exogenous, when we have more instruments than endogenous variables $X_2$. It is popular when performing TSLS. Following the comment below, it seems that the Hausman test for overidentification, as defined by the OP from Section 15.5 of Wooldridge's Introductory Econometrics, is also this test.
Intuition: Same as Hansen's J.
Approach: If we assume homoskedasticity, Sargan's test is a special case of Hansen's J test. We first run TSLS with all instruments and get the residuals, and then regress these on the instruments. The sample size times the $R^2$ of this regression is approximately $\chi^2$ with the number of excess instruments as degrees of freedom.
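A hedged R sketch of this (again with illustrative names: one endogenous regressor x2 and two excluded instruments z1, z2, so one overidentifying restriction; it assumes the AER package for ivreg()):
library(AER)
iv_fit <- ivreg(y ~ x1 + x2 | x1 + z1 + z2, data = dat)  # TSLS with all instruments
u_hat  <- resid(iv_fit)                                  # TSLS residuals
aux    <- lm(u_hat ~ x1 + z1 + z2, data = dat)           # residuals on all exogenous vars/instruments
sargan <- nobs(aux) * summary(aux)$r.squared             # n * R^2
pchisq(sargan, df = 1, lower.tail = FALSE)               # df = excess instruments (2 excluded - 1 endogenous)
As a cross-check, summary(iv_fit, diagnostics = TRUE) also reports a Sargan statistic directly.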
There are a lot of questions here, so I'll first give an overview, and then explain a bit more. You have 4 tests you're asking about: Hausman test, Sargan test, a Wald test of exogeneity, and a Hansen |
52,787 | How to do “broken stick linear regression” in R? | Here is how to do this for one cultivar:
plot(shoot ~ P, data = subset(DF, cultivar == "Dinninup"))
fit1 <- nls(shoot ~ ifelse(P < bp, m * P + c, m * bp + c),
data = subset(DF, cultivar == "Dinninup"),
start = list(c = 1, m = 0.05, bp = 25), na.action = na.omit)
summary(fit1)
#Formula: shoot ~ ifelse(P < bp, m * P + c, m * bp + c)
#
#Parameters:
# Estimate Std. Error t value Pr(>|t|)
#c 0.831689 0.105243 7.903 4.31e-07 ***
#m 0.033129 0.005829 5.684 2.69e-05 ***
#bp 24.700463 2.193671 11.260 2.65e-09 ***
#---
#Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#
#Residual standard error: 0.106 on 17 degrees of freedom
#
#Number of iterations to convergence: 2
#Achieved convergence tolerance: 2.205e-08
curve(predict(fit1, newdata = data.frame(P = x)), add = TRUE)
#calculate k:
coef(fit1)[["c"]] + coef(fit1)[["m"]] * coef(fit1)[["bp"]]
#[1] 1.65
You can then create a combined model using the approach from my answer to your question at Stack Overflow: https://stackoverflow.com/a/59677502/1412059
The model might be sensitive to starting values (in particular for the break point). You should take care and fit the model repeatedly with slightly different starting values. | How to do “broken stick linear regression” in R? | Here is how to do this for one cultivar:
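For example, a quick (hedged) way to check this is to refit from several jittered starting values and compare the resulting estimates; this reuses the data and formula from fit1 above:
set.seed(1)
starts <- replicate(20, list(c = 1 * runif(1, 0.5, 1.5),
                             m = 0.05 * runif(1, 0.5, 1.5),
                             bp = 25 * runif(1, 0.8, 1.2)), simplify = FALSE)
fits <- lapply(starts, function(s)
  try(nls(shoot ~ ifelse(P < bp, m * P + c, m * bp + c),
          data = subset(DF, cultivar == "Dinninup"),
          start = s, na.action = na.omit), silent = TRUE))
ok <- !sapply(fits, inherits, "try-error")  # keep only the fits that converged
t(sapply(fits[ok], coef))                   # the rows should agree if the fit is stable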
plot(shoot ~ P, data = subset(DF, cultivar == "Dinninup"))
fit1 <- nls(shoot ~ ifelse(P < bp, m * P + c, m * bp + c),
data = subset(DF, cultivar | How to do “broken stick linear regression” in R?
Here is how to do this for one cultivar:
plot(shoot ~ P, data = subset(DF, cultivar == "Dinninup"))
fit1 <- nls(shoot ~ ifelse(P < bp, m * P + c, m * bp + c),
data = subset(DF, cultivar == "Dinninup"),
start = list(c = 1, m = 0.05, bp = 25), na.action = na.omit)
summary(fit1)
#Formula: shoot ~ ifelse(P < bp, m * P + c, m * bp + c)
#
#Parameters:
# Estimate Std. Error t value Pr(>|t|)
#c 0.831689 0.105243 7.903 4.31e-07 ***
#m 0.033129 0.005829 5.684 2.69e-05 ***
#bp 24.700463 2.193671 11.260 2.65e-09 ***
#---
#Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#
#Residual standard error: 0.106 on 17 degrees of freedom
#
#Number of iterations to convergence: 2
#Achieved convergence tolerance: 2.205e-08
curve(predict(fit1, newdata = data.frame(P = x)), add = TRUE)
#calculate k:
coef(fit1)[["c"]] + coef(fit1)[["m"]] * coef(fit1)[["bp"]]
#[1] 1.65
You can then create a combined model using the approach from my answer to your question at Stack Overflow: https://stackoverflow.com/a/59677502/1412059
The model might be sensitive to starting values (in particular for the break point). You should take care and fit the model repeatedly with slightly different starting values. | How to do “broken stick linear regression” in R?
Here is how to do this for one cultivar:
plot(shoot ~ P, data = subset(DF, cultivar == "Dinninup"))
fit1 <- nls(shoot ~ ifelse(P < bp, m * P + c, m * bp + c),
data = subset(DF, cultivar |
52,788 | How to do “broken stick linear regression” in R? | The package mcp was made just for scenarios like this. See the bottom of this answer for how I structured your data as df.
Fit a change point model
First, let's define a slope followed by a joined plateau. We add varying (random) change point locations (the left-hand side of the equation):
model = list(
shoot ~ 1 + P, # intercept and slope
1 + (1|cultivar) ~ 0 # joined plateau
)
Now we fit the model with default priors:
library(mcp)
fit = mcp(model, data = df, iter = 5000)
Inspect the fit
Let's inspect the full fit for each cultivar:
plot(fit, facet_by = "cultivar", cp_dens = FALSE)
You can see raw parameter estimates using summary(fit) and the corresponding plot_pars(fit) (population-level). To focus on the varying change points (i.e., how each cultivar group deviates from the population-level change point (cp_1)), do ranef(fit) and plot_pars(fit, "varying").
Testing
Here are two ideas how to test hypotheses. If you want to test whether a given change point occurs later than another, do this to obtain Bayes Factors:
hypothesis(fit, "`cp_1_cultivar[Dinninup]` < `cp_1_cultivar[Yarloop]`")
I get a BF of around 16 for this one. If you want to test whether a varying change point improves predictive performance in general, fit a null model and use cross-validation:
model_null = list(shoot ~ 1 + P, ~ 0)
fit_null = mcp(model_null, data = df, iter = 5000)
# Leave-one-out cross-validation
fit$loo = loo(fit)
fit_null$loo = loo(fit_null)
loo::loo_compare(fit$loo, fit_null$loo)
You can read about mcp on the mcp website and the underlying models in the associated preprint.
Perhaps it would be appropriate to use an informative prior that the first slope is positive.
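A hedged sketch of such a prior, assuming mcp names the slope of P in the first segment P_1 (priors are JAGS strings; the second argument of dnorm is a precision, and T(0, ) truncates to positive values):
prior_pos <- list(P_1 = "dnorm(0, 1) T(0, )")  # positive first slope (assumed parameter name)
fit_pos <- mcp(model, data = df, prior = prior_pos, iter = 5000)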
Data
I removed cases with NA values, and put the rest in a data.frame:
df = data.frame(
P = c(12.1, 12.1, 12.1, 12.1, 15.17, 15.17, 15.17, 15.17, 18.24, 18.24, 18.24,
18.24, 24.39, 24.39, 24.39, 24.39, 48.35, 48.35, 48.35, 48.35,
12.1, 12.1, 12.1, 12.1, 15.17, 15.17, 15.17, 15.17, 18.24, 18.24,
18.24, 18.24, 24.39, 24.39, 24.39, 24.39, 48.35, 48.35, 48.35,
48.35, 12.1, 12.1, 12.1, 12.1, 15.17, 15.17, 15.17, 15.17, 18.24,
18.24, 18.24, 18.24, 24.39, 24.39, 24.39, 24.39, 48.35, 48.35,
48.35, 48.35, 12.1, 12.1, 12.1, 12.1, 15.17, 15.17, 15.17, 15.17,
18.24, 18.24, 18.24, 18.24, 24.39, 24.39, 24.39, 24.39, 48.35,
48.35, 48.35, 48.35),
shoot = c(1.24, 1.12, 1.28, 1.28, 1.37,
1.4, 1.39, 1.34, 1.34, 1.53, 1.25, 1.4, 1.44, 1.83, 1.65, 1.71,
1.52, 1.75, 1.63, 1.7, 1.23, 1.22, 1.26, 0.89, 1.2, 1.55, 1.4,
1.19, 1.75, 1.92, 1.63, 1.64, 1.34, 1.54, 1.66, 1.88, 1.9, 2.18,
2.03, 1.68, 0.9, 1.49, 1.41, 1.57, 0.94, 1.83, 1.6, NA, 1.98,
2.04, 1.64, 1.71, 1.97, 1.97, 1.87, 2.21, 2.1, 2.25, 2.1, 2.24,
1.23, 1.32, 1.47, 1.54, 1.38, 1.09, 1.41, NA, 1.23, 1.14, 1.63,
1.61, 1.42, 1.12, 1.74, 1.89, 1.4, 1.58, 1.71, 1.64),
cultivar = rep(c("Dinninup", "Yarloop", "Riverina", "Seaton"), each = 20)
)
df = df[complete.cases(df),] | How to do “broken stick linear regression” in R? | The package mcp was made just for scenarios like this. See below how I structured your data as df later.
Fit a change point model
First, let's define a slope followed by a joined plateau. We add varyi | How to do “broken stick linear regression” in R?
The package mcp was made just for scenarios like this. See below how I structured your data as df later.
Fit a change point model
First, let's define a slope followed by a joined plateau. We add varying (random) change point locations (the left-hand side of the equation):
model = list(
shoot ~ 1 + P, # intercept and slope
1 + (1|cultivar) ~ 0 # joined plateau
)
Now we fit the model with default priors:
library(mcp)
fit = mcp(model, data = df, iter = 5000)
Inspect the fit
Let's inspect the full fit for each cultivar:
plot(fit, facet_by = "cultivar", cp_dens = FALSE)
You can see raw parameter estimates using summary(fit) and the corresponding plot_pars(fit) (population-level). To focus on the varying change points (i.e., how each cultivar group deviates from the population-level change point (cp_1)), do ranef(fit) and plot_pars(fit, "varying").
Testing
Here are two ideas how to test hypotheses. If you want to test whether a given change point occurs later than another, do this to obtain Bayes Factors:
hypothesis(fit, "`cp_1_cultivar[Dinninup]` < `cp_1_cultivar[Yarloop]`")
I get a BF of around 16 for this one. If you want to test whether a varying change point improves predictive performance in general, fit a null model and use cross-validation:
model_null = list(shoot ~ 1 + P, ~ 0)
fit_null = mcp(model_null, data = df, iter = 5000)
# Leave-one-out cross-validation
fit$loo = loo(fit)
fit_null$loo = loo(fit_null)
loo::loo_compare(fit$loo, fit_null$loo)
You can read about mcp on the mcp website and the underlying models in the associated preprint.
Perhaps it would be appropriate to use an informative prior that the first slope is positive.
Data
I removed cases with NA values, and put the rest in a data.frame:
df = data.frame(
P = c(12.1, 12.1, 12.1, 12.1, 15.17, 15.17, 15.17, 15.17, 18.24, 18.24, 18.24,
18.24, 24.39, 24.39, 24.39, 24.39, 48.35, 48.35, 48.35, 48.35,
12.1, 12.1, 12.1, 12.1, 15.17, 15.17, 15.17, 15.17, 18.24, 18.24,
18.24, 18.24, 24.39, 24.39, 24.39, 24.39, 48.35, 48.35, 48.35,
48.35, 12.1, 12.1, 12.1, 12.1, 15.17, 15.17, 15.17, 15.17, 18.24,
18.24, 18.24, 18.24, 24.39, 24.39, 24.39, 24.39, 48.35, 48.35,
48.35, 48.35, 12.1, 12.1, 12.1, 12.1, 15.17, 15.17, 15.17, 15.17,
18.24, 18.24, 18.24, 18.24, 24.39, 24.39, 24.39, 24.39, 48.35,
48.35, 48.35, 48.35),
shoot = c(1.24, 1.12, 1.28, 1.28, 1.37,
1.4, 1.39, 1.34, 1.34, 1.53, 1.25, 1.4, 1.44, 1.83, 1.65, 1.71,
1.52, 1.75, 1.63, 1.7, 1.23, 1.22, 1.26, 0.89, 1.2, 1.55, 1.4,
1.19, 1.75, 1.92, 1.63, 1.64, 1.34, 1.54, 1.66, 1.88, 1.9, 2.18,
2.03, 1.68, 0.9, 1.49, 1.41, 1.57, 0.94, 1.83, 1.6, NA, 1.98,
2.04, 1.64, 1.71, 1.97, 1.97, 1.87, 2.21, 2.1, 2.25, 2.1, 2.24,
1.23, 1.32, 1.47, 1.54, 1.38, 1.09, 1.41, NA, 1.23, 1.14, 1.63,
1.61, 1.42, 1.12, 1.74, 1.89, 1.4, 1.58, 1.71, 1.64),
cultivar = rep(c("Dinninup", "Yarloop", "Riverina", "Seaton"), each = 20)
)
df = df[complete.cases(df),] | How to do “broken stick linear regression” in R?
The package mcp was made just for scenarios like this. See below how I structured your data as df later.
Fit a change point model
First, let's define a slope followed by a joined plateau. We add varyi |
52,789 | The best way of presenting the correlation / normality results of big data | This is a somewhat subjective question, but in general presenting a reader with cross correlations between 82 different factors is not particularly helpful, no matter how it is presented. The idea of exploratory analysis is to disseminate something useful to the reader without them having to necessarily go through all of the analysis themselves, and give them an idea of your thought process in what you did next. Depending on the nature of the work, you could include something verbose in an appendix, but in general you could be served here by:
Asking yourself whether a correlation matrix is really the best approach to present what you want to a reader: have you considered other feature selection methods and the context of the problem to help highlight potential leads in the data to then test and attempt to model?
Presenting only the variables of interest given what you observe, in a condensed version of the correlation matrix, perhaps with some notion of saying that only certain significant correlations are present
Grouping variables into logical groups and producing summary statistics, with the option for a reader to dig deeper by providing some qualification of these groupings in the report
A single number summary of the correlations you've observed would not be particularly informative, and as far as I know nothing really exists to express this unless I am misunderstanding the question. I could add 40 columns to your data containing noise and massively change any single observed metric or number; the nature of correlation is that the attributes you are talking about matter, not the dataset as a whole. | The best way of presenting the correlation / normality results of big data | This is a somewhat subjective question, but in general presenting a reader with cross correlations between 82 different factors is not particularly helpful, no matter how it is presented. The idea of | The best way of presenting the correlation / normality results of big data
This is a somewhat subjective question, but in general presenting a reader with cross correlations between 82 different factors is not particularly helpful, no matter how it is presented. The idea of exploratory analysis is to disseminate something useful to the reader without them having to necessarily go through all of the analysis themselves, and give them an idea of your thought process in what you did next. Depending on the nature of the work, you could include something verbose in an appendix, but in general you could be served here by:
Asking yourself whether a correlation matrix is really the best approach to present what you want to a reader, have you considered other feature selection methods and the context of the problem to help highlight potential leads in the data to then test and attempt to model?
Presenting only the variables of interest given what you observe, in a condensed version of the correlation matrix, perhaps with some notion of saying that only certain significant correlations are present
Grouping variables into logical groups and producing summary statistics, with the option for a reader to dig deeper by providing some qualification of these groupings in the report
A single number summary of the correlations you've observed would not be particularly informative, and as far as I know nothing really exists to express this unless I am misunderstanding the question. I could add 40 columns to your data containing noise and massively change any single observed metric or number; the nature of correlation is that the attributes you are talking about matter, not the dataset as a whole. | The best way of presenting the correlation / normality results of big data
This is a somewhat subjective question, but in general presenting a reader with cross correlations between 82 different factors is not particularly helpful, no matter how it is presented. The idea of |
52,790 | The best way of presenting the correlation / normality results of big data | I will throw my 2 cents as I had to battle with exactly the same problem for presenting results for a 77x77 variables correlation matrix.
I tried just about anything you can think of in terms of conveniently visualizing a 77x77 matrix by using R, SPSS and Excel. From my experience, there is simply no magical pill/graph that will result in an 82x82 matrix (your dimensions), which can be easily visually digested, just due to too many variables being present. Now I resorted to employing the following strategy, which I thought was reasonable.
Strategy
Like you, I was interested in showing highly-correlated pairs of variables. First, I looked through the literature to understand what is classed as a high correlation. Disclaimer: as with all rules of thumb, you should not blindly rely on such recommendations. Instead, take them as a starting point and then check papers/research in your respective field to determine/validate what is commonly thought to be highly correlated. For example, in the field of psychometrics, it is not uncommon to regard variables correlated at |.70| and greater as highly correlated. This is also supported by some more general published guidelines produced by Kenny.
Suppose you have chosen some criterion (e.g. |.70| or similar). Next, I resorted to writing R code that essentially gives me the pairs of variables that adhere to this criterion (attached at the very bottom). Unless everything in your correlation matrix is highly correlated, which I doubt by looking at the corrplot you attached, you end up with a neater and more manageable task. That is, the task can now be reduced to producing only, for example, a 20x20 correlation matrix of only highly correlated variables.
At this point, let's backtrack just a little to counter a potential critique, as someone may say: "Yes, but what about showing other lower correlations amongst the remaining variables (as you cannot ignore non-high correlations)?" Again, due to the difficulties of presenting an 82x82 matrix in a visually digestible way, I would recommend starting your presentation of results by describing the correlation matrix more generally, noting a few key patterns. For example,
Note dimensions of the correlation matrix.
State proportion of positive/negative correlations.
State the proportion of variables that essentially showed no correlation; weak correlation; moderate correlation; and high correlation (see the sketch just after this list).
Give examples in each case (it is useful).
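As an illustration of the proportions point above, here is a hedged base-R sketch (using the same mydata object as in the code at the very bottom); the thresholds are only examples and should be adapted to your field:
R <- cor(mydata, use = "pairwise.complete.obs")
r_upper <- R[upper.tri(R)]        # unique off-diagonal correlations
mean(r_upper > 0)                 # share of positive correlations
bins <- cut(abs(r_upper), breaks = c(0, 0.1, 0.3, 0.7, 1),
            labels = c("negligible", "weak", "moderate", "high"),
            include.lowest = TRUE)
round(prop.table(table(bins)), 2) # proportions to report in the general summary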
Now, at this point, notice how you ended your general summary of the correlation patterns with noting high correlations. From this point on, you can continue the flow and concentrate on the above, i.e. defining what you mean by high correlations and what made you consider a specific threshold to identify high correlations, then provide a more manageable and visually digestible corrplot of only highly-correlated pairs of variables. In addition to that, you may, optionally, rank-order highly correlated variables and present them in a simple table. From my presentation experience, it also looked very digestible and easy to follow. I will include a snapshot (note my table included extra metrics so I cut it and only attached a relevant part of the table; also my apologies for an over-sized picture, I could not figure out how to make it smaller).
### Identify highly-correlated pairs of variables
# __________________________________________________
rm(list = ls()) # clean environment
library(foreign) # load packages
library(dplyr)
library(psych)
CORRECTED_cor <- cor(mydata) # cor matrix
CORRECTED_cor <- tbl_df(CORRECTED_cor) # tabulate cor matrix
print(CORRECTED_cor)
CORRECTED_cor[lower.tri(CORRECTED_cor, diag = TRUE)] <- NA # remove lower triangle and diagonal elements
print(CORRECTED_cor)
CORRECTED_cor[abs(CORRECTED_cor) < 0.70] <- NA # correlations below |.70| NA
print(CORRECTED_cor)
sum(!is.na(CORRECTED_cor)) # only show cases with cor > |.70|
S <- which(abs(CORRECTED_cor) >= .70, arr.ind = T) # output rows and colums of correlations > |.70|
S
cbind(S, abs(CORRECTED_cor[S]))
final <- cbind(S, abs(CORRECTED_cor[S]))
final <- tbl_df(final)
print(final)
# Sanity check to make sure everything is correct
CORRECTED_cor[20, 27]
CORRECTED_cor[13, 73]
CORRECTED_cor[55, 71]
CORRECTED_cor[30, 36]
print(final)
# Sort by increasing order
arrange(final, V1)
# Final dataframe sorted by increasing order
final <- arrange(final, V1)
print(final) | The best way of presenting the correlation / normality results of big data | I will throw my 2 cents as I had to battle with exactly the same problem for presenting results for a 77x77 variables correlation matrix.
I tried just about anything you can think of in terms of conv | The best way of presenting the correlation / normality results of big data
I will throw my 2 cents as I had to battle with exactly the same problem for presenting results for a 77x77 variables correlation matrix.
I tried just about anything you can think of in terms of conveniently visualizing a 77x77 matrix by using R, SPSS and Excel. From my experience, there is simply no magical pill/graph that will result in an 82x82 matrix (your dimensions), which can be easily visually digested, just due to too many variables being present. Now I resorted to employing the following strategy, which I thought was reasonable.
Strategy
As you are, I was interested in showing highly-correlated pairs of variables. First, I looked through the literature to understand what is classed as a high correlation. Disclaimer: as with all rules of thumb, you should not blindly rely on such recommendations. Instead, take them as a starting point and then check papers/research in your respective field to determine/validate as to what is commonly thought to be highly correlated. For example, in the field of psychometrics, it is not uncommon to think that variables correlated as |.70| and greater are highly correlated. This is also supported by some more general published guidelines produced by Kenny.
Next, suppose you have chosen some criterion (e.g. |.70| or similar). Next, I resorted to writing an R code that will essentially give me the pairs of variables that adhere to this criterion (attached at the very bottom). Unless, everything in your correlation matrix is highly correlated, which I doubt by looking at the corrplot you attached, you end up with a neater and more manageable task. That is, now, the task can be reduced to producing only, for example, a 20x20 correlation matrix of only highly correlated variables.
At this point, let's backtrack just a little to counter a potential critique, as someone may say: "Yes, but what about showing other lower correlations amongst the remaining variables (as you cannot ignore non-high correlations)?" Again, due to the difficulties of presenting an 82x82 matrix in a visually digestible way, I would recommend to start presenting your results by describing the correlation matrix more generally, by noting a few key patterns. For example,
Note dimensions of the correlation matrix.
State proportion of positive/negative correlations.
State proportion of variables that essentially showed no correlation; weak correlation; moderate correlation; and high correlation.
Give examples in each case (it is useful).
Now, at this point, notice how you ended your general summary of the correlation patterns with noting high correlations. From this point on, you can continue the flow and concentrate on the above, i.e. defining what you mean by high correlations and what made you consider a specific threshold to identify high correlations, then provide a more manageable and visually digestible corrplot of only highly-correlated pairs of variables. In addition to that, you may, optionally, rank-order highly correlated variables and present them in a simple table. From my presentation experience, it also looked very digestible and easy to follow. I will include a snapshot (note my table included extra metrics so I cut it and only attached a relevant part of the table; also my apologies for an over-sized picture, I could not figure out how to make it smaller).
### Identify highly-correlated pairs of variables
__________________________________________________
rm(list = ls()) # clean environment
library(foreign) # load packages
library(dplyr)
library(psych)
CORRECTED_cor <- cor(mydata) # cor matrix
CORRECTED_cor <- tbl_df(CORRECTED_cor) # tabulate cor matrix
print(CORRECTED_cor)
CORRECTED_cor[lower.tri(CORRECTED_cor, diag = TRUE)] <- NA # remove lower triangle and diagonal elements
print(CORRECTED_cor)
CORRECTED_cor[abs(CORRECTED_cor) < 0.70] <- NA # correlations below |.70| NA
print(CORRECTED_cor)
sum(!is.na(CORRECTED_cor)) # only show cases with cor > |.70|
S <- which(abs(CORRECTED_cor) >= .70, arr.ind = T) # output rows and colums of correlations > |.70|
S
cbind(S, abs(CORRECTED_cor[S]))
final <- cbind(S, abs(CORRECTED_cor[S]))
final <- tbl_df(final)
print(final)
# Sanity check to make sure everything is correct
CORRECTED_cor[20, 27]
CORRECTED_cor[13, 73]
CORRECTED_cor[55, 71]
CORRECTED_cor[30, 36]
print(final)
# Sort by increasing order
arrange(final, V1)
# Final dataframe sorted by increasing order
final <- arrange(final, V1)
print(final) | The best way of presenting the correlation / normality results of big data
I will throw my 2 cents as I had to battle with exactly the same problem for presenting results for a 77x77 variables correlation matrix.
I tried just about anything you can think of in terms of conv |
52,791 | The best way of presenting the correlation / normality results of big data | Here is what I did once to report correlation results: prepare n-1 separate plots. On the x-axis of each plot, there were feature indexes from i+1 to n. On the y-axis of each plot, there were correlation results. I represented the plots in a grid. Sure, there were a lot of plots, but they were much more readable. (You can also combine some of them to make it look nice.)
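A hedged base-R sketch of that layout, using a small made-up matrix X just to show the idea:
set.seed(1)
X <- matrix(rnorm(200 * 6), ncol = 6)  # toy data with 6 features
R <- cor(X)
n <- ncol(R)
op <- par(mfrow = c(2, 3))             # arrange the n - 1 panels in a grid
for (i in seq_len(n - 1)) {
  idx <- (i + 1):n
  plot(idx, R[i, idx], ylim = c(-1, 1), pch = 19,
       xlab = "feature index", ylab = "correlation",
       main = paste("feature", i, "vs. later features"))
  abline(h = 0, lty = 2)
}
par(op)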
Here is what I did once to report correlations results: prepare n-1 separate plots. On the x-axis of each plot, there were feature indexes from i+1 to n. On the y-axis of each plot, there were correlation results. I represented the plots in a grid. Sure, there were a lot of plots but they we much more readable. (you can also combine some of them to make it look nice). | The best way of presenting the correlation / normality results of big data
Here is what I did once to report correlations results: prepare n-1 separate plots. On the x-axis of each plot, there were feature indexes from i+1 to n. On the y-axis of each plot, there were correl |
52,792 | The best way of presenting the correlation / normality results of big data | Not sure whether the following approach really will provide insight into the structure of the datasets looked at, but you could try to calculate a variance inflation factor for every column, based on the rest of the table as predictors in a linear regression model, and report the maximum VIF per data set (assuming rows are observations and columns are variables).
VIF is a direct measure of multi-collinearity in a data set (see e.g. https://en.wikipedia.org/wiki/Variance_inflation_factor). A VIF around 1, for example, means that there is no multi-collinearity in the data set, while a VIF of 100 would mean that only 1% of a variable's variance is unique and the other 99% can be explained by the predictors in the linear regression model. A common threshold for declaring substantial collinearity is VIF >= 10.
This approach of course breaks down at some point, either due to computational constraints (ncol or nrow too big), accumulation of missing values, which hampers modeling, or as soon as each column can be predicted perfectly from the others, i.e. for random data once ncol >= nrow.
But within some reasonable limits and for completely numeric data it should be possible as long as ncol << nrow, e.g.:
# create random matrix with dimensions as in OP above
m <- 21263
n <- 82
mydummytab <- matrix(rnorm(m*n), m, n)
# calculate variance inflation factors
library(preputils)
VIFs <- vifx(mydummytab)
VIFmax <- max(VIFs) | The best way of presenting the correlation / normality results of big data | Not sure whether the following approach really will provide insight into the structure of the datasets looked at, but you could try to calculate a variance inflation factor for every column, based on | The best way of presenting the correlation / normality results of big data
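As a hedged cross-check that needs no extra package: when each column is regressed on all the others, the VIFs equal the diagonal of the inverse correlation matrix, so for a complete numeric matrix the following should give essentially the same numbers:
VIFs_check <- diag(solve(cor(mydummytab)))  # VIF_j = 1 / (1 - R_j^2)
max(VIFs_check)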
Not sure whether the following approach really will provide insight into the structure of the datasets looked at, but you could try to calculate a variance inflation factor for every column, based on the rest of the table as predictors in a linear regression model, and report the maximum VIF per data set (assuming rows are observations and columns are variables).
VIF is a direct measure of multi-collinearity in a data set (see e.g. https://en.wikipedia.org/wiki/Variance_inflation_factor). VIF around 1 e.g. means that there is no multi-collinearity in the data set, while VIF=100 would mean that only 1% of a variable's variance is unique, while the other 99% can be explained by the predictors in the linear regression model. A common threshold for declaring substantial collinearity is VIF>=10.
This approach of course breaks down at some point either due to computational constraints (ncol or nrow too big), accumulation of missing values, which hampers modeling, or as soon as you have a complete separation of data points, i.e. for random data if ncol>=nrow.
But within some reasonable limits and for completely numeric data it should be possible as long as ncol << nrow, e.g.:
# create random matrix with dimensions as in OP above
m <- 21263
n <- 82
mydummytab <- matrix(rnorm(m*n), m, n)
# calculate variance inflation factors
library(preputils)
VIFs <- vifx(mydummytab)
VIFmax <- max(VIFs) | The best way of presenting the correlation / normality results of big data
Not sure whether the following approach really will provide insight into the structure of the datasets looked at, but you could try to calculate a variance inflation factor for every column, based on |
52,793 | Why use both $\sin$ and $\cos$ functions in Transformer positional encoding? | The authors write
We chose this function because we hypothesized it would allow the
model to easily learn to attend by relative positions, since for any
fixed offset k, $PE_{pos+k}$ can be represented as a linear function of $PE_{pos}$.
Indeed, $\sin(x+k) = u\sin(x) + v \cos(x)$ for constants $u, v$ that depend only on $k$, and likewise for $\cos(x+k)$, so this is true. If you only had $\cos$, it doesn't appear to me that you would have this property.
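To make the linearity explicit (a standard trigonometric identity, not taken from the paper): each sine/cosine pair at a given frequency transforms by a fixed rotation,
$$\begin{pmatrix} \sin(x+k) \\ \cos(x+k) \end{pmatrix} = \begin{pmatrix} \cos k & \sin k \\ -\sin k & \cos k \end{pmatrix} \begin{pmatrix} \sin x \\ \cos x \end{pmatrix},$$
so the matrix depends only on the offset $k$, not on the position $x$. With only $\cos$ terms, $\cos(x+k) = \cos x\cos k - \sin x\sin k$ still involves $\sin x$, so it cannot be written as a position-independent linear function of $\cos x$ alone.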
We chose this function because we hypothesized it would allow the
model to easily learn to attend by relative positions, since for any
fixed offset k, $PE_{pos+k}$ can be repres | Why use both $\sin$ and $\cos$ functions in Transformer positional encoding?
The authors write
We chose this function because we hypothesized it would allow the
model to easily learn to attend by relative positions, since for any
fixed offset k, $PE_{pos+k}$ can be represented as a linear function of $PE_{pos}$.
Indeed, $\sin(x+k) = u\sin(x) + v \cos(x)$ for some constants $u, v$, and likewise for $\cos(x+k)$, so this is true. If you only had $\cos$, it doesn't appear to me that you have this property. | Why use both $\sin$ and $\cos$ functions in Transformer positional encoding?
The authors write
We chose this function because we hypothesized it would allow the
model to easily learn to attend by relative positions, since for any
fixed offset k, $PE_{pos+k}$ can be repres |
52,794 | LMER model with uneven time points | Short answer: this should be no problem. I'm pretty sure you'll have somewhat lower power than if you had a perfectly balanced design, but there is no fundamental difficulty.
Here's an example where I subsample the (complete/balanced) sleepstudy data set and show that it works fine (and the results don't change very much).
The only specific issue that I can think of is that if you are fitting a model with autoregressive structure (e.g. in lme with the correlation argument), you'll have to switch from an "AR1" specification to a "continuous AR" or "Ornstein-Uhlenbeck" or "exponential correlation" specification (in lme from corAR1 to corCAR1; in glmmTMB from ar1() to ou()).
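For example, a hedged nlme sketch with a continuous-time AR(1) residual correlation (so the spacing of Days no longer needs to be regular), using the subsampled ss data constructed below:
library(nlme)
fit_car <- lme(Reaction ~ Days, random = ~ Days | Subject,
               correlation = corCAR1(form = ~ Days | Subject),
               data = ss)
summary(fit_car)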
This analysis of carbon exchange in tundra ecosystems shows an example of mixed models applied to an unevenly sampled time series.
subsample data set
library(lme4)
set.seed(101)
table(sleepstudy$Subject)
ss <- do.call("rbind",
lapply(split(sleepstudy,sleepstudy$Subject),
function(x) {
x[sort(sample(1:10,
size=rbinom(1,size=10,prob=0.7),
replace=FALSE)),]
}))
lapply(split(ss,ss$Subject), function(x) x$Days)
fit unbalanced/subsampled data set
lmer(Reaction~Days+(Days|Subject), data=ss)
## Linear mixed model fit by REML ['lmerMod']
## Formula: Reaction ~ Days + (Days | Subject)
## Data: ss
## REML criterion at convergence: 1130.586
## Random effects:
## Groups Name Std.Dev. Corr
## Subject (Intercept) 22.634
## Days 6.303 0.15
## Residual 21.628
## Number of obs: 119, groups: Subject, 18
## Fixed Effects:
## (Intercept) Days
## 250.30 10.44
fit full data set
lmer(Reaction~Days+(Days|Subject), data=sleepstudy)
## Linear mixed model fit by REML ['lmerMod']
## Formula: Reaction ~ Days + (Days | Subject)
## Data: sleepstudy
## REML criterion at convergence: 1743.628
## Random effects:
## Groups Name Std.Dev. Corr
## Subject (Intercept) 24.737
## Days 5.923 0.07
## Residual 25.592
## Number of obs: 180, groups: Subject, 18
## Fixed Effects:
## (Intercept) Days
## 251.41 10.47 | LMER model with uneven time points | Short answer: this should be no problem. I'm pretty sure you'll have somewhat lower power than if you had a perfectly balanced design, but there is no fundamental difficulty.
Here's an example where I | LMER model with uneven time points
Short answer: this should be no problem. I'm pretty sure you'll have somewhat lower power than if you had a perfectly balanced design, but there is no fundamental difficulty.
Here's an example where I subsample the (complete/balanced) sleepstudy data set and show that it works fine (and the results don't change very much).
The only specific issue that I can think of is that if you are fitting a model with autoregressive structure (e.g. in lme with the correlation argument), you'll have to switch from an "AR1" specification to a "continuous AR" or "Ornstein-Uhlenbeck" or "exponential correlation" specification (in lme from corAR1 to corCAR: in glmmTMB from ar() to ou()).
This analysis of carbon exchange in tundra ecosystems shows an example of mixed models applied to an unevenly sampled time series.
subsample data set
library(lme4)
set.seed(101)
table(sleepstudy$Subject)
ss <- do.call("rbind",
lapply(split(sleepstudy,sleepstudy$Subject),
function(x) {
x[sort(sample(1:10,
size=rbinom(1,size=10,prob=0.7),
replace=FALSE)),]
}))
lapply(split(ss,ss$Subject), function(x) x$Days)
fit unbalanced/subsampled data set
lmer(Reaction~Days+(Days|Subject), data=ss)
## Linear mixed model fit by REML ['lmerMod']
## Formula: Reaction ~ Days + (Days | Subject)
## Data: ss
## REML criterion at convergence: 1130.586
## Random effects:
## Groups Name Std.Dev. Corr
## Subject (Intercept) 22.634
## Days 6.303 0.15
## Residual 21.628
## Number of obs: 119, groups: Subject, 18
## Fixed Effects:
## (Intercept) Days
## 250.30 10.44
fit full data set
lmer(Reaction~Days+(Days|Subject), data=sleepstudy)
## Linear mixed model fit by REML ['lmerMod']
## Formula: Reaction ~ Days + (Days | Subject)
## Data: sleepstudy
## REML criterion at convergence: 1743.628
## Random effects:
## Groups Name Std.Dev. Corr
## Subject (Intercept) 24.737
## Days 5.923 0.07
## Residual 25.592
## Number of obs: 180, groups: Subject, 18
## Fixed Effects:
## (Intercept) Days
## 251.41 10.47 | LMER model with uneven time points
Short answer: this should be no problem. I'm pretty sure you'll have somewhat lower power than if you had a perfectly balanced design, but there is no fundamental difficulty.
Here's an example where I |
52,795 | LMER model with uneven time points | I was intrigued by the comment of @BenBolker that the power will be lower in an unbalanced design compared to a perfectly balanced one. The following simulation study seems to suggest that the power is the same:
simulate_mixed <- function (design = c("balanced", "unbalanced")) {
design <- match.arg(design)
n <- 100 # number of subjects
K <- 8 # number of measurements per subject
t_max <- 15 # maximum follow-up time
# we construct a data frame with the design:
DF <- data.frame(id = rep(seq_len(n), each = K),
sex = rep(gl(2, n/2, labels = c("male", "female")), each = K))
DF$time <- if (design == "unbalanced") {
runif(n * K, 0, t_max)
} else {
rep(seq(0, t_max, length.out = K), n)
}
X <- model.matrix(~ sex * time, data = DF)
Z <- model.matrix(~ time, data = DF)
betas <- c(-2.13, 0.5, 1, -0.5) # fixed effects coefficients
D11 <- 2 # variance of random intercepts
D22 <- 1 # variance of random slopes
D12 <- 0.8 # covariance random intercepts random slopes
D <- matrix(c(D11, D12, D12, D22), 2, 2)
# we simulate random effects
b <- MASS::mvrnorm(n, rep(0, ncol(Z)), D)
# linear predictor
eta_y <- drop(X %*% betas + rowSums(Z * b[DF$id, ]))
# we simulate normal longitudinal data
DF$y <- rnorm(n * K, mean = eta_y, sd = 1.5)
DF
}
run_simulation <- function (design, M = 2000L) {
library("lmerTest")
opt <- options(warn = (-1))
on.exit(options(opt))
p_values <- numeric(M)
for (i in seq_len(M)) {
set.seed(i + 2019)
data_i <- simulate_mixed(design = design)
fm <- lmer(y ~ sex * time + (time | id), data = data_i)
p_values[i] <- anova(fm)$`Pr(>F)`[3L]
}
mean(p_values < 0.05)
}
#####################################################################
#####################################################################
run_simulation("balanced")
#> [1] 0.691
run_simulation("unbalanced")
#> [1] 0.69 | LMER model with uneven time points | I was intrigued by the comment of @BenBolker that the power will be lower in an unbalanced design compared to a perfectly balanced one. The following simulation study seems to suggest that the power i | LMER model with uneven time points
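A quick addendum (not part of the original comparison): with M = 2000 replicates, the Monte Carlo standard error of each power estimate is roughly
p <- 0.69; M <- 2000
sqrt(p * (1 - p) / M)  # ~0.010, so 0.691 vs 0.690 is well within simulation noise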
I was intrigued by the comment of @BenBolker that the power will be lower in an unbalanced design compared to a perfectly balanced one. The following simulation study seems to suggest that the power is the same:
simulate_mixed <- function (design = c("balanced", "unbalanced")) {
design <- match.arg(design)
n <- 100 # number of subjects
K <- 8 # number of measurements per subject
t_max <- 15 # maximum follow-up time
# we construct a data frame with the design:
DF <- data.frame(id = rep(seq_len(n), each = K),
sex = rep(gl(2, n/2, labels = c("male", "female")), each = K))
DF$time <- if (design == "unbalanced") {
runif(n * K, 0, t_max)
} else {
rep(seq(0, t_max, length.out = K), n)
}
X <- model.matrix(~ sex * time, data = DF)
Z <- model.matrix(~ time, data = DF)
betas <- c(-2.13, 0.5, 1, -0.5) # fixed effects coefficients
D11 <- 2 # variance of random intercepts
D22 <- 1 # variance of random slopes
D12 <- 0.8 # covariance random intercepts random slopes
D <- matrix(c(D11, D12, D12, D22), 2, 2)
# we simulate random effects
b <- MASS::mvrnorm(n, rep(0, ncol(Z)), D)
# linear predictor
eta_y <- drop(X %*% betas + rowSums(Z * b[DF$id, ]))
# we simulate normal longitudinal data
DF$y <- rnorm(n * K, mean = eta_y, sd = 1.5)
DF
}
run_simulation <- function (design, M = 2000L) {
library("lmerTest")
opt <- options(warn = (-1))
on.exit(options(opt))
p_values <- numeric(M)
for (i in seq_len(M)) {
set.seed(i + 2019)
data_i <- simulate_mixed(design = design)
fm <- lmer(y ~ sex * time + (time | id), data = data_i)
p_values[i] <- anova(fm)$`Pr(>F)`[3L]
}
mean(p_values < 0.05)
}
#####################################################################
#####################################################################
run_simulation("balanced")
#> [1] 0.691
run_simulation("unbalanced")
#> [1] 0.69 | LMER model with uneven time points
I was intrigued by the comment of @BenBolker that the power will be lower in an unbalanced design compared to a perfectly balanced one. The following simulation study seems to suggest that the power i |
52,796 | In a multilevel linear regression, how does the reference level affect other levels/factors and which reference level ought to be selected? | Terminology
The model you fitted with the lm() function in R is actually a linear regression model, not a multilevel linear regression model. In statistics, we reserve the multilevel terminology for situations where the data exhibit a natural form of nesting (e.g., students nested in schools).
Factors in R
The Smoke predictor appears to be declared as a factor in your data (i.e., a categorical variable). Moreover, R thinks this factor is unordered (i.e., its categories are NOT assumed to follow a natural ordering). You can check this with the command:
str(surveyNA$Smoke)
Dummy Variables
For the reasons described above, when you include the predictor Smoke as a factor in your model, its effect on Height will be encoded using k - 1 = 4 - 1 = 3 dummy variables, where k = 4 represents the number of categories (or levels) of Smoke. These dummy variables are denoted by R as SmokeNever, SmokeOccas and SmokeRegul. As you can see, the dummy variable SmokeHeavy was excluded from the model to avoid collinearity. You could have included this dummy variable in the model but then you would have had to exclude the intercept:
survfit0 <- lm(Height~ -1 + Smoke, data = surveyNA)
When excluding the intercept from the model, summary(survfit0) will report the estimated mean Height for each category/level of Smoke.
When including the intercept in the model (your original fit of Height on Smoke), its summary will report the difference in estimated mean Height values between each non-reference category of Smoke (in your case, Never, Occas and Regul) and the reference category of Smoke (Heavy).
Alphabetical Ordering of Factor Levels
You will notice that, unless you were explicit about which dummy variable you wanted to exclude from your model, R made that choice for you. Indeed, R will always - by default - exclude the dummy variable corresponding to the category/level of the factor of interest which appears first in an alphabetical list. If you list the categories/levels of Smoke, you will see that Heavy is the first in this alphabetical list: Heavy, Never, Occas, Regul.
More Thoughtful Ordering of Factor Levels
Can you override R's default? Absolutely! How you do the overriding will depend on both your research questions and your data. In this particular example, perhaps you might be interested in determining whether the mean height for the subjects you are studying tends to increase with frequency of smoking. Then you would want these dummy variables in your model: SmokeOccas, SmokeRegul, SmokeHeavy. Having these in your model would enable you to see how, compared to never smoking, the mean height goes up as you move up the ladder from one frequency of smoking to another. To fit such a model, you would have to rearrange the levels of your Smoke factor (currently listed alphabetically):
surveyNA$Smoke <- factor(surveyNA$Smoke,
levels = c("Never",
"Occas",
"Regul",
"Heavy"))
However, before you fit your model with the re-arranged factor:
survfit1 <- lm(Height ~ 1 + Smoke, data = surveyNA)
there is one more thing you should do. You should check how many subjects in your data fall into each of your category of Smoke and keep an eye in particular on the reference category Never, whose dummy variable would be excluded from your current model:
table(surveyNA$Smoke)
If it turns out that your reference category, Never, has the fewest number of subjects represented in the data (much smaller than the number of subjects in all other categories), you could change your reference category to a richer category to avoid model estimation problems. This won't affect your ability to answer your research question - you'll be able to interrogate the model more after you fit it.
If it also turns out that one of the other categories (say Heavy) has only 3 subjects in it, then you could decide to merge it with its neighbouring category, Regul, to create a new category named Regul/Heavy. If you don't do this merging, the standard errors for the effect of SmokeHeavy will likely be very large, reflecting the uncertainty involved in estimating this effect with such few data points.
Impact of Re-Ordered Factor Levels
What happens to your model when you re-order the categories of Smoke so that the reference category changes?
The answer is that your model parametrization changes, which in turns impacts the inference you can conduct based on the output reported by R in the model summary.
To illustrate this, assume you consider two competing models. Model 1 uses Heavy as a reference level for the Smoke predictor, while Model 2 uses Never as a reference level. The two models clearly have different formulations, as they include different dummy variables.
Model 1:
$Height = \beta_0 + \beta_1*SmokeNever + \beta_2*SmokeOccas + \beta_3*SmokeRegul + \epsilon$
Model 2:
$Height = \beta_0^* + \beta_1^**SmokeReorderedHeavy + \beta_2^**SmokeReorderedOccas + \beta_3^**SmokeReorderedRegul + \epsilon^*$
In Model 1, for the target population, the mean Height for heavy smokers is quantified by the parameter $\beta_0$, the mean Height for those who never smoke is quantified by the parameter $\beta_0 + \beta_1$, the mean Height of occasional smokers is quantified by the parameter $\beta_0 + \beta_2$ and the mean Height for regular smokers is quantified by the parameter $\beta_0 + \beta_3$. In the summary for Model 1, R will report estimated values for the parameters $\beta_0$, $\beta_1$, $\beta_2$ and $\beta_3$, along with tests of hypotheses of the form:
Ho: $\beta_0 = 0$ versus Ha: $\beta_0 \neq 0$;
Ho: $\beta_1 = 0$ versus Ha: $\beta_1 \neq 0$;
Ho: $\beta_2 = 0$ versus Ha: $\beta_2 \neq 0$;
Ho: $\beta_3 = 0$ versus Ha: $\beta_3 \neq 0$.
In other words, R will estimate the mean value of Height for heavy smokers (quantified by $\beta_0$) and test whether it is different from 0. It will also (i) estimate the difference in mean Height values between those who never smoke and heavy smokers (quantified by $\beta_1$) and test whether this difference is different from 0, (ii) estimate the difference in mean Height values between those who occasionally smoke and heavy smokers (quantified by $\beta_2$) and test whether it is different from 0 and (iii) estimate the difference in mean Height values between those who regularly smoke and heavy smokers (quantified by $\beta_3$) and test whether it is different from 0.
In Model 2, the mean Height for those who never smoke is quantified by the parameter $\beta_0^*$, the mean Height for those who are heavy smokers is quantified by the parameter $\beta_0^*+ \beta_1^*$, the mean Height of occasional smokers is quantified by the parameter $\beta_0^* + \beta_2^*$ and the mean Height for regular smokers is quantified by the parameter $\beta_0^* + \beta_3^*$. In the summary for Model 2, R will report estimated values for the parameters $\beta_0^*$, $\beta_1^*$, $\beta_2^*$ and $\beta_3^*$, along with tests of hypotheses of the form:
Ho: $\beta_0^* = 0$ versus Ha: $\beta_0^* \neq 0$;
Ho: $\beta_1^* = 0$ versus Ha: $\beta_1^* \neq 0$;
Ho: $\beta_2^* = 0$ versus Ha: $\beta_2^* \neq 0$;
Ho: $\beta_3^* = 0$ versus Ha: $\beta_3^* \neq 0$.
Thus, for Model 2, R will estimate the mean value of Height for those who never smoke (quantified by $\beta_0^*$) and test whether it is different from 0. It will also (i) estimate the difference in mean Height values between heavy smokers and those who never smoke (quantified by $\beta_1^*$) and test whether it is different from 0, (ii) estimate the difference in mean Height values between those who occasionally smoke and those who never smoke (quantified by $\beta_2^*$) and test whether it is different from 0 and (iii) estimate the difference in mean Height values between those who regularly smoke and those who never smoke (quantified by $\beta_3^*$) and test whether it is different from 0.
This is the reason why I added my comment to this answer:
When you look at the significance of REGULAR in a model where HEAVY is treated as the reference, you are testing for a difference in the mean Height between subjects in your target population who are REGULAR smokers and those who are HEAVY smokers. When you look at the significance of REGULAR in a model where NEVER is treated as the reference, you are testing for a difference in the mean Height between subjects in your target population who are REGULAR smokers and those who are not smokers.
No wonder you get different p-values - you are performing different tests of hypotheses based on your choice of reference level!
However, as pointed out by @whuber in his comment, changing the model parametrization will not impact the estimated mean Height values for the four categories of Smoke reported by R. That is because:
$\beta_0$ in Model 1 and $\beta_0^*+ \beta_1^*$ in Model 2 are different parametrizations for the same thing, namely the mean Height of heavy smokers in the target population;
$\beta_0 + \beta_1$ in Model 1 and $\beta_0^*$ in Model 2 are different parametrizations for the same thing, namely the mean Height of those who never smoke in the target population;
$\beta_0 + \beta_2$ in Model 1 and $\beta_0^*+ \beta_2^*$ in Model 2 are different parametrizations for the same thing, namely the mean Height of occasional smokers in the target population;
$\beta_0 + \beta_3$ in Model 1 and $\beta_0^*+ \beta_3^*$ in Model 2 are different parametrizations for the same thing, namely the mean Height of regular smokers in the target population. | In a multilevel linear regression, how does the reference level affect other levels/factors and whic | Terminology
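A minimal sketch of this equivalence (illustrative object names, same surveyNA data): the coefficients change with the reference level, but the fitted group means do not.
d <- surveyNA
d$Smoke_heavyref <- relevel(factor(d$Smoke), ref = "Heavy")
d$Smoke_neverref <- relevel(factor(d$Smoke), ref = "Never")
fit_heavy <- lm(Height ~ Smoke_heavyref, data = d)
fit_never <- lm(Height ~ Smoke_neverref, data = d)
coef(fit_heavy)  # intercept = estimated mean Height of heavy smokers
coef(fit_never)  # intercept = estimated mean Height of those who never smoke
all.equal(unname(fitted(fit_heavy)), unname(fitted(fit_never)))  # TRUE: identical fitted means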
The model you fitted with the lm() function in R is actually a linear regression model, not a multilevel linear regression model. In statistics, we reserve the multilevel terminology for | In a multilevel linear regression, how does the reference level affect other levels/factors and which reference level ought to be selected?
Terminology
The model you fitted with the lm() function in R is actually a linear regression model, not a multilevel linear regression model. In statistics, we reserve the multilevel terminology for situations where the data exhibit a natural form of nesting (e.g., students nested in schools).
Factors in R
The Smoke predictor appears to be declared as a factor in your data (i.e., a categorical variable). Moreover, R thinks this factor is unordered (i.e., its categories are NOT assumed to follow a natural ordering). You can check this with the command:
str(surveyNA$Smoke)
Dummy Variables
For the reasons described above, when you include the predictor Smoke as a factor in your model, its effect on Height will be encoded using k - 1 = 4 - 1 = 3 dummy variables, where k = 4 represents the number of categories (or levels) of Smoke. These dummy variables are denoted by R as SmokeNever, SmokeOccas and SmokeRegul. As you can see, the dummy variable SmokeHeavy was excluded from the model to avoid collinearity. You could have included this dummy variable in the model but then you would have had to exclude the intercept:
survfit0 <- lm(Height~ -1 + Smoke, data = surveyNA)
When excluding the intercept from the model, summary(survfit0) will report the estimated mean Height for each category/level of Smoke.
When including the intercept in the model, summary(survfit0) will report the difference in estimated mean Height values between each non-reference category of Smoke (in your case, Never, Occas and Regul) and the reference category of Smoke (Heavy).
Alphabetical Ordering of Factor Levels
You will notice that, unless you were explicit about which dummy variable you wanted to exclude from your model, R made that choice for you. Indeed, R will always - by default - exclude the dummy variable corresponding to the category/level of the factor of interest which appears first in an alphabetical list. If you list the categories/levels of Smoke, you will see that Heavy is the first in this alphabetical list: Heavy, Never, Occas, Regul.
More Thoughtful Ordering of Factor Levels
Can you override R's default? Absolutely! How you do the overriding will depend on both your research questions and your data. In this particular example, perhaps you might be interested in determining whether the mean height for the subjects you are studying tends to increase with frequency of smoking. Then you would want these dummy variables in your model: SmokeOccas, SmokeRegul, SmokeHeavy. Having these in your model would enable you to see how, compared to never smoking, the mean height goes up as you move up the ladder from one frequency of smoking to another. To fit such a model, you would have to rearrange the levels of your Smoke factor (currently listed alphabetically):
surveyNA$Smoke <- factor(surveyNA$Smoke,
levels = c("Never",
"Occas",
"Regul",
"Heavy"))
However, before you fit your model with the re-arranged factor:
survfit1 <- lm(Height ~ 1 + Smoke, data = surveyNA)
there is one more thing you should do. You should check how many subjects in your data fall into each of your category of Smoke and keep an eye in particular on the reference category Never, whose dummy variable would be excluded from your current model:
table(surveyNA$Smoke)
If it turns out that your reference category, Never, has the fewest number of subjects represented in the data (much smaller than the number of subjects in all other categories), you could change your reference category to a richer category to avoid model estimation problems. This won't affect your ability to answer your research question - you'll be able to interrogate the model more after you fit it.
If it also turns out that one of the other categories (say Heavy) has only 3 subjects in it, then you could decide to merge it with its neighbouring category, Regul, to create a new category named Regul/Heavy. If you don't do this merging, the standard errors for the effect of SmokeHeavy will likely be very large, reflecting the uncertainty involved in estimating this effect with such few data points.
Impact of Re-Ordered Factor Levels
What happens to your model when you re-order the categories of Smoke so that the reference category changes?
The answer is that your model parametrization changes, which in turns impacts the inference you can conduct based on the output reported by R in the model summary.
To illustrate this, assume you consider two competing models. Model 1 uses Heavy as a reference level for the Smoke predictor, while Model 2 uses Never as a reference level. The two models clearly have different formulations, as they include different dummy variables.
Model 1:
$Height = \beta_0 + \beta_1*SmokeNever + \beta_2*SmokeOccas + \beta_3*SmokeRegul + \epsilon$
Model 2:
$Height = \beta_0^* + \beta_1^**SmokeReorderedHeavy + \beta_2^**SmokeReorderedOccas + \beta_3^**SmokeReorderedRegul + \epsilon^*$
In Model 1, for the target population, the mean Height for heavy smokers is quantified by the parameter $\beta_0$, the mean Height for those who never smoke is quantified by the parameter $\beta_0 + \beta_1$, the mean Height of occasional smokers is quantified by the parameter $\beta_0 + \beta_2$ and the mean Height for regular smokers is quantified by the parameter $\beta_0 + \beta_3$. In the summary for Model 1, R will report estimated values for the parameters $\beta_0$, $\beta_1$, $\beta_2$ and $\beta_3$, along with tests of hypotheses of the form:
Ho: $\beta_0 = 0$ versus Ha: $\beta_0 \neq 0$;
Ho: $\beta_1 = 0$ versus Ha: $\beta_1 \neq 0$;
Ho: $\beta_2 = 0$ versus Ha: $\beta_2 \neq 0$;
Ho: $\beta_3 = 0$ versus Ha: $\beta_3 \neq 0$.
In other words, R will estimate the mean value of Height for heavy smokers (quantified by $\beta_0$) and test whether it is different from 0. It will also (i) estimate the difference in mean Height values between those who never smoke and heavy smokers (quantified by $\beta_1$) and test whether this difference is different from 0, (ii) estimate the difference in mean Height values between those who occasionally smoke and heavy smokers (quantified by $\beta_2$) and test whether it is different from 0 and (iii) estimate the difference in mean Height values between those who regularly smoke and heavy smokers (quantified by $\beta_3$) and test whether it is different from 0.
In Model 2, the mean Height for those who never smoke is quantified by the parameter $\beta_0^*$, the mean Height for heavy smokers is quantified by the parameter $\beta_0^* + \beta_1^*$, the mean Height of occasional smokers is quantified by the parameter $\beta_0^* + \beta_2^*$ and the mean Height for regular smokers is quantified by the parameter $\beta_0^* + \beta_3^*$. In the summary for Model 2, R will report estimated values for the parameters $\beta_0^*$, $\beta_1^*$, $\beta_2^*$ and $\beta_3^*$, along with tests of hypotheses of the form:
Ho: $\beta_0^* = 0$ versus Ha: $\beta_0^* \neq 0$;
Ho: $\beta_1^* = 0$ versus Ha: $\beta_1^* \neq 0$;
Ho: $\beta_2^* = 0$ versus Ha: $\beta_2^* \neq 0$;
Ho: $\beta_3^* = 0$ versus Ha: $\beta_3^* \neq 0$.
Thus, for Model 2, R will estimate the mean value of Height for those who never smoke (quantified by $\beta_0^*$) and test whether it is different from 0. It will also (i) estimate the difference in mean Height values between heavy smokers and those who never smoke (quantified by $\beta_1^*$) and test whether it is different from 0, (ii) estimate the difference in mean Height values between those who occasionally smoke and those who never smoke (quantified by $\beta_2^*$) and test whether it is different from 0, and (iii) estimate the difference in mean Height values between those who regularly smoke and those who never smoke (quantified by $\beta_3^*$) and test whether it is different from 0.
This is the reason why I added my comment to this answer:
When you look at the significance of REGULAR in a model where HEAVY is treated as the reference, you are testing for a difference in the mean Height between subjects in your target population who are REGULAR smokers and those who are HEAVY smokers. When you look at the significance of REGULAR in a model where NEVER is treated as the reference, you are testing for a difference in the mean Height between subjects in your target population who are REGULAR smokers and those who are not smokers.
No wonder you get different p-values - you are performing different tests of hypotheses based on your choice of reference level!
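To see this concretely, here is a small sketch (not part of the original answer) that fits the two parametrizations and compares what R reports; it assumes Smoke currently has Never as its first level, as set up above:
m_never <- lm(Height ~ Smoke, data = surveyNA)                          # Never is the reference
m_heavy <- lm(Height ~ relevel(Smoke, ref = "Heavy"), data = surveyNA)  # Heavy is the reference
summary(m_never)$coefficients  # different coefficients, standard errors and p-values ...
summary(m_heavy)$coefficients
all.equal(fitted(m_never), fitted(m_heavy))  # ... but identical fitted group means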
However, as pointed out by @whuber in his comment, changing the model parametrization will not impact the estimated mean Height values for the four categories of Smoke reported by R. That is because:
$\beta_0$ in Model 1 and $\beta_0^*+ \beta_1^*$ in Model 2 are different parametrizations for the same thing, namely the mean Height of heavy smokers in the target population;
$\beta_0 + \beta_1$ in Model 1 and $\beta_0^*$ in Model 2 are different parametrizations for the same thing, namely the mean Height of those who never smoke in the target population;
$\beta_0 + \beta_2$ in Model 1 and $\beta_0^*+ \beta_2^*$ in Model 2 are different parametrizations for the same thing, namely the mean Height of occasional smokers in the target population;
$\beta_0 + \beta_3$ in Model 1 and $\beta_0^*+ \beta_3^*$ in Model 2 are different parametrizations for the same thing, namely the mean Height of regular smokers in the target population. | In a multilevel linear regression, how does the reference level affect other levels/factors and whic
Terminology
The model you fitted with the lm() function in R is actually a linear regression model, not a multilevel linear regression model. In statistics, we reserve the multilevel terminology for |
52,797 | Why isn't standard deviation $\frac{1}{n}\sqrt{\sum^n_{i=1}(x_i - \mu)^2}$? | You are missing the point that the definition of the standard deviation is the square root of the variance $V$ which is defined as
$$V = \frac 1n \sum_{i=1}^n (x_i-\mu)^2.$$
So why is $V$ defined the way it is? Well, a standard model is that the $x_i$ are a random sample from a distribution with known mean $\mu$ and unknown variance $\sigma^2 = E[(X-\mu)^2]$ and so
$$V = \frac 1n \sum_{i=1}^n (X_i-\mu)^2$$
is a random variable whose expected value $E[V]$ is just $\sigma^2$.
Today, the random variable $V$ happens to take the value $\frac 1n \sum_{i=1}^n (x_i-\mu)^2$, and we use this value as an estimate of $\sigma^2$. We also call $V$ the variance of the random sample.
Be that as it may, you are averaging too late by dividing the square root of the sum $\sum_{i=1}^n (x_i-\mu)^2$ by $n$; you need to do the averaging right after the sum has been computed rather than after the square-rooting has been done. | Why isn't standard deviation $\frac{1}{n}\sqrt{\sum^n_{i=1}(x_i - \mu)^2}$? | You are missing the point that the definition of the standard deviation is the square root of the variance $V$ which is defined as
$$V = \frac 1n \sum_{i=1}^n (x_i-\mu)^2.$$
So why is $V$ defined the | Why isn't standard deviation $\frac{1}{n}\sqrt{\sum^n_{i=1}(x_i - \mu)^2}$?
You are missing the point that the definition of the standard deviation is the square root of the variance $V$ which is defined as
$$V = \frac 1n \sum_{i=1}^n (x_i-\mu)^2.$$
So why is $V$ defined the way it is? Well, a standard model is that the $x_i$ are a random sample from a distribution with known mean $\mu$ and unknown variance $\sigma^2 = E[(X-\mu)^2]$ and so
$$V = \frac 1n \sum_{i=1}^n (X_i-\mu)^2\\$$
is a random variable whose expected value $E[V]$ is just $\sigma^2$.
Today, the random variable $V$ happens to have value $\frac 1n \sum_{i=1}^n (x_i-\mu)^2$ and we are using this value as an estimate of $\sigma^2$. We are also calling $V$ as the variance of the random sample.
Be that as it may, you are averaging too late by dividing the square root of the sum $\sum_{i=1}^n (x_i-\mu)^2$ by $n$; you need to do the averaging right after the sum has been computed rather than after the square-rooting has been done. | Why isn't standard deviation $\frac{1}{n}\sqrt{\sum^n_{i=1}(x_i - \mu)^2}$?
You are missing the point that the definition of the standard deviation is the square root of the variance $V$ which is defined as
$$V = \frac 1n \sum_{i=1}^n (x_i-\mu)^2.$$
So why is $V$ defined the |
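A quick numerical sketch of the "averaging too late" point above (my own illustration, using a made-up sample with known mean $\mu = 0$ and true $\sigma = 1$):
set.seed(1)
mu <- 0
x  <- rnorm(400, mean = mu)            # sample from a distribution with known mean mu
sqrt(mean((x - mu)^2))                 # average first, then take the square root: close to the true sigma = 1
sqrt(sum((x - mu)^2)) / length(x)      # the proposed formula averages too late and shrinks like 1/sqrt(n)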
52,798 | Why isn't standard deviation $\frac{1}{n}\sqrt{\sum^n_{i=1}(x_i - \mu)^2}$? | An example might illustrate the point
Suppose we have a population where the relative frequency of being
value $20$ is $0.1$
value $25$ is $0.6$
value $30$ is $0.3$
I can tell you that the mean is $\mu = \sum_j x_j p_j = 26$, the variance is $\sigma^2 = \sum_j (x_j-\mu)^2 p_j = 9$ and the standard deviation is $\sigma = 3$, no matter how large the total population is. For example:
with a population of $n=100$, there are $10$ copies of $20$, $60$ copies of $25$ and $30$ copies of $30$. Using $\sqrt{\frac{1}{n}\sum^n_{i=1}(x_i - \mu)^2}$ I would get $3$ while with $\frac{1}{n}\sqrt{\sum^n_{i=1}(x_i - \mu)^2}$ you would get $0.3$
with a population of $n=10000$, there are $1000$ copies of $20$, $6000$ copies of $25$ and $3000$ copies of $30$. Using $\sqrt{\frac{1}{n}\sum^n_{i=1}(x_i - \mu)^2}$ I would still get $3$ while with $\frac{1}{n}\sqrt{\sum^n_{i=1}(x_i - \mu)^2}$ you would get $0.03$
It is desirable to get the same answer both times, since the relative frequencies and the dispersion in the population are the same; all that has changed is the size of the population.
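A small sketch (mine, not the answerer's) that reproduces these numbers in R:
mu <- 26
pop100   <- rep(c(20, 25, 30), times = c(10, 60, 30))        # population of n = 100
pop10000 <- rep(c(20, 25, 30), times = c(1000, 6000, 3000))  # population of n = 10000
sqrt(mean((pop100   - mu)^2))                    # 3
sqrt(mean((pop10000 - mu)^2))                    # still 3
sqrt(sum((pop100   - mu)^2)) / length(pop100)    # 0.3
sqrt(sum((pop10000 - mu)^2)) / length(pop10000)  # 0.03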
Your suggestion starts to make sense if we start talking about taking samples, estimating the mean from the sample, and wanting a measure of the possible error in that estimate of the mean. But that is a deeper question, and the statistic in question is then called the standard error of the mean rather than the standard deviation of the population or of the sample. | Why isn't standard deviation $\frac{1}{n}\sqrt{\sum^n_{i=1}(x_i - \mu)^2}$? | An example might illustrate the point
Suppose we have a population where the relative frequency of being
value $20$ is $0.1$
value $25$ is $0.6$
value $30$ is $0.3$
I can tell you that the mean is | Why isn't standard deviation $\frac{1}{n}\sqrt{\sum^n_{i=1}(x_i - \mu)^2}$?
An example might illustrate the point
Suppose we have a population where the relative frequency of being
value $20$ is $0.1$
value $25$ is $0.6$
value $30$ is $0.3$
I can tell you that the mean is $\mu = \sum_j x_j p_j =26$, the variance is $\sigma^2=\sum_j (x_j-\mu)^2 p_j =9$ and the standard deviation is $\sigma=3$ no matter what the total population is. For example
with a population of $n=100$, there are $10$ copies of $20$, $60$ copies of $25$ and $30$ copies of $30$. Using $\sqrt{\frac{1}{n}\sum^n_{i=1}(x_i - \mu)^2}$ I would get $3$ while with $\frac{1}{n}\sqrt{\sum^n_{i=1}(x_i - \mu)^2}$ you would get $0.3$
with a population of $n=10000$, there are $1000$ copies of $20$, $6000$ copies of $25$ and $3000$ copies of $30$. Using $\sqrt{\frac{1}{n}\sum^n_{i=1}(x_i - \mu)^2}$ I would still get $3$ while with $\frac{1}{n}\sqrt{\sum^n_{i=1}(x_i - \mu)^2}$ you would get $0.03$
It is desirable to get the same answer both times as the relative frequencies and the dispersion in the population is the same; all that has changed is the size of the population.
Your suggestion starts to make sense if we start talking about taking samples and estimating the mean from the sample and want a measure of the possible error in the estimate of the estimate of the mean. But that is a deeper question, and the statistic in question is then called the standard error of the mean rather than the standard deviation of the population or of the sample. | Why isn't standard deviation $\frac{1}{n}\sqrt{\sum^n_{i=1}(x_i - \mu)^2}$?
An example might illustrate the point
Suppose we have a population where the relative frequency of being
value $20$ is $0.1$
value $25$ is $0.6$
value $30$ is $0.3$
I can tell you that the mean is |
52,799 | Should I cross-validate metrics that were not optimised? | It is a good idea to bootstrap or cross-validate (e.g., 100 repeats of 10-fold cross-validation) indexes that were not optimized. For example, I recommend optimizing on a gold standard such as log-likelihood, penalized log-likelihood, or in a Bayesian model log-likelihood + log-prior. You can report measures such as pseudo $R^2$ that are just transformations of the gold standard objective function, and in addition do resampling validation on helpful indexes such as the $c$-index (concordance probability = AUROC), Brier score, and most of all, the full calibration curve. I do validation of smooth nonparametric calibration curves by bootstrapping 99 predicted values when using a probability model, i.e., to validate the absolute accuracy of predicted probabilities of 0.01, 0.02, ..., 0.99. Likewise you can show overfitting-corrected estimates of Brier score, calibration slope, mean squared error, and many other quantities. Details are in my RMS book and course notes. | Should I cross-validate metrics that were not optimised? | It is a good idea to bootstrap or cross-validate (e.g., 100 repeats of 10-fold cross-validation) indexes that were not optimized. For example, I recommend optimizing on a gold standard such as log-li | Should I cross-validate metrics that were not optimised?
It is a good idea to bootstrap or cross-validate (e.g., 100 repeats of 10-fold cross-validation) indexes that were not optimized. For example, I recommend optimizing on a gold standard such as log-likelihood, penalized log-likelihood, or in a Bayesian model log-likelihood + log-prior. You can report measures such as pseudo $R^2$ that are just transformations of the gold standard objective function, and in addition do resampling validation on helpful indexes such as the $c$-index (concordance probability = AUROC), Brier score, and most of all, the full calibration curve. I do validation of smooth nonparametric calibration curves by bootstrapping 99 predicted values when using a probability model, i.e., to validate the absolute accuracy of predicted probabilities of 0.01, 0.02, ..., 0.99. Likewise you can show overfitting-corrected estimates of Brier score, calibration slope, mean squared error, and many other quantities. Details are in my RMS book and course notes. | Should I cross-validate metrics that were not optimised?
It is a good idea to bootstrap or cross-validate (e.g., 100 repeats of 10-fold cross-validation) indexes that were not optimized. For example, I recommend optimizing on a gold standard such as log-li |
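As an illustration of this workflow for a binary outcome, a sketch using the rms package might look as follows (the data frame and variable names are made up, and argument details may vary between rms versions):
library(rms)
# hypothetical data frame d with a binary outcome y and predictors x1, x2
fit <- lrm(y ~ x1 + x2, data = d, x = TRUE, y = TRUE)  # keep design matrix and response for resampling
validate(fit, B = 200)          # bootstrap overfitting-corrected indexes: Dxy (c-index), calibration slope, Brier score, ...
cal <- calibrate(fit, B = 200)  # bootstrap-validated smooth calibration curve
plot(cal)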
52,800 | Should I cross-validate metrics that were not optimised? | You will anyway need a validation (verification) of the performance of the optimized model. Regardless of the testing scheme you employ for this (resampling/[outer] cross validation/[outer] out-of-bootstrap, single train/test split, validation study), this is where you evaluate the performance for all parameters of interest, i.e. $f$ and $g$.
A slight exception is parameters that are not calculated from test cases but rather from the model itself (say, some measure of model complexity). These are of course calculated directly for the final model (in your case: the final model for each of the algorithms). Nevertheless, I'd also calculate them for the out-of-bootstrap or cross validation surrogate models in order to check whether they are stable and whether they differ between the surrogate models and the final model.
In addition, it may be interesting/important to study how $g$ evolves alongside the $f$ optimization, so it may be worthwhile to compute $g$ also during the [inner] cross validation inside the optimization of $f$. (That is, if that computation is feasible). | Should I cross-validate metrics that were not optimised? | You will anyways need a validation (verification) of the performance of the optimized model. Regardless of the testing scheme you employ for this (resampling/[outer] cross validation/[outer] out-of-bo | Should I cross-validate metrics that were not optimised?
You will anyways need a validation (verification) of the performance of the optimized model. Regardless of the testing scheme you employ for this (resampling/[outer] cross validation/[outer] out-of-bootstrap, single train/test split, validation study), this is where you evaluate the performance for all parameters of interest, i.e. $f$ and $g$.
A slight exception are parameters that are not calculated from test cases but rather fromt the model itself (say, some measure of model complexity). These are of course calculated directly for the final model (in your case: final model for each of the algorithms). Nevertheless, I'd also calculate them for out-of-bootstrap or cross validation surrogate models in order to check whether they are stable and possibly different between surrogate models and final model.
In addition, it may be interesting/important to study how $g$ evolves alongside the $f$ opimization, so it may be worth while to compute $g$ also during the [inner] cross validation inside the optimization of $f$. (That is, if that computation is feasible). | Should I cross-validate metrics that were not optimised?
You will anyways need a validation (verification) of the performance of the optimized model. Regardless of the testing scheme you employ for this (resampling/[outer] cross validation/[outer] out-of-bo |
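As a sketch of that last point (mine, not part of the answer): while tuning a hyperparameter on the optimized metric $f$ in the inner cross validation, the monitored metric $g$ can be recorded for each candidate at essentially no extra cost. Here $f$ is the cross-validated RMSE and $g$ the MAE, on made-up data:
set.seed(42)
n <- 200
x <- runif(n, -2, 2)
y <- sin(2 * x) + rnorm(n, sd = 0.4)          # made-up regression data
folds   <- sample(rep(1:5, length.out = n))   # inner 5-fold cross validation
degrees <- 1:6                                # hyperparameter candidates (polynomial degree)
res <- t(sapply(degrees, function(d) {
  err <- sapply(1:5, function(k) {
    fit  <- lm(y ~ poly(x, d), subset = folds != k)
    pred <- predict(fit, newdata = data.frame(x = x[folds == k]))
    c(f = sqrt(mean((y[folds == k] - pred)^2)),  # f: RMSE, the metric being optimized
      g = mean(abs(y[folds == k] - pred)))       # g: MAE, merely monitored
  })
  rowMeans(err)
}))
cbind(degree = degrees, res)  # choose the degree that minimizes f, but also inspect how g evolves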