idx | question | answer
---|---|---
54,701 | Flawed multiple linear regression in academia? Heteroscedasticity's effect on p-value? | This isn't heteroscedasticity you are looking at, but truncation.
You can see this very clearly in the first plot: No combination of the fitted + residual exceeds a certain number, causing this sudden imaginary diagonal line, past which no observations exist. In the scale-location plot, this strange shape reveals that the data are truncated at $1$.
It is easy to simulate some truncated data and show that the diagnostic plots indeed display this diagonal cutoff, as well as the strange V-shape in the scale-location plot:
set.seed(1234)
n <- 1000
beta_0 <- 1.5
beta_1 <- 0.5
x <- rnorm(n)
y <- beta_0 + beta_1 * x + rnorm(n, 0, 0.5)
y <- pmin(y, 1)  # truncate the response at 1
plot(lm(y ~ x))  # diagnostic plots show the diagonal cutoff and V-shape
The real question isn't what to conclude from these diagnostic plots, but rather what these data are. If you include a reference to the paper you read, we could see why the data are bounded, and whether that renders their conclusions invalid or not.
Edit: In the comments you explained these are ratios. That gives you the actual answer to whether their approach is flawed (it probably is). Rather than an ordinary linear model, the authors should probably have used e.g. logistic regression using the original values that made up these ratios.
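To make that suggestion concrete, here is a hedged sketch (not from the original answer) of a binomial GLM fitted to the counts underlying such ratios; successes, trials, x and dat are placeholder names for whatever the paper actually used.
# Hypothetical sketch: model the ratio successes/trials on the count scale
# with a binomial GLM rather than an ordinary linear model
fit <- glm(cbind(successes, trials - successes) ~ x,
           family = binomial, data = dat)
summary(fit)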
54,702 | Is classification using linear regression called logistic regression or linear discriminant analysis? | They are both close, but in different ways.
If you run ordinary least-squares regression with a binary class variable as the outcome (label) variable, you get exactly the 2-class case of linear discriminant analysis. So LDA (in the 2-class case) is linear regression run on a classification problem. It's conceptually different from linear regression in that the original derivation of LDA uses assumptions about the distribution of the predictor (feature) variables, which regression does not.
Logistic regression is a natural generalisation of linear regression to binary data, in which you model the mean of the outcome variable (which is the probability that it is 1 vs 0) using a linear combination of predictors, but with a 'link' function in between so that the probability stays between 0 and 1. Like linear regression, it's a special case of the generalised linear model, and wasn't derived based on assumptions about the distributions of predictor variables.
So, LDA is computationally just linear regression as applied to a classification problem, but it's quite different as a model; logistic regression is closer as a model, but less similar computationally.
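That computational equivalence can be checked numerically. Here is a hedged sketch (not part of the original answer, and assuming the MASS package is available): in the two-class case the OLS slope vector for a 0/1 outcome is proportional to the LDA discriminant direction.
# Sketch: OLS on a 0/1 outcome vs. two-class LDA give proportional directions
library(MASS)
set.seed(1)
n <- 200
x1 <- rnorm(n); x2 <- rnorm(n)
cls <- rbinom(n, 1, plogis(x1 - x2))    # simulated two-class labels
ols_dir <- coef(lm(cls ~ x1 + x2))[-1]  # OLS slopes (drop the intercept)
lda_dir <- lda(factor(cls) ~ x1 + x2)$scaling[, 1]
ols_dir / lda_dir                       # elementwise ratio is (near) constant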
54,703 | Bernoulli ML parameter estimation from indirect observations | The model is a mixture of Bernoullis, with likelihood
$$L(p)=\prod_{t=1}^n \{pa_t+(1-p)b_t\}$$
a polynomial of degree $n$ in $p$.
Since this distribution is not an exponential family, there is no sufficient statistic of fixed dimension and hence no way to update the maximum likelihood estimator in the way you describe.
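Even without a sequential update, the MLE can be recomputed at any time by maximising the full product numerically. A minimal sketch (not from the original answer), with made-up values standing in for the per-observation components $a_t$ and $b_t$:
# Sketch: numerical maximisation of L(p) = prod_t { p*a_t + (1-p)*b_t }
set.seed(1)
n <- 50
a <- runif(n); b <- runif(n)  # made-up per-observation components
loglik <- function(p) sum(log(p * a + (1 - p) * b))
optimize(loglik, interval = c(0, 1), maximum = TRUE)$maximum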
As an aside, the Bayesian estimation of $p$ allows for a sequential update of the posterior distribution, if one uses a particle filter. Bernardo and Girón (1988) have an updating mechanism that is quite simple but also very approximate:
@InCollection{ bernardo:giron:1988,
author = "J.M. Bernardo and F.J. Giròn",
title = "A {B}ayesian analysis of simple mixture problems",
booktitle = "{B}ayesian Statistics 3",
pages = "67--78",
publisher = "Oxford University Press",
year = 1988,
editor = "J.M. Bernardo and M.H. DeGroot and D.V. Lindley and A.F.M. Smith"} | Bernouilli ML parameter estimation from indirect observations | The model is a mixture of Bernoullis, with likelihood
$$L(p)=\prod_{t=1}^n \{pa_t+(1-p)b_t\}$$
a polynomial of degree $n$ in $p$.
Since this distribution is not an exponential family, there is no suf | Bernouilli ML parameter estimation from indirect observations
The model is a mixture of Bernoullis, with likelihood
$$L(p)=\prod_{t=1}^n \{pa_t+(1-p)b_t\}$$
a polynomial of degree $n$ in $p$.
Since this distribution is not an exponential family, there is no sufficient statistic of fixed dimension and hence no way to update the maximum likelihood estimator in the way you describe.
As an aside, the Bayesian estimation of $p$ allows for a sequential update of the posterior distribution, if one uses a particle filter. Bernardo and Giròn (1988) have an updating mecchanism that is quite simple but also very approximate:
@InCollection{ bernardo:giron:1988,
author = "J.M. Bernardo and F.J. Giròn",
title = "A {B}ayesian analysis of simple mixture problems",
booktitle = "{B}ayesian Statistics 3",
pages = "67--78",
publisher = "Oxford University Press",
year = 1988,
editor = "J.M. Bernardo and M.H. DeGroot and D.V. Lindley and A.F.M. Smith"} | Bernouilli ML parameter estimation from indirect observations
The model is a mixture of Bernoullis, with likelihood
$$L(p)=\prod_{t=1}^n \{pa_t+(1-p)b_t\}$$
a polynomial of degree $n$ in $p$.
Since this distribution is not an exponential family, there is no suf |
54,704 | KL divergence for joint probability distributions? | KL divergence is defined between two distributions, period. Whether these are marginal or joint distributions is immaterial. You want them to have the same support. So you do the same as in the one-dimensional case. The asker says in a comment:
Thanks for your useful answer. In the KL divergence, we must calculate
$\log p/q$ for probability distributions $p$ and $q$; how can this
calculation be done for joint distributions? (Sorry if this is a basic
question!)
$p$, $q$ are density functions, so they have values that are non-negative real numbers. So the quotient $p/q$ (assuming it is defined where needed, which it will be when the support of $p$ is included in the support of $q$) is a non-negative real number.
That conclusion does not at all depend on how many arguments the density functions $p, q$ have (they will normally have the same number of arguments). So the calculation of KL divergence is in principle the same for marginal and joint distributions, although the multivariate case might in practice be more involved.
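To make the multivariate case concrete, here is a hedged sketch (not part of the original answer) of the closed-form KL divergence between two multivariate normal distributions; it is built just like the univariate formula, only with vectors and matrices.
# Sketch: KL( N(mu0, S0) || N(mu1, S1) ) in closed form
kl_mvn <- function(mu0, S0, mu1, S1) {
  k <- length(mu0)
  S1inv <- solve(S1)
  d <- mu1 - mu0
  as.numeric(0.5 * (sum(diag(S1inv %*% S0)) + t(d) %*% S1inv %*% d - k +
                    log(det(S1)) - log(det(S0))))
}
kl_mvn(c(0, 0), diag(2), c(1, 0), 2 * diag(2))  # ~ 0.44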
For some examples see
KL divergence between two multivariate Gaussians
Kullback-Leibler divergence between multivariate t and the multivariate normal?
Efficiently computing pairwise KL divergence between multiple diagonal-covariance Gaussian distributions
54,705 | What does it mean for OLS residuals to be independent from the fitted values? | First consider what the definition of independence of two vector-valued random variables $x$ and $y$ comes down to: the probability that $x$ is in some event $\mathcal A$ and $y$ is in some event $\mathcal B$ is the product of the chances of these events.
It helps to recast this in terms of conditional probabilities: independence means the chance that $y \in \mathcal B,$ conditional on $x\in\mathcal A,$ does not depend on $\mathcal A,$ no matter what $\mathcal B$ might be. (For technical reasons it is best to restrict this criterion to events where there is a nonzero chance $x\in\mathcal A.$)
When $x$ and $y$ are $1$-vectors -- that is, they are ordinary numerical random variables -- we can check independence by drawing a sample $(x_i,y_i),i=1,2,\ldots, n$ from the joint distribution of $(x,y)$ and looking at the scatterplot of the sample. The foregoing characterization in terms of conditional distributions means that as you scan left to right across this scatterplot, the vertical distributions of the points do not change. Be careful, though: the visual impression of the distribution can change because in some of these vertical windows there won't be many points and you will have a hard time seeing the distribution. You need to pay attention to the relative densities of the points within their vertical strips.
The archetypal example of independence is an uncorrelated bivariate Normal variable $(x,y).$ Here is a scatterplot of such a variable.
(The axis labels will be explained below.)
I have drawn it with an aspect ratio that gives the $x$ values the same amount of spread (horizontally) as the $y$ values (vertically). The lack of correlation is evidenced by the relatively even, circular cloud. In any narrow vertical strip, the relative numbers of points are given by the same Normal distribution -- same mean, same variance -- regardless of where the strip might be located.
Independence of longer vectors $x$ and $y$ implies independence of any (measurable) functions of them, say $f(x)$ and $g(y).$ Taking $f$ to be the function $\pi_i$ giving the value of coordinate $i$ and $g$ to be the function $\pi_j$ giving the value of coordinate $j$ ("projection functions"), we may look for lack of independence by reviewing a scatterplot of $(\pi_i(x), \pi_j(y)).$ The logic is this: when the scatterplot reveals lack of independence, that implies $(x,y)$ cannot be independent, either. When all such scatterplots suggest independence, that does not demonstrate independence of $(x,y)$ (but it does suggest it).
The first figure was drawn in this fashion. The two vectors involved are $(\hat y, \hat e)$ for an ordinary least squares regression with the model $y = \varepsilon$ using four observations with a single explanatory variable set to the values $1,2,3,$ and $4.$ The errors $\varepsilon$ were iid standard Normal. That figure plots the first predicted value $\hat y_1$ and the first residual $\hat e_1$ for $2000$ random instances of this model. To show that its appearance is no fluke, here is the plot of $\hat e_4$ against $\hat y_3$ from the same sample:
The theory implies all such scatterplots look like these nice even circular clouds when (a) the errors are iid Normal and (b) the model is fit using ordinary least squares.
On the other hand, after repeating this exercise but taking the error distribution to be iid $\Gamma(1)-1$ (a shifted Exponential with mean $0$), I obtained this version of the first figure:
This is an example of obvious lack of independence. For instance, when $\hat y_1$ is close to $-2$ (at the left of the figure), the residual $\hat e_1$ tends to be close to $1;$ but when $\hat y_1$ is much larger than $-2,$ the distribution of the residual $\hat e_1$ is much more spread out. The analog of the second figure, comparing the fourth residual to the third predicted value, also exhibits obvious lack of independence:
I wish to emphasize that the explanatory variables, the least squares parameters (intercept and slope), and the fitting method (least squares) are the same for these two models: the only thing that changed between the first two and second two figures was the shape of the error distributions from Normal to a shifted Gamma. (See the code below.) That change alone destroyed the independence between the predictions $\hat y$ and residuals $\hat e.$
After studying such scatterplots for the second model, given any of the predicted values $\hat y_i,$ you would be able to make predictions about any of the residuals $\hat e_j$ and they would be more accurate, on average, than not knowing $\hat y_i.$ For the first (Normal) model, though, information about any (or even all!) of the predicted values $\hat y_i$ by themselves would not give you useful information about any of the residuals, and vice versa: the residuals are not informative about the predicted values associated with them.
If you would like to create figures like these, here is the R code that produced them.
n <- 4 # Amount of data
x <- seq.int(n) # Explanatory variable values
y.0 <- 0*x/n # The model
k <- 1 # Gamma parameter (if used)
method <- "Normal"
# method <- paste0("Gamma(", k, ")")
#
# Sample the joint distribution.
#
n.sim <- 2000 # Sample size from the joint distribution of (explanatory, observed)
sim <- replicate(n.sim, {
if (method != "Normal") {
y <- y.0 + rgamma(length(y.0), k, k) - 1
} else {
y <- y.0 + rnorm(length(y.0))
}
fit <- lm(y ~ x )
c(predict(fit), residuals(fit))
})
sim <- array(sim, c(n, 2, n.sim)) # Each `sim[,,k]` has 2 columns of predictions and residuals
#
# Make a scatterplot of predictions and residuals from this sample.
#
i <- 3 # Component of the prediction vector to plot
j <- 4 # Component of the residual vector to plot
plot(sim[i,1,], sim[j,2,], las=2, pch=21, bg="#00000010", col="#00000020",
main=paste(method, "Errors"), cex.main=1.1,
xlab = bquote(hat(y)[.(i)]), ylab = "")
mtext(bquote(hat(e)[.(j)]), side=2, line=2.5, las=2)
54,706 | What does it mean for OLS residuals to be independent from the fitted values? | Geometrically, it means the error is orthogonal to the prediction.
Roughly, OLS finds a point in the column space of $X$ which is closest to $y$ (assuming $y$ is not already in the column space). In the included picture, the vector in the grey plane is the prediction, and the vector outside the plane (connected to the prediction via the dotted line) is the data.
The dotted line is the error. It is quite literally $y - \hat{y}$ and you'll notice that the error is orthogonal to the prediction. Independence in this case manifests as orthogonality.
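A quick numerical illustration (an added sketch, not from the original answer): for any OLS fit, the dot product of the fitted values and the residuals is zero up to rounding error.
# Sketch: fitted values and residuals of an OLS fit are orthogonal
set.seed(1)
x <- rnorm(50)
y <- 1 + 2 * x + rnorm(50)
fit <- lm(y ~ x)
sum(fitted(fit) * residuals(fit))  # essentially 0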
54,707 | Is there no such thing as a multivariate generalized linear mixed model? | Yes, there is such a thing as a Multivariate (multi-response) Generalized Linear Mixed Model (MGLMM)
Many popular software packages for fitting GLMMs are unable to handle multiple responses, especially those that work within the frequentist paradigm. However, if you adopt a Bayesian approach then there are a number of options, such as BUGS, JAGS, Stan and the R package MCMCglmm. The latter even has a good vignette: "MCMC Methods for Multi-response Generalized Linear Mixed Models: The MCMCglmm R Package":
https://cran.r-project.org/web/packages/MCMCglmm/vignettes/Overview.pdf
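For orientation, here is a rough sketch of what a bivariate call might look like, loosely following the vignette; y1, y2, x, id and dat are placeholder names, and the exact priors and arguments should be checked against the vignette before use.
# Hedged sketch of a multi-response (Gaussian + Poisson) MCMCglmm fit
library(MCMCglmm)
fit <- MCMCglmm(cbind(y1, y2) ~ trait - 1 + trait:x,  # separate effects per response
                random = ~ us(trait):id,              # correlated random effects
                rcov   = ~ us(trait):units,           # correlated residuals
                family = c("gaussian", "poisson"),
                data   = dat)
summary(fit)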
There are also a number of relevant journal papers:
Bailey, T.C. and Hewson, P.J., 2004. Simultaneous modelling of multiple traffic safety performance indicators by using a multivariate generalized linear mixed model. Journal of the Royal Statistical Society: Series A (Statistics in Society), 167(3), pp.501-517.
https://rss.onlinelibrary.wiley.com/doi/abs/10.1111/j.1467-985X.2004.0apm7.x
Gueorguieva, R., 2001. A multivariate generalized linear mixed model for joint modelling of clustered outcomes in the exponential family. Statistical Modelling, 1(3), pp.177-193.
https://journals.sagepub.com/doi/abs/10.1177/1471082x0100100302
Madsen, P.S.G.L., Sørensen, P., Su, G., Damgaard, L.H., Thomsen, H. and Labouriau, R., 2006, August. DMU-a package for analyzing multivariate mixed models. In 8th World Congress on Genetics Applied to Livestock Production (Vol. 247). Belo Horizonte.
http://wcgalp.org/system/files/proceedings/2010/dmu-package-analyzing-multivariate-mixed-models.pdf
[Not paywalled]
54,708 | Understand a statement about P value | If your data is $\mathcal{D}$ and your hypothesis is $H_0$, then the p-value is $p = \mathbb{P}(\mathcal{D}\mid H_0)$.
The $p$ value tells you the following:
If $H_0$ is true, how likely is the data I'm currently observing ?
So if $p$ is very low, it only means the data cannot easily happen in a world in which $H_0$ is true. This does not mean $H_0$ is wrong: the data itself $\mathcal{D}$ could be wrong. You have a choice: you either reject the theory $H_0$ or the data $\mathcal{D}$. If you instead want to compute $\mathbb{P}(H_0\mid\mathcal{D})$, you need to apply Bayes' rule
$$ \mathbb{P}(H_0\mid\mathcal{D}) \propto \mathbb{P}(\mathcal{D}\mid H_0)\mathbb{P}(H_0) =p\,\mathbb{P}(H_0)$$
"Working backwards" means that $\mathbb{P}(H_0\mid\mathcal{D}) = \mathbb{P}(\mathcal{D}\mid H_0)$ which is wrong in almost every scenario. To apply the formula, you need to compute $\mathbb{P}(H_0)$, this is what is meant by
The odds that a real effect was there in the first place
You however never have the value of $\mathbb{P}(H_0)$.
54,709 | Understand a statement about P value | Answering without equations:
The p value is a measure of surprise: given that the null hypothesis is true (by the design of the experiment), what is the chance that you stumble upon a value at least this extreme in your data?
You compute p from the sample data; you never have the complete (actual) data. If you could obtain the actual data, there would be no need for hypothesis testing in the first place. The sample is itself random and can come from any part of the actual distribution. Given this randomness, you cannot use the p value you computed to say that there was just a p*100% chance of the result being a false alarm.
Hope this answers your question.
54,710 | How is this connection between Beta and Binomial possible? | The function you have plotted is the kernel of a beta density function (i.e., it is a positive multiple of the beta density). Since you have really just plotted the binomial likelihood function for a particular observed outcome, from a Bayesian perspective your plotted function is proportionate to the posterior density that emerges from using a uniform prior for $p$. In Bayesian modelling terms, what you have illustrated here is that:
$$\underbrace{\text{Beta}(p | 15, 28)}_\text{Posterior} \ \propto \ \underbrace{\text{Bin}(14|41, p)}_\text{Likelihood} \times \underbrace{\text{U}(p|0,1)}_\text{Prior}.$$
(Note also that your last plot just appears to be a repetition of the first plot. Your code for the beta distribution uses the wrong parameters; see the parameterisation of the beta distribution to see why the parameters should be as shown here.) To plot the posterior density in this problem we can use the following R code:
#Plot posterior density
library(ggplot2)
XX <- seq(0, 1, by = 0.001)
DD <- dbeta(XX, shape1 = 15, shape2 = 28)
qplot(XX, DD, geom = 'line') + xlab('p') + ylab('Posterior Density')
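As an added check (not in the original answer), the proportionality can be verified numerically: the ratio of the binomial likelihood to the Beta(15, 28) density is the same for every value of $p$.
# Sketch: Bin(14 | 41, p) / Beta(p | 15, 28) is constant in p
p <- c(0.1, 0.3, 0.5, 0.7)
dbinom(14, size = 41, prob = p) / dbeta(p, shape1 = 15, shape2 = 28)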
54,711 | How to use a for loop in this Bernoulli exercise in R? [closed] | The instruction in the exercise to use a loop is bad advice. The rbinom function is already capable of simulating vectors of values, so there is no need for a loop. The simplest thing to do here is to create an $r \times N$ matrix of simulated Bernoulli random variables taking $N=100$ so that you have enough sample size to meet the requirements of the four specified values of $n$. Assuming you don't mind having nested simulations (which is not a problem) you then have all the simulated values you need to construct the figures. Here is some simple code to generate a "reproducible" matrix of simulated Bernoulli random variables. (Note that you can also use the sample.int function to efficiently simulate Bernoulli random variables.) It will not give you the exact outcomes used in the graphs in that book, but it will nonetheless give you reproducible simulated values.
#Set parameters
N <- 100
r <- 10000
PROB <- 0.78
#Create matrix for simulated values
set.seed(1)
SIMULATIONS <- matrix(rbinom(r*N, size = 1, prob = PROB), nrow = r, ncol = N)
colnames(SIMULATIONS) <- sprintf('Sample[%s]', 1:N)
Once you have simulated the matrix SIMULATIONS you then have $r$ rows of simulated values for $N$ sample values. You can obtain the relevant simulations of the sample means by using the rowMeans function on the relevant subsets of the matrix. You can then use appropriate plotting functions to construct the required plots. This gives similar results to the graphs you have shown.
#Create matrix of standardised sample means
STD.MEANS <- matrix(0, nrow = r, ncol = 4)
colnames(STD.MEANS) <- c('n[2]', 'n[5]', 'n[25]', 'n[100]')
STD.MEANS[, 1] <- sqrt(2)*(rowMeans(SIMULATIONS[, 1:2]) - PROB)/sqrt(PROB*(1-PROB))
STD.MEANS[, 2] <- sqrt(5)*(rowMeans(SIMULATIONS[, 1:5]) - PROB)/sqrt(PROB*(1-PROB))
STD.MEANS[, 3] <- sqrt(25)*(rowMeans(SIMULATIONS[, 1:25]) - PROB)/sqrt(PROB*(1-PROB))
STD.MEANS[, 4] <- sqrt(100)*(rowMeans(SIMULATIONS[, 1:100]) - PROB)/sqrt(PROB*(1-PROB))
#Plot the histograms
par(mfrow = c(2,2))
hist(STD.MEANS[, 1], prob = TRUE, col = "skyblue2", xlim = c(-5, 5),
main = '(n = 2)', xlab = 'Standardised Sample Mean')
curve(dnorm(x), add = TRUE, lwd = 2)
hist(STD.MEANS[, 2], prob = TRUE, col = "skyblue2", xlim = c(-5, 5),
main = '(n = 5)', xlab = 'Standardised Sample Mean')
curve(dnorm(x), add = TRUE, lwd = 2)
hist(STD.MEANS[, 3], prob = TRUE, col = "skyblue2", xlim = c(-5, 5),
main = '(n = 25)', xlab = 'Standardised Sample Mean')
curve(dnorm(x), add = TRUE, lwd = 2)
hist(STD.MEANS[, 4], prob = TRUE, col = "skyblue2", xlim = c(-5, 5),
main = '(n = 100)', xlab = 'Standardised Sample Mean')
curve(dnorm(x), add = TRUE, lwd = 2)
54,712 | How to use a for loop in this Bernoulli exercise in R? [closed] | To begin, I agree with @Ben's(+1) statements about avoiding explicit
loops when possible. I have used for loops because they seem
to be required for your exercise.
Standardization is done outside the for loop, using means and standard deviations from the 10,000 averages a.
Here is a simulation in R for the case $n = 25.$
set.seed(121)
n = 25; p = 0.78
r = 10^4; a = numeric(r)
for(i in 1:r) {
a[i] = mean(rbinom(n, 1, p))
}
mean(a); sd(a)
z = (a-mean(a))/sd(a)
cp = seq(-5.75, 5.75, length=13)
hdr = "n=25: Standardized Value of Sample Average"
hist(z, prob=T, br=cp, ylim=c(0,.4), col="skyblue2", main=hdr)
curve(dnorm(x), add=T, lwd=2)
Note about making histograms: I have used standard graphics from the base of R to make
the histogram and superimpose the standard normal density curve.
Even though $r = 10,000$ means have been generated so that a
has $r$ values, there are not many unique values in a--fifteen
in my simulation (some relatively rare). If the tied values are not equitably proportioned
among the histogram bins, you get some strange-looking histograms
that make the values of z look far from normal. By choosing 13 bins
I got a nice plot. (Roughly speaking, there are two z-values per bin, with some empty bins at the ends.)
length(unique(a))
[1] 15
table(a)
a
0.44 0.48 0.52 0.56 0.6 0.64 0.68 0.72 0.76 0.8 0.84 0.88 0.92 0.96 1
1 12 22 86 212 459 847 1381 1795 1914 1591 1037 495 123 25
54,713 | Is the following textbook definition of $p$-value correct? | The issue I have with this is that as it stands it is not a definition, as long as there is no formal definition of what "in favour of $H_1$" actually means. Furthermore, as you probably know, Fisher and others have defined tests and p-values without specifying an $H_1$.
Here's an attempt to make the "definition" correct. A test generally is defined by a test statistic $T$ and a "discrepancy" $d$ (see below), and a p-value is $P_{H_0}\{d(T,H_0)\ge d(t,H_0)\}$, where $d$ is a suitably defined discrepancy function between a value of the test statistic $T$ (where $t$ is the actual value observed in the data) and what is "expected" under the $H_0$.
One way of defining $T$ and $d$ is to set up an alternative $H_1$ and to choose $T$ and $d$ so that optimal rejection probability at any fixed level $\alpha$ is achieved under $H_1$. This is Neyman and Pearson's approach, and it may require side conditions such as the test being unbiased, because for example in the two-sided case otherwise uniform optimality under $H_1$ cannot be achieved.
Using the concept of unbiasedness, given $T$ and $d$ (which may or may not have been derived using a specific alternative), one can define an implicit (composite) alternative $H_1$ of any given test as all distributions $Q$ so that $Q\{d(T,H_0)\ge d(t,H_0)\}>P_{H_0}\{d(T,H_0)\ge d(t,H_0)\}$. I assume here that this can be fulfilled uniformly over all possible values of $t$ (probably it's good enough to relax this a bit by asking for "$\ge$" instead of "$>$", and "$>$" for at least one $t$ or something). Note that if we don't think about p-values but rather about $\alpha$-level testing for fixed $\alpha$, one can define an "implicit alternative" based on the critical value $t_\alpha$, which should always be possible; I haven't thought much about how much more restrictive the uniformity assumption is, but it seems to me that this is what is needed to make the definition in question valid.
Using this definition, it is simply the case that $t$ can be seen as more "in favour of $H_1$" if $d(t,H_0)$ is larger, and this makes the definition in the question correct. (The issue with composite $H_1$ such as $H_1:\ \mu\neq \mu_0$ when testing $H_0:\ \mu=\mu_0$ is just to define $d$ accordingly, for example using $d(T,H_0)=|T-\mu_0|$ rather than $T-\mu_0$ (or maybe $d(T,H_0)=(T-\mu_0)1(T-\mu_0>0)$, $T$ here being an estimator of $\mu$, if we insist on a discrepancy being non-negative) for $H_1:\ \mu>\mu_0$.)
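As a concrete instance of $P_{H_0}\{d(T,H_0)\ge d(t,H_0)\}$ (an added sketch, not part of the original answer), take $T$ to be the sample mean of $n$ observations with known standard deviation and $d(T,H_0)=|T-\mu_0|$:
# Sketch: two-sided p-value as P_H0( |T - mu0| >= |t - mu0| ) for a sample mean
mu0 <- 0; sigma <- 1; n <- 30
t_obs <- 0.4                  # made-up observed sample mean
d_obs <- abs(t_obs - mu0)
2 * pnorm(-d_obs, mean = 0, sd = sigma / sqrt(n))  # ~ 0.029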
See also my answer here.
54,714 | Is the following textbook definition of $p$-value correct? | That is the correct definition for a test with a simple null hypothesis. For a test with a composite null hypothesis (i.e., more than one possible parameter value in the null space) things are a bit more complicated: the p-value is the supremum, over the parameters in the null space, of the corresponding probabilities.
54,715 | Is the following textbook definition of $p$-value correct? | The more general definition of a p-value is
the p-value is the probability of getting a result that is at least as extreme as the observed result, provided that $H_0$ is correct.
The definition is not clear about what 'extreme' means. One particular p-value defines the degree of extremeness in terms of values that are more in favour of $H_1$. This gives the definition in your question:
the p-value is the probability of getting a result that is at least as much in favour of $H_1$ as the observed result, provided that $H_0$ is correct
This is not the definition of a p-value but a definition of a p-value.
It is a bit difficult to see what they mean by
at least as much in favour of $H_1$
One could view this definition in terms of the likelihood ratio test which is (for simplicity we use simple hypotheses):
$$P \left ( \frac{\mathcal{L}(H_1|X)}{\mathcal{L}(H_0|X)} \geq \frac{\mathcal{L}(H_1|x_{observed})}{\mathcal{L}(H_0|x_{observed})} \right)$$
The $p$-value (in a likelihood ratio test) is the probability of getting a result for which the likelihood ratio of the hypotheses $H_1$ and $H_0$ is at least as much as the observed result, provided that $H_0$ is correct.
I say it is not clear what they mean by 'at least as much in favour' because I initially thought of something different from the likelihood ratio:
- I would prefer to use phrasing in terms of that likelihood. The term 'at least as much in favour of $H_1$' confused me initially and made me think of the wrong $P \left ( {\mathcal{L}(H_1|X)}>{\mathcal{L}(H_1|x_{observed})} \right)$
Example, say we have a sample $X \sim N(\mu,1)$ to test the hypotheses that are $H_0:\mu = 0$ and $H_1: \mu =2$. Let the observation be $x = 3$, then the values that are at least as much in favour for $H_1$ are in between $1$ and $3$ and the probability for that under $H_0$ is $\Phi(3)-\Phi(1) \approx 0.157$. But with the likelihood ratio test we would not consider the values between $1$ and $3$ that are more in favour of $H_1$ and instead we would consider the values $>3$ for which the outcome is relatively more in favour of $H_1$ in comparison to $H_0$.
- The term 'in favour' also initially confused me because it implies that the observed result must be in favour of $H_1$ but that does not need to be the case. It can be that the values are in favour of $H_0$.
Example, say we have a sample $X \sim N(\mu,1)$ to test the hypotheses that are $H_0:\mu = 0$ and $H_1: \mu =10$. Let the observation be $x = 3$, then this is a value that is not in favour of $H_1$ (at least not compared to $H_0$). | Is the following textbook definition of $p$-value correct? | The more general definition of a p-value is
54,716 | Is the following textbook definition of $p$-value correct? | It's obviously a translation, but logically correct. P-value is probability, under $H_0$, of a more extreme result of the test statistic in the direction(s) of the alternative hypothesis than the observed value of the test statistic. [For a two-sided alternative, two probabilities are added to get the P-value.]
Consider the following normal samples (from R) and a Welch 2-sample t test to see
whether their sample means are significantly different; specifically to test
$H_0: \mu_1 = \mu_2$ against $H_a: \mu_1 < \mu_2.$
set.seed(1234)
x1 = rnorm(20, 100, 10)
x2 = rnorm(25, 110, 12)
t.test(x1, x2, alternative = "less")  # one-sided Welch test producing the output below
Welch Two Sample t-test
data: x1 and x2
t = -2.0301, df = 40.54, p-value = 0.02447
alternative hypothesis:
true difference in means is less than 0
95 percent confidence interval:
-Inf -1.046619
sample estimates:
mean of x mean of y
97.49336 103.62038
Under $H_0,$ the test statistic is approximately distributed as Student's t
distribution with 41 degrees of freedom. So one would reject $H_0$ at the
5% level if $T \le -1.683.$
qt(.05, 41)
[1] -1.682878
However, $T = -2.0301$ is even smaller than this 'critical value'. The P-value
is the probability $P(T \le -2.0301) \approx 0.0244,$ computed under $H_0.$
pt(-2.0301, 41)
[1] 0.0244342
In the figure below the P-value is the area under the density curve to the left of the vertical red line.
By contrast, if this were a two-sided test $H_0: \mu_1=\mu_2$ against $H_a: \mu_1 \ne \mu_2,$ then the P-value would be
$P(|T| \ge 2.0301) \approx 2(0.0244) = 0.0488.$ So the sample means differ significantly at the 5% level of significance.
t.test(x1, x2)
Welch Two Sample t-test
data: x1 and x2
t = -2.0301, df = 40.54, p-value = 0.04894
alternative hypothesis:
true difference in means is not equal to 0
95 percent confidence interval:
-12.22426952 -0.02977364
sample estimates:
mean of x mean of y
97.49336 103.62038
In the figure below the P-value is the sum of the areas outside
the vertical red lines.
Note: If this were a pooled two-sample t test, then the degrees of freedom for the t statistic under $H_0$ would be $\nu = n_1+n_2 - 2 = 43.$ Because this is a Welch t test and sample variances are
not exactly equal, the degrees of freedom are computed according to a formula that involves $n_1, n_2, S_1^2,$ and $S_2^2,$ giving
$\min(n_1-1,n_2-1) \le \nu \le n_1+n_2-2.$
For the current data, $\nu = 40.54.$ R shows fractional degrees of freedom; printed tables of t distributions and some software programs use only integer degrees of freedom.
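For reference, the approximation being alluded to is the Welch–Satterthwaite formula (a standard result, spelled out here for convenience):
$$\nu \approx \frac{\left(S_1^2/n_1 + S_2^2/n_2\right)^2}{\dfrac{(S_1^2/n_1)^2}{n_1-1} + \dfrac{(S_2^2/n_2)^2}{n_2-1}}.$$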
54,717 | If $X$ and $Y$ have the same marginal distribution, then do they have to have the same conditional distribution? | A simple counterexample:
$$
P(X = 1, Y = 2) = \frac{1}{3}\\
P(X = 2, Y = 3) = \frac{1}{3}\\
P(X = 3, Y = 1) = \frac{1}{3}
$$
Then $P(X \le 1 | Y = 2) = 1$, but $P(Y \le 1 | X = 2) = 0$.
54,718 | If $X$ and $Y$ have the same marginal distribution, then do they have to have the same conditional distribution? | How about $X=-Y = \begin{cases} 0 \\ 1 \end{cases}\quad$ each with probability $1/2.$
54,719 | If $X$ and $Y$ have the same marginal distribution, then do they have to have the same conditional distribution? | There are already answers with simple examples, so why one more? Because it is interesting to look at the general pattern. The wanted property is a kind of symmetry, so we should look for some asymmetrical joint distribution for $(X,Y)$. So if $(X,Y)$ has a permutable distribution in the sense that $(X,Y)$ and $(Y,X)$ have the same distribution, so that for the joint cumulative we have $F(x,y)=F(y,x)$ for all $(x,y)$, then the sought-after property will hold.
Let us use copulas. Let $F$ be the joint cdf (cumulative distribution function) and
$$ \DeclareMathOperator{\P}{\mathbb{P}}
C(u,v)= \P(F(X) \le u, F(Y)\le v)
$$
By the Fréchet–Hoeffding copula bounds (see linked wiki article above) we have
$$ W(u,v) \le C(u,v) \le M(u,v) $$
where $W(u,v)= \max(u+v-1,0)$ and $M(u,v)=\min(u,v)$. Both $W,M$ are copulas. $W$ describes the anti-monotonic case $X=U, Y=1-U$ for $U$ some uniform random variable. Now you can check that $W$ gives a counterexample. $M$ corresponds to $X=U, Y=U$ which is not a counterexample.
54,720 | If $X$ and $Y$ have the same marginal distribution, then do they have to have the same conditional distribution? | Not necessarily true. Let $X$ and $Y$ be discrete random variables that take the values in {1, 2, 3} each with probability $\frac{1}3$; i.e. they have a discrete uniform distribution. Consider the joint probability mass function represented by the matrix below where the element in row $i$ and column $j$ is $P[X=i, Y=j]$:
$$\frac{1}{60}\begin{bmatrix} 3 & 6 & 11 \\ 3 & 12 & 5 \\ 14 & 2 & 4 \end{bmatrix}$$
Note that all rows and columns add up to $\frac{1}3$ and therefore the marginal distributions are the discrete uniform as stated. Now calculate the conditional probability $$P[X\leq2\mid Y=1]=\frac{\frac{3}{60}+\frac{3}{60}}{\frac{3}{60}+\frac{3}{60}+\frac{14}{60}}=\frac{3}{10}.$$ On the other hand, $$P[Y\leq2\mid X=1]=\frac{\frac{3}{60}+\frac{6}{60}}{\frac{3}{60}+\frac{6}{60}+\frac{11}{60}}=\frac{9}{20}.$$
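These two conditional probabilities are easy to verify numerically in R (my own check, not part of the original answer):
P <- matrix(c(3, 6, 11,
              3, 12, 5,
              14, 2, 4), nrow = 3, byrow = TRUE) / 60   # P[i, j] = P(X = i, Y = j)
rowSums(P); colSums(P)          # both are 1/3, 1/3, 1/3, so the marginals match
sum(P[1:2, 1]) / sum(P[, 1])    # P(X <= 2 | Y = 1) = 0.3
sum(P[1, 1:2]) / sum(P[1, ])    # P(Y <= 2 | X = 1) = 0.45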
54,721 | Generating text from language model | "Weighted choice sampling" means that you sample each category with some predefined probability, so you basically sample from a categorical distribution. If each category has a fixed probability, there is not much else that you can do about it. There are some methods for special cases, like the Gumbel-max trick when the probabilities are unnormalized, or tricks to speed up sampling for a large number of categories, etc., but those do not seem to be related to your problem, as you do not mention any technical problems that need solving.
What we usually do when sampling from such language models, is we use softmax with temperature (see e.g. the blog post by Andrej Karpathy, this TensorFlow tutorial, or the Deep Learning with Python book by François Chollet for more details). The idea is pretty simple. When you make predictions from the model, you take the logits predicted by the final layer $z_i$ and pass them through softmax function, to transform them to probabilities i.e. $p_i = \exp(z_i) / \sum_j \exp (z_j)$. The thing that we change is we introduce a hyperparameter, the temperature $T$, so that the softmax function becomes:
$$
p_i = \frac{\exp( z_i\,/\,T )}{\sum_j \exp( z_j\,/\,T )}
$$
where $T=1$ leads to standard softmax, decreasing it makes the probabilities more extreme, hence more certain, so the samples are closer to the optimal values that would be predicted by the model, while increasing it leads to more diverse, "random" samples. Quoting Karpathy:
Temperature. We can also play with the temperature of the Softmax
during sampling. Decreasing the temperature from 1 to some lower
number (e.g. 0.5) makes the RNN more confident, but also more
conservative in its samples. Conversely, higher temperatures will give
more diversity but at cost of more mistakes (e.g. spelling mistakes,
etc).
Then, you use those probabilities the same way as you would use the raw probabilities.
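As a minimal sketch of how temperature sampling can be implemented (my own R illustration with made-up logits, not from the original answer):
softmax_T <- function(z, temp = 1) {
  z <- z / temp
  z <- z - max(z)                 # subtract the max for numerical stability
  exp(z) / sum(exp(z))
}
logits <- c(the = 2.0, a = 1.0, cat = 0.1)   # hypothetical logits for three tokens
probs <- softmax_T(logits, temp = 0.5)       # temp < 1 sharpens the distribution
sample(names(logits), size = 1, prob = probs)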
See also the paper by Hinton, Vinyals, and Dean for another example of how temperature is used for other purposes.
54,722 | Generating text from language model | There are multiple methods for sampling utterances from a trained language model (LM). What you're doing is certainly a valid approach, and fairly modern. That said, I'll just outline a few more approaches here that people have found empirically useful. These are commonly used in large LMs such as GPT-2 or RoBERTa.
As a formalism, we assume a max-probability decoding objective. There isn't usually a single best approach; good approaches are heavily task-dependent as well. However, each of these techniques targets failure modes in neural text generation, which can serve as a good heuristic for your own experimentation.
The Old.
Greedy decoding. At each time step, select the token with the highest probability. Fast, but trivially leads to non-diverse (and often suboptimal!) responses.
Beam search. At each time step, take the top $k$ generated utterances so far; use those as the starting point for search in the next iteration. Addresses the limitations of greedy decoding without blowing up the search space. Can lead to pathologically repetitive/non-diverse responses.
Pure sampling. This is what you're doing -- take a random choice weighted by the probability density generated at each time step. This actually results (empirically) in generated text with a similar token distribution as human-generated text (+ slightly higher perplexity) -- however, there is no guarantee of syntactic/grammatical coherence.
Newer approaches.
Softmax with temperature. Not a decoding algorithm, but a common trick. This is an extension to the above approaches that redistribute the probability mass used to sample tokens; @Tim has covered it extensively already on this thread.
Top-$k$ sampling. Builds off of weighted-choice sampling by only retaining $k$ words with the highest probability mass at each timestep and then sampling within that distribution. Lower $k$ leads to more generic output; higher $k$ leads to more diverse output. Pure sampling can be thought of as top-$V$ sampling ($V$ = size of vocabulary); greedy decoding is top-$1$ sampling.
Nucleus sampling. Based on a parameter $0 \le p \le 1$, aggregates the smallest set of words that have summed probability mass $p$. Can be thought of as a variation of top-k sampling with dynamic $k$.
Combinations of these are also valid -- top-k sampling is sometimes used with nucleus sampling, for example. You might also notice that softmax with temperature and nucleus sampling are both methods of redistributing the probability mass over the distribution of tokens; as a toy example, temperature decreases and lower p both have the effect of "sharpening" the distribution by dampening (or removing!) the likelihood of sampling rarer tokens.
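To make top-$k$ and nucleus sampling concrete, here is a small R sketch using a hypothetical next-token distribution (my own illustration, not from the original answer):
p <- c(the = 0.40, a = 0.25, cat = 0.15, dog = 0.10, runs = 0.06, quickly = 0.04)
top_k_sample <- function(p, k) {
  keep <- sort(p, decreasing = TRUE)[seq_len(k)]
  keep <- keep / sum(keep)                           # renormalise over the k most likely tokens
  sample(names(keep), size = 1, prob = keep)
}
nucleus_sample <- function(p, top_p = 0.9) {
  p <- sort(p, decreasing = TRUE)
  keep <- p[seq_len(which(cumsum(p) >= top_p)[1])]   # smallest set with mass >= top_p
  keep <- keep / sum(keep)
  sample(names(keep), size = 1, prob = keep)
}
top_k_sample(p, k = 3)
nucleus_sample(p, top_p = 0.9)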
Other variables to tune:
Length penalty. You can weight the probability scores of a sentence by a function of the length; without this weighting, max-probability decoding methods will favor shorter sentences (joint log-likelihood monotonically decreases as you add more tokens). This is a common scoring function:
$$\text{length\_penalty}(Y) = \frac{(5 + |Y|)^\alpha}{(5 + 1)^\alpha}$$
where $0 < \alpha < 1$, with $\alpha = 0$ reverting to vanilla beam search. "Famously" used in Google Translate.
Repetition penalty. Lower the chance of repetition by discounting the scores of previously-generated tokens. Proposed here; a little finicky empirically.
Min/max length. This is a quick-and-dirty way to ensure that your model generates text of an appropriate length; I used this personally to tune a summarization model.
Further reading.
Beam search Wikipedia page.
Holtzman, A. et al. The curious case of neural text degeneration. (HIGHLY recommended)
Stewart, Russell. Maximum Likelihood Decoding with RNNs - the good, the bad, and the ugly
See, Abigail. Natural Language Generation (CS224N Lecture 15, Stanford)
Von Platen, Patrick. How to generate text: using different decoding methods for language generation with Transformers
54,723 | Separating $X$ from $Y$ in $E[(X^T Y)^p]$ for $p = 3$ and $4$? | The answer is that for nonnegative integer values $P$, it is in principle possible, but gets progressively more complicated as $P$ increases. To this end, one can write
\begin{align}
\mathbf{E} \left[ \left( X^T Y \right)^P\right] &= \mathbf{E} \left[ \left( \sum_{i = 1}^D X_i Y_i \right)^P \right] \\
&= \mathbf{E} \left[ \sum_{i_1 = 1}^D \sum_{i_2 = 1}^D \cdots \sum_{i_P = 1}^D \prod_{p = 1}^P X_{i_p} Y_{i_p} \right] \\
&= \sum_{i_1 = 1}^D \sum_{i_2 = 1}^D \cdots \sum_{i_P = 1}^D \mathbf{E} \left[ \prod_{p = 1}^P X_{i_p} Y_{i_p} \right] \\
&= \sum_{i_1 = 1}^D \sum_{i_2 = 1}^D \cdots \sum_{i_P = 1}^D \mathbf{E} \left[ \prod_{p = 1}^P X_{i_p} \right] \cdot \mathbf{E} \left[ \prod_{p = 1}^P Y_{i_p} \right].
\end{align}
However, the expressions like $\prod_{p = 1}^P X_{i_p}$ become more challenging to characterize concisely in closed form as $P$ grows.
For a concrete example, consider when $P = 3$, and assume that the coordinates of $X$ and $Y$ are each iid. The above expression can then be simplified to
\begin{align}
\sum_{i_1 = 1}^D \sum_{i_2 = 1}^D \sum_{i_3 = 1}^D \mathbf{E} \left[ \prod_{p = 1}^3 X_{i_p} \right] \cdot \mathbf{E} \left[ \prod_{p = 1}^3 Y_{i_p} \right] &= D \cdot \mathbf{E} \left[ X^3 \right] \cdot \mathbf{E} \left[ Y^3 \right] \\
&+ 3D(D - 1) \cdot \mathbf{E} \left[ X \right] \cdot \mathbf{E} \left[ X^2 \right] \cdot \mathbf{E} \left[ Y \right] \cdot \mathbf{E} \left[ Y^2 \right] \\
&+ D(D-1)(D-2) \cdot \mathbf{E} \left[ X \right] ^3 \cdot \mathbf{E} \left[ Y \right]^3.
\end{align}
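A quick Monte Carlo sanity check of the $P = 3$ formula (my own R illustration, assuming iid Exp(1) coordinates so that $\mathbf{E}[X] = 1$, $\mathbf{E}[X^2] = 2$, $\mathbf{E}[X^3] = 6$):
set.seed(1)
D <- 4; nsim <- 1e5
x <- matrix(rexp(nsim * D), nsim, D)
y <- matrix(rexp(nsim * D), nsim, D)
mean(rowSums(x * y)^3)                             # Monte Carlo estimate (should be near the value below)
m1 <- 1; m2 <- 2; m3 <- 6
D * m3^2 + 3 * D * (D - 1) * (m1 * m2)^2 + D * (D - 1) * (D - 2) * m1^6   # formula: 312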
54,724 | Deriving the marginal multivariate Dirichlet distribution | Since $$p(\theta_1,\ldots,\theta_{k-1})\propto (1-\theta_1-\cdots-\theta_{k-1})^{\alpha_{k}-1}\prod_{i=1}^{k-1} \theta_i^{\alpha_i-1}$$ over the $\mathbb R^k$-simplex,
$$\mathfrak S = \left\{(\theta_1,\ldots,\theta_{k-1})\in\mathbb R^{k-1}_+\,;\,\sum_{i=1}^{k-1} \theta_i\le 1\right\}$$
integrating out $\theta_{k-1}$ produces the marginal density of $(\theta_1,\ldots,\theta_{k-2})$:
\begin{align}
p(\theta_1,\ldots,\theta_{k-2})&\propto \int_0^{1-\theta_1-\cdots-\theta_{k-2}}
(1-\theta_1-\cdots-\theta_{k-1})^{\alpha_k-1}\prod_{i=1}^{k-1} \theta_i^{\alpha_i-1}\,\text d\theta_{k-1}\\
&= \prod_{i=1}^{k-2} \theta_i^{\alpha_i-1}\, \int_0^{1-\theta_1-\cdots-\theta_{k-2}}
(1-\theta_1-\cdots-\theta_{k-1})^{\alpha_{k}-1}\,\theta_{k-1}^{\alpha_{k-1}-1}\,\text d\theta_{k-1}\\
\end{align}
Given the upper bound on $\theta_{k-1}$ found in the integral, one can rather naturally consider the change of variable
$$\theta_{k-1}=(1-\theta_1-\cdots-\theta_{k-2})\eta$$
leading to
\begin{align}
p(\theta_1,\ldots,\theta_{k-2})
&= \prod_{i=1}^{k-2} \theta_i^{\alpha_i-1}\, \int_0^1
(1-\theta_1-\cdots-\theta_{k-2})^{\alpha_{k}-1}(1-\eta)^{\alpha_{k}-1}\\
&\qquad\times(1-\theta_1-\cdots-\theta_{k-2})^{\alpha_{k-1}-1}\eta^{\alpha_{k-1}-1}\, (1-\theta_1-\cdots-\theta_{k-2})\text d\eta\\
&=\prod_{i=1}^{k-2} \theta_i^{\alpha_i-1}\,(1-\theta_1-\cdots-\theta_{k-2})^{\alpha_{k-1}+\alpha_k-1}\,
\underbrace{\int_0^1 (1-\eta)^{\alpha_{k}-1}\eta^{\alpha_{k-1}-1}\,\text d\eta}_\text{constant in $\theta$}\\
&\propto\prod_{i=1}^{k-2} \theta_i^{\alpha_i-1}\,(1-\theta_1-\cdots-\theta_{k-2})^{\alpha_{k-1}+\alpha_k-1}\,
\end{align}
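The last line is exactly the Dirichlet aggregation property: $(\theta_1,\ldots,\theta_{k-2},\theta_{k-1}+\theta_k)\sim\text{Dirichlet}(\alpha_1,\ldots,\alpha_{k-2},\alpha_{k-1}+\alpha_k)$. A quick empirical check in R, using the gamma construction of the Dirichlet (my own illustration, not part of the original answer):
set.seed(7)
alpha <- c(2, 3, 4, 5)                                   # k = 4
g <- matrix(rgamma(4e5, shape = rep(alpha, each = 1e5)), ncol = 4)
theta <- g / rowSums(g)                                  # Dirichlet(2, 3, 4, 5) draws
merged <- cbind(theta[, 1:2], theta[, 3] + theta[, 4])   # merge the last two coordinates
g2 <- matrix(rgamma(3e5, shape = rep(c(2, 3, 9), each = 1e5)), ncol = 3)
direct <- g2 / rowSums(g2)                               # Dirichlet(2, 3, 9) draws
round(rbind(colMeans(merged), colMeans(direct)), 3)      # both rows should be close to (2, 3, 9)/14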
54,725 | causal impact estimation | Thank you for including a causal diagram!
Answer: Simply regress $Y$ on $T$ like this:
$$Y=aT+b.$$
There is no backdoor path from $T$ to $Y,$ so you don't need to condition on anything. In fact, if you want the full causal effect of $T$ on $Y,$ you need to NOT condition on $x_2.$
You have a mediation situation, so there are other numbers in which you might be interested. You can consult Causal Inference in Statistics: A Primer, by Pearl, Glymour, and Jewell, for more information on mediation.
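A small simulation sketch of the point (my own hypothetical data, not from the original answer): because $x_2$ is a mediator, regressing $Y$ on $T$ alone recovers the total effect, while also conditioning on $x_2$ leaves only the direct effect.
set.seed(42)
n <- 1e4
Tr <- rbinom(n, 1, 0.5)              # Tr plays the role of the treatment T
x2 <- 1 + 2 * Tr + rnorm(n)          # T -> x2
Y <- 3 * Tr + 1.5 * x2 + rnorm(n)    # T -> Y and x2 -> Y; total effect = 3 + 2 * 1.5 = 6
coef(lm(Y ~ Tr))["Tr"]               # close to 6: total causal effect
coef(lm(Y ~ Tr + x2))["Tr"]          # close to 3: direct effect only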
54,726 | causal impact estimation | To simplify, I am going to make the problem linear in parameters. You have a structural-form equation for the outcome $y$, the intermediate outcome equation for $x$, and an independence assumption:
$$ \begin{align*} y_i &=\beta_1+\beta_t \cdot t_i + \beta_x \cdot x_i + \varepsilon_i \\ x_i &= \alpha_1+\alpha_t \cdot t_i + u_i \\
(t,x) & \perp \!\!\! \perp \varepsilon \\ \end{align*}$$
Plugging the second into the first gets you the reduced-form equation for the outcome:
$$ y_i = (\beta_1 + \beta_x \cdot \alpha_1) + (\beta_t +\beta_x \cdot \alpha_t) \cdot t_i + (\beta_x \cdot u_i + \varepsilon_i)
$$
You have two effects:
$$\begin{align*} \text{Total Effect: }& E[y \vert t=1]-E[y \vert t=0] = \beta_t +\beta_x \cdot \alpha_t \\ \text{Direct Effect: }& E[y \vert t=1,x]-E[y \vert t=0, x] = \beta_t \\ \end{align*}$$
You can use the reduced-form outcome equation to estimate the first, and you can use the structural-form equation to estimate the second. A difference of the two recovers the indirect effect.
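Spelling that difference out (this line just subtracts the two displays above):
$$\text{Indirect Effect} = (\beta_t + \beta_x \cdot \alpha_t) - \beta_t = \beta_x \cdot \alpha_t.$$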
Here's a toy example using Stata where the indirect effect dominates:
. clear
. sysuse auto, clear
(1978 Automobile Data)
. quietly reg price i.foreign
. estimates store rf
. quietly reg price i.foreign c.mpg
. estimates store sf
. suest rf sf
Simultaneous results for rf, sf
Number of obs = 74
------------------------------------------------------------------------------
| Robust
| Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
rf_mean |
foreign |
Foreign | 312.2587 696.9581 0.45 0.654 -1053.754 1678.271
_cons | 6072.423 428.2447 14.18 0.000 5233.079 6911.767
-------------+----------------------------------------------------------------
rf_lnvar |
_cons | 15.9902 .2260545 70.74 0.000 15.54714 16.43325
-------------+----------------------------------------------------------------
sf_mean |
foreign |
Foreign | 1767.292 599.3555 2.95 0.003 592.5771 2942.007
mpg | -294.1955 59.50419 -4.94 0.000 -410.8216 -177.5695
_cons | 11905.42 1343.753 8.86 0.000 9271.709 14539.12
-------------+----------------------------------------------------------------
sf_lnvar |
_cons | 15.6727 .2476991 63.27 0.000 15.18722 16.15818
------------------------------------------------------------------------------
. nlcom indirect_effect:[rf_mean]_b[1.foreign] - [sf_mean]_b[1.foreign]
indirect_e~t: [rf_mean]_b[1.foreign] - [sf_mean]_b[1.foreign]
---------------------------------------------------------------------------------
| Coef. Std. Err. z P>|z| [95% Conf. Interval]
----------------+----------------------------------------------------------------
indirect_effect | -1455.034 488.1763 -2.98 0.003 -2411.841 -498.2255
---------------------------------------------------------------------------------
If you don't care about the standard errors, this can be done with two separate regressions rather than Seemingly Unrelated Estimation.
54,727 | Confidence interval for the difference of two fitted values from a linear regression model | Taking the difference of the two predicted values gives:
$$
(\hat{\beta_0} + \hat{\beta_1} 90 + \hat{\beta_2} 5 + \hat{\beta_3} 5^2) - (\hat{\beta_0} + \hat{\beta_1} 90 + \hat{\beta_2} 2 + \hat{\beta_3} 2^2) = (5 - 2)\hat{\beta_2} + (5^2 - 2^2)\hat{\beta_3} = 3\hat{\beta_2} + 21\hat{\beta_3}.
$$
This is a linear combination of the coefficients, for which we can use the variance-covariance matrix of the model to calculate the standard error (see this Wikipedia article and this post). Specifically, let $c$ be a column vector of scalars of the same size as the coefficients in the model. Then, $c^\intercal\beta$ is a linear combination of the coefficients. The variance of $c^\intercal\beta$ is then given by:
$$
\mathrm{Var}(c^\intercal\beta) = c^\intercal\Sigma c
$$
where $\Sigma$ is the variance-covariance matrix of the coefficients. Taking the square root of the variance gives the standard error.
For the specific example shown in the question, we have $c^\intercal = (0, 0, 3, 21)$ and thus:
# Reproducibility
set.seed(142857)
# Simulate some data
n <- 100
x1 <- rnorm(n, 100, 15)
x2 <- runif(n, 0, 10)
y <- 1.15 + 0.05*x1 + 0.05*x2^2 - 0.5*x2 + rnorm(100, 0, 0.5)
dat <- data.frame(y = y, x1 = x1, x2 = x2)
# Fit linear regression
mod <- lm(y~x1 + poly(x2, 2, raw = TRUE), data = dat)
summary(mod)
# Linear combination of the coefficients
a <- matrix(c(0, 0, 5 - 2, 5^2 - 2^2), ncol = 1)
# Standard error of the linear combination
sqrt(t(a)%*%vcov(mod)%*%a)
[,1]
[1,] 0.1003602
We can check this using the emmeans package:
library(emmeans)
contrast(emmeans(mod, "x2", at = list(x1 = 90, x2 = c(2, 5))), "revpairwise", infer = c(TRUE, TRUE))
contrast estimate SE df lower.CL upper.CL t.ratio p.value
5 - 2 -0.4764677 0.1003602 96 -0.6756811 -0.2772542 -4.748 <.0001
The standard error is identical.
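To finish with the actual confidence interval by hand (using the mod and a objects defined above), combine the point estimate and standard error with the appropriate t quantile:
est <- drop(t(a) %*% coef(mod))                      # estimated difference in fitted values
se <- drop(sqrt(t(a) %*% vcov(mod) %*% a))           # its standard error
est + c(-1, 1) * qt(0.975, df.residual(mod)) * se    # 95% CI, matching the emmeans output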
54,728 | Confidence interval for the difference of two fitted values from a linear regression model | An alternative approach (I agree it is devious, but it is also interesting) is to transform your function
$$y=\beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3x_2^2 + \epsilon$$
into
$$y=\beta_0 + \beta_1 x_1 + \beta_2 \frac{x_2}{3} + \beta_3(x_2-2)(x_2-5) + \epsilon$$
This is the same quadratic polynomial but now you have $\hat{y}_{x_2=5} - \hat{y}_{x_2=2} = \beta_2$ and you can directly use the standard error for the coefficient $\beta_2$.
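A quick check of this trick in R, assuming the simulated dat data frame from the previous answer is still available (my own illustration): the coefficient on the rescaled term reproduces the contrast computed above, with the same standard error.
mod2 <- lm(y ~ x1 + I(x2/3) + I((x2 - 2) * (x2 - 5)), data = dat)
summary(mod2)$coefficients["I(x2/3)", c("Estimate", "Std. Error")]   # same estimate and SE as the 5 - 2 contrast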
54,729 | Given two normal populations, classifying a given data point | Consider the two possible normal populations as two models $\mathcal{M_1}$ and $\mathcal{M_2}$, that is, two hypotheses that compete to explain your datum $X$.
The prior information gives the prior odds:
$$\frac{p(\mathcal{M_1})}{p(\mathcal{M_2})} = \frac{\pi_1}{\pi_2}$$
You can apply the standard formula
$$\underbrace{ \frac{P({\cal M}_1\mid X)}{P({\cal M}_2\mid X)} }_{\text{posterior odds}} = \underbrace{ \frac{P(X \mid {\cal M}_1)}{P(X\mid{\cal M}_2)} }_{\text{Bayes Factor}} \times\underbrace{\frac{P({\cal M}_1)}{P({\cal M}_2)}}_{\text{prior odds}}$$
To compute the likelihoods $P(X \mid \mathcal{M}_i)$, use the density of the Normal, which is another initial assumption.
An example in R:
likelihood <- function(mu, sigma) {
function(x) dnorm(x, mu, sigma)
}
lik.M1 <- likelihood(0.0,0.5)
lik.M2 <- likelihood(2.0,0.5)
pi1 <- 0.2
pi2 <- 1-pi1
x <- 0.5
prior.odds <- pi1/pi2
bayes.factor <- lik.M1(x) / lik.M2(x)
posterior.odds <- bayes.factor * prior.odds
posterior.odds
With these values, it returns posterior odds of about 13.7:1 in favour of $\mathcal{M_1}$.
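If a posterior probability is preferred over odds, the conversion is a one-liner (continuing the same R session):
posterior.odds / (1 + posterior.odds)   # posterior probability of M1, about 0.93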
54,730 | Interpreting coefficients from ordinal regression R `polr` function | Nicely laid out question, Dylan. I'll take a stab at answering it but will keep my answer practical (i.e., without using mathematical equations).
Will you change the sign of the hxcopd coefficient for reporting purposes?
The first thing you need to determine when looking at the Coefficients output produced by polr is whether you are going to change the sign of the reported coefficient for the purposes of your interpretation or not. In your case, are you going to interpret the coefficient of hxcopdTRUE directly (i.e., 0.331) without changing its sign or are you going to interpret the changed-sign coefficient of -0.331?
What groupings of values for your response variable are you really interested in comparing?
If you are NOT going to change the sign of the reported coefficient by multiplying that coefficient by -1 (i.e., if you are going to interpret 0.331), the ensuing interpretation will allow you to compare these groupings of values for your response variable in terms of log odds:
5 versus 0, 1, 2, 3 or 4
4 or 5 versus 0, 1, 2 or 3
3, 4 or 5 versus 0, 1 or 2
2, 3, 4 or 5 versus 0 or 1
1, 2, 3, 4 or 5 versus 0
If you ARE going to change the sign of the reported coefficient by multiplying that coefficient by -1 (i.e., if you are going to interpret -0.331), then your interpretation will involve the following groupings of values for the response variable:
0 versus 1, 2, 3, 4 or 5
0 or 1 versus 2, 3, 4 or 5
0, 1 or 2 versus 3, 4 or 5
0, 1, 2 or 3 versus 4 or 5
0, 1, 2, 3 or 4 versus 5
In the former case, you are comparing more versus less severity; in the latter, you are comparing less versus more severity. Thus, you have to be careful which case you choose so that your interpretation appropriately conveys the underlying comparisons.
No change in sign for coefficient of hxcopd
Say you choose to interpret the coefficient of hxcopdTRUE of 0.331 without changing its sign. That coefficient tells you the following:
The odds of having a severity rating of 5 rather than 0, 1, 2, 3 or 4 are estimated to be 1.39 times higher (or 39% higher) for those with COPD than for those without COPD;
The odds of having a severity rating of 4 or 5 rather than 0, 1, 2 or 3 are estimated to be 1.39 times higher (or 39% higher) for those with COPD than for those without COPD;
The odds of having a severity rating of 3, 4 or 5 rather than 0, 1 or 2 are estimated to be 1.39 times higher (or 39% higher) for those with COPD than for those without COPD;
The odds of having a severity rating of 2, 3, 4 or 5 rather than 0 or 1 are estimated to be 1.39 times higher (or 39% higher) for those with COPD than for those without COPD;
The odds of having a severity rating of 1, 2, 3, 4 or 5 rather than 0 are estimated to be 1.39 times higher (or 39% higher) for those with COPD than for those without COPD.
Other language you may see people use in this context would be "the odds are 1.39-fold higher" or "the odds are higher by a multiplicative factor of 1.39".
The above interpretations are repetitive so you would most likely want to consolidate them in a single statement along these lines (or whatever makes sense in your specific setting):
The odds of having a higher rather than a lower severity rating (e.g., 1, 2, 3, 4 or 5 rather than 0) are estimated to be 1.39 times higher (or 39% higher) for those with COPD than for those without COPD.
Change in sign for coefficient of hxcopd
Now, if you DO change the sign of your coefficient for hxcopd, your interpretation will also change since you have to interpret -0.331 or exp(-0.331) instead of 0.331 or exp(0.331).
On the log odds scale, you would have this type of interpretation:
The log odds of having a severity rating of 0 rather than 1, 2, 3, 4 or 5 are estimated to be 0.331 points lower for those with COPD than for those without COPD;
The log odds of having a severity rating of 0 or 1 rather than 2, 3, 4 or 5 are estimated to be 0.331 points lower for those with COPD than for those without COPD;
The log odds of having a severity rating of 0, 1 or 2 rather than 3, 4 or 5 are estimated to be 0.331 points lower for those with COPD than for those without COPD;
The log odds of having a severity rating of 0, 1, 2 or 3 rather than 4 or 5 are estimated to be 0.331 points lower for those with COPD than for those without COPD;
The log odds of having a severity rating of 0, 1, 2, 3 or 4 rather than 5 are estimated to be 0.331 points lower for those with COPD than for those without COPD.
On the odds scale, you would have to say things like the ones below, since exp(-0.331) = 0.72 and (0.72-1)x100% = -28%:
The odds of having a severity rating of 0 rather than 1, 2, 3, 4 or 5 are estimated to be 0.72 times as high (i.e., 28% lower) for those with COPD as for those without COPD;
The odds of having a severity rating of 0 or 1 rather than 2, 3, 4 or 5 are estimated to be 0.72 times as high (i.e., 28% lower) for those with COPD as for those without COPD;
The odds of having a severity rating of 0, 1 or 2 rather than 3, 4 or 5 are estimated to be 0.72 times as high (i.e., 28% lower) for those with COPD as for those without COPD;
The odds of having a severity rating of 0, 1, 2 or 3 rather than 4 or 5 are estimated to be 0.72 times as high (i.e., 28% lower) for those with COPD as for those without COPD;
The odds of having a severity rating of 0, 1, 2, 3 or 4 rather than 5 are estimated to be 0.72 times as high (i.e., 28% lower) for those with COPD as for those without COPD.
The consolidated statement for this last case could look like this:
The odds of having a lower rather than a higher severity rating (e.g., 0, 1, 2, 3 or 4 rather than 5) are estimated to be 0.72 times as high (i.e., 28% lower) for those with COPD as for those without COPD.
In a manuscript, you would most likely have to report a consolidated statement and add the 95% confidence intervals to the reported point estimates (on the log odds scale) or to the reported odds ratios (on the odds scale). You would also have to explain that you checked whether the proportional odds assumption holds for your data. Finally, you would need to be clear about what groupings of values for your response variable you are reporting, as explained above.
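If it helps for that reporting step, here is a minimal, self-contained sketch of how the odds ratio and its 95% profile-likelihood confidence interval can be extracted from a polr fit. The data below are simulated under a proportional-odds mechanism purely for illustration (all names and numbers are made up); substitute your own data frame and formula:
library(MASS)
set.seed(1)
n <- 500
hxcopd <- rbinom(n, 1, 0.3) == 1                      # hypothetical binary COPD indicator
latent <- 0.33 * hxcopd + rlogis(n)                   # proportional-odds data-generating process
outcome <- cut(latent, c(-Inf, -1, 0, 1, 2, 3, Inf),
               labels = 0:5, ordered_result = TRUE)   # ordinal severity rating 0-5
fit <- polr(outcome ~ hxcopd, Hess = TRUE)
exp(coef(fit))                                        # odds ratio for hxcopdTRUE
exp(confint(fit))                                     # 95% profile-likelihood CI on the odds-ratio scale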
I assume you have already read this post: https://stats.idre.ucla.edu/r/faq/ologit-coefficients/. It's worth going through it to convince yourself that you are indeed reporting the appropriate quantities in your case. In particular, after fitting your model, look at the following:
unadjfit <- MASS::polr(formula = outcome ~ hxcopd, data = df)
newdat <- data.frame(hxcopd = c(FALSE, TRUE))                    # one row without COPD, one row with COPD (note: hxcopd, not hccopd)
phat <- predict(object = unadjfit, newdata = newdat, type="p")   # the fitted model object is unadjfit
phat
The phat object will report the probability that your response variable takes on a particular value among 0, 1, 2, 3, 4 or 5, separately for those without COPD and those with COPD.
Then, if you want to compute the odds of having a rating of 5 rather than 0,1,2,3 or 4, say, among those with COPD, you would just divide the reported probability for a rating of 5 in the "with COPD row" (i.e., the second row of phat) by the sum of the reported probabilities for ratings of 0, 1, 2, 3 or 4 in the same row. The same odds among those without COPD would be derived by dividing the reported probability for a rating of 5 in the "without COPD row" (i.e., the first row of phat) by the sum of the reported probabilities for ratings of 0, 1, 2, 3 or 4 in the same row. The ratio of the two odds will give you the odds ratio of having a rating of 5 rather than 0,1,2,3 or 4 for those with COPD relative to those without COPD. If this coincides with what comes out of R via the interpretation process described above, you are on the right path!
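As a quick sketch of that check in code (this reuses the unadjfit and phat objects created above, so it only runs in that context):
odds_noCOPD <- phat[1, "5"] / sum(phat[1, c("0","1","2","3","4")])   # odds of a 5 among those without COPD
odds_COPD   <- phat[2, "5"] / sum(phat[2, c("0","1","2","3","4")])   # odds of a 5 among those with COPD
odds_COPD / odds_noCOPD                                              # should agree with exp(coef(unadjfit)["hxcopdTRUE"])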
Addendum
Brant's Wald test is used by some to verify the reasonableness of the proportional odds assumption for each predictor variable in your model and for all of them together (as explained, for instance, in this article by Richard Williams on Understanding and interpreting generalized ordered logit models: https://www3.nd.edu/~rwilliam/gologit2/UnderStandingGologit2016.pdf).
R has a brant package for this: https://medium.com/evangelinelee/brant-test-for-proportional-odds-in-r-b0b373a93aa2.
There is also the possibility of using a likelihood ratio test for testing the proportionality of odds assumption, as mentioned for instance in the article Assessing proportionality assumption in the adjacent category logistic regression model by Dolgun et al.: https://www.intlpress.com/site/pub/files/_fulltext/journals/sii/2014/0007/0002/SII-2014-0007-0002-a012.pdf. The likelihood ratio test is an omnibus test of proportionality of odds (hence it considers all predictor variables together). See here, for example: https://stat.ethz.ch/pipermail/r-help/2014-November/423706.html.
You can also check this assumption visually in addition to using formal statistical tests.
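A rough sketch of both kinds of check in R is below. It assumes the brant and nnet packages are installed and reuses the unadjfit model and df data frame from above; treat it as an outline rather than a definitive recipe:
library(brant)
brant(unadjfit)                                  # Wald tests of proportional odds, per predictor plus omnibus

library(nnet)                                    # approximate omnibus likelihood-ratio check
multi <- multinom(outcome ~ hxcopd, data = df, trace = FALSE)
LR  <- as.numeric(2 * (logLik(multi) - logLik(unadjfit)))
dfd <- attr(logLik(multi), "df") - attr(logLik(unadjfit), "df")
pchisq(LR, df = dfd, lower.tail = FALSE)         # a small p-value suggests the proportional odds assumption is strained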
One thing you may find helpful in addition to checking assumptions is visualizing the results of your modelling using the effects package in R, as explained here in the post Visualizing the Effects of Proportional-Odds Logistic Regression: https://data.library.virginia.edu/visualizing-the-effects-of-proportional-odds-logistic-regression/.
54,731 | Is it true that we can always increase statistical power/estimator precision by increasing sample size? | Fisher information increases
The Fisher information (which is not the property of an estimator) scales with the size of the sample. See for instance here: https://en.m.wikipedia.org/wiki/Fisher_information#Discrepancy_in_definition
As noted there, if the data are i.i.d., the difference between the two versions is simply a factor of $n$, the number of data points in the sample.
The Fisher information of a sample of size $n$ (with i.i.d. measurements) is the Fisher information of a single measurement times $n$.
Efficient estimate precision increases
You write
any random variable ... and that contains Fisher information about a parameter of that population
So if you talk about an efficient estimate (an estimate with a precision that equals the Fisher information) then: yes the precision will increase with increasing sample size.
Similarly, any estimator whose efficiency has some non-zero minimal bound for all $n$ $$e(T_n) = \frac{1}{Var(T_n) \times n \times \mathcal{I}(\theta)} \geq e_{min} >0$$ (where $\mathcal{I}(\theta)$ is the information of a single measurement and $n\mathcal{I}(\theta)$ of $n$ measurements) will have $Var(T_n) \to 0$ for increasing $n$.
Other estimates can do anything
But note that there are many non-efficient estimators/statistics that do not scale with increasing sample size.
Pathological estimator
A well-known example is the sample mean of a Cauchy distribution as an estimator for the location parameter, which remains the same for increasing sample size (and I believe there are also examples where the variance of the sample mean even increases for larger sample size).
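A small simulation sketch makes this concrete: the mean of $n$ standard Cauchy draws is itself standard Cauchy, so its spread does not shrink with $n$ (the numbers below are purely illustrative).
set.seed(1)
for (n in c(10, 100, 10000)) {
  means <- replicate(1000, mean(rcauchy(n)))
  cat("n =", n, " IQR of sample means =", round(IQR(means), 2), "\n")   # stays near 2 for every n
}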
Oracle estimator
If you do not like the example with the Cauchy distribution, because it is a pathological distribution, then you can consider this estimator
$$\hat \theta_n = 42$$
This is an estimator that can be used for the parameter $\theta$ of a non-pathological distribution and does not improve (increase in precision) when we increase $n$. (I agree that it is an example that makes little practical sense, but it indicates that maybe you need to be more precise about the definition of 'estimator').
Stupid estimator
You could argue that this oracle estimator $\hat{\theta}_n = 42$ does not contain information (and in your edit you write about estimators that contain information), in that case, you can use this stupid estimator $$\hat{\theta}_n = \min\lbrace x_1, x_2, \dots, x_n \rbrace (n+1)$$ to estimate the parameter of a continuous uniform distribution between $0$ and $\theta$.
The distribution of $\min\lbrace x_1, x_2, \dots, x_n \rbrace/\theta$ follows a beta distribution $Beta(1,n)$, and so we can easily compute the mean and variance of the estimate based on the mean and variance of the beta distribution.
$$\begin{array}{rcl} E[\hat{\theta}_n] &=& \theta \\
Var[\hat{\theta}_n] &=& \theta^2 \frac{n}{(n+2)}
\end{array}$$
So the variance of this unbiased estimator will grow towards $\theta^2$ for increasing sample size.
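A quick simulation check of both the unbiasedness and the growing variance (illustrative only):
set.seed(1)
theta <- 1
for (n in c(5, 50, 500)) {
  est <- replicate(20000, (n + 1) * min(runif(n, 0, theta)))
  cat("n =", n,
      " mean =", round(mean(est), 3),
      " var =", round(var(est), 3),
      " theoretical var =", round(theta^2 * n / (n + 2), 3), "\n")
}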
Obviously, these examples are all silly, non-pragmatic estimators. But that is a consequence of what you are looking for: estimators that do not improve with increasing sample size, and therefore the examples are necessarily silly ones.
See also: https://en.wikipedia.org/wiki/Consistent_estimator
54,732 | Is it true that we can always increase statistical power/estimator precision by increasing sample size? | An example of a statistic that does not increase in Fisher information as sample size increases is the matching statistic. The matching statistic $m$ (Vernon, 1936) is computed between a pair of vectors of ranked scores, as the number of paired ranks that match. Gordon Rae (1987, 1991) showed that, when the population correlation between vectors is zero, $m$ has a relative asymptotic efficiency of zero. This means that, if we compute both $m$ and Spearman's rho (or another relatively efficient correlation estimator) on the same data, the Pearson correlation between $m$ and rho will be sizeable for small $n$, but will go to zero as $n$ goes to infinity. It can also be shown that, when the population correlation between vectors is greater than zero, $m$'s relative asymptotic efficiency is negative. This means that $m$'s standard error increases with increasing $n$, implying the loss of Fisher information.
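A simulation sketch of the claimed behaviour under independence (the matching statistic is coded directly from its definition here; this is an illustration, not Rae's derivation):
set.seed(1)
match_stat <- function(x, y) sum(rank(x) == rank(y))   # number of paired ranks that match
for (n in c(10, 50, 250)) {
  sims <- replicate(4000, {
    x <- rnorm(n); y <- rnorm(n)
    c(m = match_stat(x, y), rho = cor(x, y, method = "spearman"))
  })
  cat("n =", n, " cor(m, rho) =", round(cor(sims["m", ], sims["rho", ]), 3), "\n")   # shrinks toward 0
}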
54,733 | Finding a confidence interval of a MAPE | Interesting question. I have been active in both academic and applied forecasting for quite a while, and I can't recall anyone discussing CIs for MAPEs ever.
I don't think your calculation is very helpful. As an example, assume that the true holdout actuals are lognormally distributed with log-mean $\mu=1$ and log-SD $\sigma=1$. Assume further that our point forecast is a fixed $\hat{y}=\exp\big(\mu+\frac{\sigma^2}{2}\big)$ (an expectation forecast, which is not the MAPE-minimal forecast for lognormal data).
Recall the definition of a CI: it is an algorithm that, when the entire experiment is repeated often, will contain the true parameter value with a prespecified frequency. (Note that this is different from "there is a 95% chance that any one given CI contains the parameter.")
We can run our experiment by simulation. I get the true MAPE by simulating $n=10^6$ actuals, then repeatedly ($10^5$ times) draw the $n=4$ observations you have. In each case, I calculate APEs, take their mean and SD and calculate a 95% CI as you did. Finally, I record whether this simulated CI contained the true MAPE or not.
The hit rate is only 76%, instead of 95%.
R code:
set.seed(2020)
mm <- 1                          # log-mean of the lognormal actuals
ss.sq <- 1                       # log-variance of the lognormal actuals
fcst <- exp(mm+ss.sq/2)          # fixed expectation forecast, as described above
actuals <- rlnorm(1e6,meanlog=mm,sdlog=sqrt(ss.sq))
true_MAPE <- mean(abs(fcst-actuals)/actuals)
n_reps <- 1e5
hit <- rep(NA,n_reps)
n_obs <- 4
pb <- winProgressBar(max=n_reps)   # Windows-only; use txtProgressBar(max=n_reps) on other platforms
for ( ii in 1:n_reps ) {
setWinProgressBar(pb,ii,paste(ii,"of",n_reps))
set.seed(ii) # for replicability
actuals <- rlnorm(n_obs,meanlog=mm,sdlog=sqrt(ss.sq))
APEs <- abs(fcst-actuals)/actuals
CI <- mean(APEs)+qt(c(.025,.975),n_obs-1)*sd(APEs)/sqrt(n_obs)
hit[ii] <- CI[1]<=true_MAPE & true_MAPE<=CI[2]
}
close(pb)
summary(hit)
Incidentally, we can change the experiment as follows: instead of a fixed point forecast, we can simulate $n=100$ iid "historical" observations, calculate the point forecast as their average (which, again, is an expectation forecast and not the MAPE-minimal one), then evaluate this point forecast on $n=4$ new observations, calculating a CI as above. The hit rate is pretty much unchanged.
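A rough sketch of that modified experiment is below. It reuses mm, ss.sq and n_obs from the code above and uses coarser settings so it runs quickly; it is not the exact script behind the numbers quoted here:
set.seed(2021)
actuals_big <- rlnorm(1e5, meanlog = mm, sdlog = sqrt(ss.sq))
hit2 <- replicate(2000, {
  fcst_i <- mean(rlnorm(100, meanlog = mm, sdlog = sqrt(ss.sq)))   # "historical" average as point forecast
  true_MAPE_i <- mean(abs(fcst_i - actuals_big) / actuals_big)     # "true" MAPE for this particular forecast
  holdout <- rlnorm(n_obs, meanlog = mm, sdlog = sqrt(ss.sq))
  APEs <- abs(fcst_i - holdout) / holdout
  CI <- mean(APEs) + qt(c(.025, .975), n_obs - 1) * sd(APEs) / sqrt(n_obs)
  CI[1] <= true_MAPE_i & true_MAPE_i <= CI[2]
})
mean(hit2)                                                         # again well below the nominal 95%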
You may find What are the shortcomings of the Mean Absolute Percentage Error (MAPE)? helpful.
54,734 | What is the definition of variation in a box plot? | A boxplot invites you to characterize variation in many different ways, by comparing the quantities shown on the plot: extremes, extremes of the whiskers, quartiles, and median. That gives 21 different measures of variation in each one! On that basis I can identify (with some difficulty, because many of the boxplots are similar to one another) three possible correct answers (Sat, Sun, and Mon).
To illustrate, here are sample data similar to yours:
Each boxplot depicts at least seven quantities, as labeled in the Thursday boxplot (although some may coincide): eXtremes, Fences (the tips of the whiskers), Hinges (the borders of the boxes), and the Median. The first three occur below the median ("-" subscripts) and above ("+" subscripts), for a total of seven statistics.
The absolute difference between any two distinct statistics measures some aspect of "dispersion" or "variation" in the underlying data.
For instance, $X_{+} - X_{-}$ is the range, $H_{+} - H_{-}$ is the interquartile range, and so on. Each such difference, apart from the range, focuses on a part of the data distribution. This gives you a flexible tool for choosing what aspect of the dataset you wish to characterize.
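For concreteness, here is a short base-R sketch of the seven statistics for one batch of data and the 21 pairwise gaps between them (the data are simulated just for illustration):
set.seed(1)
x <- rlnorm(50)
b <- boxplot.stats(x)$stats          # lower fence, lower hinge, median, upper hinge, upper fence
stats <- c(Xlo = min(x), Flo = b[1], Hlo = b[2], M = b[3], Hhi = b[4], Fhi = b[5], Xhi = max(x))
gaps <- abs(outer(stats, stats, "-"))
gaps[upper.tri(gaps)]                # the 21 candidate measures of "variation"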
Here are plots of each of these 21 statistics for the sample data.
In each plot I have highlighted the largest of the seven values. The highlighting color is determined by the day of the week. It is evident that five of the seven weekdays can be considered, in some specific sense, to have the largest variation. (Only Tues and Fri don't show up.) For instance, the "X-,X+" chart in row 1, column 6 plots the ranges and indicates Thursday's range is the largest.
To answer your question, then, you must
Select some measure of variation.
Assess it in the graphic by systematically comparing the corresponding parts in each boxplot to estimate their vertical distances.
Select the day (or days) where those distances are the largest.
For instance, it looks to me like the largest $M$ to $X_{+}$ variation (the distance from the upper tip to the midline), which measures the spread of above-average values in the data, occurs on Sunday in your data, because there is an outlying extreme value then.
Ordinarily, one uses the IQR as measured by the box height $H_{+}-H_{-}$ as the default measure of variation in a boxplot. That is because it is relatively unaffected by extreme values, making it a robust indicator of variation, and it is symmetric in not emphasizing high or low values. So, if you are given no further guidance in the question, in the textbooks, or in your classnotes, this would be the measure to choose.
54,735 | What is the difference between Non-Negative Matrix Factorization (NMF) and Factor Analysis (FA)? | NMF/PMF are typically used to make low-rank decompositions. They can be used like a truncated SVD, just for dimension reduction. They can also be used like factor analysis, to attempt to identify latent variables that theory says underlie the data.
A truncated rank-$k$ SVD asks for the best decomposition of the data matrix $X$ into $UDV^T$ where $U$ and $V$ have $k$ orthonormal columns and are chosen to minimise the sum of squared errors in reconstructing the elements of $X$. An approximate NMF decomposes $X$ as $GH^T$ where $G$ and $H$ have $k$ columns and all the entries are non-negative. There are also sparse NMF algorithms that (surprise!) additionally make the factors sparse.
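As a concrete (if toy) illustration of the $X \approx GH^T$ form, here is a minimal NMF via Lee-Seung multiplicative updates in base R. A real analysis would use a dedicated package; everything below is a made-up sketch:
set.seed(1)
X <- matrix(runif(100 * 20), 100, 20)       # non-negative data matrix
k <- 3
G <- matrix(runif(nrow(X) * k), ncol = k)   # n x k factor scores
H <- matrix(runif(ncol(X) * k), ncol = k)   # p x k loadings, so X ~ G %*% t(H)
for (i in 1:200) {                          # multiplicative updates for squared reconstruction error
  H <- H * (t(X) %*% G) / (H %*% (t(G) %*% G) + 1e-9)
  G <- G * (X %*% H) / (G %*% (t(H) %*% H) + 1e-9)
}
sum((X - G %*% t(H))^2)                     # reconstruction error; all entries of G and H stay non-negative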
One classic application of NMF/PMF is in analytical chemistry. For example, in particulate air pollution research, $X$ may be a matrix whose $(s,t)$ entry is the mass concentration of chemical species $s$ at measurement time $t$. The decomposition of rank $k$ corresponds to a model with $k$ sources of particles, with $G_{sk}$ being the percentage concentration of species $s$ in source $k$ and $H_{kt}$ the mass concentration of particles from source $k$ at time $t$. Clearly these will be non-negative. Ideally $G$ will be somewhat sparse -- you would like to measure species that are, if not unique to a source, at least specific to a group of sources.
[Update: even in this application the interpretation of $G$ and $H$ does depend on how they are scaled. It's always true that $G$ is species-source information and $H$ is source-time information, but getting $H$ to be mass concentrations requires scaling the rows of $H$ to sum to total particle mass concentration]
PMF (at least, the software of that name) does a non-negative decomposition but optimises a user-specified weighted sum of squared errors in reconstruction, where the weights are based on assay error either (preferably) known previously or (typically) estimated from replicates. This is a harder problem computationally. The software also allows constraints on the estimated decomposition -- eg, that species $7$ is found only in source $3$, or that the concentration of species 2 in source 4 is greater than 5%.
In air pollution analysis PMF (especially) is often seen as estimating the true sources, the way factor analysis estimates latent variables. In some ways it does better than factor analysis, since the non-negativity constraints reduce the non-identifiability (rotational freedom) of factor analysis.
But you can run PMF/NMF on data without having any theoretical commitment to any specific model for latent variables, which would be undesirable for factor analysis. For example, NMF has been used in text mining for clustering documents without specifying cluster:word relationships in advance, and in the Netflix prize competition for clustering movies.
54,736 | Mathematical/Statistical Assumptions Underlying Machine and Deep Learning Methods | There's no such thing as universal statistical or machine learning assumptions. There are lots of different statistical/ML methods, with different assumptions among them. You might ask about what assumptions underlie a specific method, or what goes wrong if you violate an assumption of a certain method, but there's no such thing as generic statistics/machine learning assumptions. Sometimes a method's assumptions are mutually exclusive of another's! The field encompasses a wide range of tools and methods, which might be appropriate in different cases. This is a feature, not a flaw, because we want to solve diverse problems.
Naïve Bayes assumes that the effect of a feature on the outcome is independent of the values of the other features. But tree-based models (to pick just one example) explicitly try to model the outcome by sub-dividing the feature space into rectangles, and predicting a different outcome for each rectangle. Which one is correct? The model that reflects reality -- the naïve Bayes model does well when the independence assumption is valid, and does poorly when it isn't.
Some data are non-independent, so using a model which assumes the observations are independent is inappropriate. The classic example of this is stock prices: an excellent predictor of an equity's price tomorrow is its price today, which means that a naïve model that just lags price by 24 hours will have small error, even though this model doesn't yield any information you didn't have already. It would be more appropriate to model stock prices using a time-series method.
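A tiny simulation of the stock-price point (illustrative only): for a random walk, the lag-1 "forecast" has small error yet conveys nothing you did not already know.
set.seed(1)
price <- cumsum(rnorm(1000))                 # random-walk "prices"
lag1  <- head(price, -1)                     # yesterday's price as today's forecast
mean((tail(price, -1) - lag1)^2)             # about 1, the innovation variance
mean((tail(price, -1) - mean(price))^2)      # far larger: an iid-style model ignores the dependence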
A convolutional neural network assumes that nearby data (e.g. adjacent pixels) is important, while a fully-connected network does not. The sparse connections of a CNN, and the concept of a local filter applied to adjacent pixels turns out to be a good way to decide what an image contains.
Some of the things that you call "assumptions" (law of large numbers, central limit theorem, Jensen's inequality, Cauchy-Schwarz inequality) are theorems. Theorems are statements which apply a chain of reasoning from other true statements to show that a new statement is also true. Sometimes a theorem is not suitable for a certain situation; for example, the results of the CLT do not follow if the samples are drawn from a distribution with non-finite variance. It's difficult to understand what you mean about the applicability of something like CLT to deep learning, because the CLT is true in all settings where its hypotheses are satisfied. In other words, the CLT cares not whether you're using a neural network, it just cares about its hypotheses.
what if I wanted to use Deep Learning with limited data?
The main problem you'll face pertains to model generalization: "How do I know that this model will perform well on out-of-sample data?" This is where regularization becomes important. We have a thread dedicated to this: What should I do when my neural network doesn't generalize well?
You've asked for papers about neural networks, so here's a good place to start. The AlexNet paper (Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton, "ImageNet Classification with Deep Convolutional Neural Networks") used CNNs for the ImageNet task in 2012 and vastly out-performed their competitors. The authors' success in ImageNet basically kicked off the current frenzy of interest in using CNNs for image data. This paragraph from the AlexNet paper explains why CNNs are suitable for image data: the structure of the CNN encodes prior knowledge ("assumptions") about how images represent semantic data (i.e. objects). Specifically, CNNs assume stationarity of statistics and locality of pixel dependencies. They also suggest that CNNs will be easier to train than fully-connected networks because of their sparseness (fewer weights and biases to update).
To learn about thousands of objects from millions of images, we need a model with a large learning capacity. However, the immense complexity of the object recognition task means that this problem cannot be specified even by a dataset as large as ImageNet, so our model should also have lots of prior knowledge to compensate for all the data we don’t have. Convolutional neural networks (CNNs) constitute one such class of models [16, 11, 13, 18, 15, 22, 26]. Their capacity can be controlled by varying their depth and breadth, and they also make strong and mostly correct assumptions about the nature of images (namely, stationarity of statistics and locality of pixel dependencies). Thus, compared to standard feedforward neural networks with similarly-sized layers, CNNs have much fewer connections and parameters and so they are easier to train, while their theoretically-best performance is likely to be only slightly worse.
The authors include citations to these papers. These papers develop why CNNs are effective at imaging tasks in more detail.
Y. LeCun, F.J. Huang, and L. Bottou. Learning methods for generic object recognition with invariance to pose and lighting. In Computer Vision and Pattern Recognition, 2004. CVPR 2004. Proceedings of the 2004 IEEE Computer Society Conference on, volume 2, pages II–97. IEEE, 2004.
K. Jarrett, K. Kavukcuoglu, M. A. Ranzato, and Y. LeCun. What is the best multi-stage architecture for object recognition? In International Conference on Computer Vision, pages 2146–2153. IEEE, 2009.
A. Krizhevsky. Convolutional deep belief networks on cifar-10. Unpublished manuscript, 2010
H. Lee, R. Grosse, R. Ranganath, and A.Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 609–616. ACM, 2009.
Y. Le Cun, B. Boser, J.S. Denker, D. Henderson, R.E. Howard, W. Hubbard, L.D. Jackel, et al. Handwritten digit recognition with a back-propagation network. In Advances in neural information processing systems, 1990.
N. Pinto, D. Doukhan, J.J. DiCarlo, and D.D. Cox. A high-throughput screening approach to discovering good forms of biologically inspired visual representation. PLoS computational biology, 5(11):e1000579, 2009.
S.C. Turaga, J.F. Murray, V. Jain, F. Roth, M. Helmstaedter, K. Briggman, W. Denk, and H.S. Seung. Convolutional networks can learn to generate affinity graphs for image segmentation. Neural Computation, 22(2):511–538, 2010.
54,737 | Mathematical/Statistical Assumptions Underlying Machine and Deep Learning Methods | I would disagree slightly with the opening statement of Sycorax's excellent and detailed answer "There's no such thing as universal statistical or machine learning assumptions" - in supervised machine learning, in general, it is assumed that your data is drawn IID from a probability distribution, and that any test/new data presented to the model after training will be sampled from the same distribution. This applies to the term "generalization" too - how well your model generalizes refers to how well it generalizes to new data sampled from the same underlying distribution as the training data.
The first issue here is that, when deployed in the "real world," new data is usually not generated from the same distribution as the original training and test data (not to mention not being sampled IID). So model performance naturally deteriorates.
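A toy regression example of that deterioration (all names and numbers below are made up for illustration): fit on inputs from one distribution, then evaluate on inputs drawn from somewhere else.
set.seed(1)
truth <- function(x) sin(x)                              # hypothetical true signal
x_tr <- rnorm(500); y_tr <- truth(x_tr) + rnorm(500, sd = 0.1)
fit <- lm(y_tr ~ poly(x_tr, 3))
mse_on <- function(x) {
  y_new <- truth(x) + rnorm(length(x), sd = 0.1)
  mean((y_new - predict(fit, newdata = data.frame(x_tr = x)))^2)
}
c(same_distribution = mse_on(rnorm(500)),                # test inputs like the training data
  shifted           = mse_on(rnorm(500, mean = 3)))      # "real world" inputs from a shifted distribution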
Additionally, the higher-dimensional and more complex your data, the less likely it is you have a dataset that adequately represents the underlying distribution, partly because of the complexity of the distribution and partly because of sampling difficulties (have a look at the "tench" class in ImageNet to see a pretty obvious example of severe sampling bias that will lead to poor performance as soon as you move outside the ImageNet validation set for images of real-life tenches...).
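A minimal sketch of this point (my own toy example in R, not taken from the discussion above): a classifier fit on one distribution, then scored on test data from the same distribution versus a shifted one.
set.seed(1)
make_data <- function(n, shift = 0) {
  x <- c(rnorm(n, 0 + shift), rnorm(n, 2 + shift))
  data.frame(x = x, y = rep(c(0, 1), each = n))
}
train <- make_data(500)
fit <- glm(y ~ x, family = binomial, data = train)
accuracy <- function(d) mean((predict(fit, d, type = "response") > 0.5) == d$y)
accuracy(make_data(500))        # test data drawn from the training distribution
accuracy(make_data(500, 1.5))   # covariate-shifted test data: accuracy degrades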
I assume that this might be what the conversations you're talking about refer to - does this make sense..? | Mathematical/Statistical Assumptions Underlying Machine and Deep Learning Methods | I would disagree slightly with the opening statement of Sycorax's excellent and detailed answer "There's no such thing as universal statistical or machine learning assumptions" - in supervised machine | Mathematical/Statistical Assumptions Underlying Machine and Deep Learning Methods
I would disagree slightly with the opening statement of Sycorax's excellent and detailed answer "There's no such thing as universal statistical or machine learning assumptions" - in supervised machine learning, in general, it is assumed that your data is drawn IID from a probability distribution, and that any test/new data presented to the model after training will be sampled from the same distribution. This applies to the term "generalization" too - how well your model generalizes refers to how well it generalizes to new data sampled from the same underlying distribution as the training data.
The first issue here is that, when deployed in the "real world," new data is usually not generated from the same distribution as the original training and test data (not to mention not being sampled IID). So model performance naturally deteriorates.
Additionally, the higher-dimensional and more complex your data, the less likely it is you have a dataset that adequately represents the underlying distribution, partly because of the complexity of the distribution and partly because of sampling difficulties (have a look at the "tench" class in ImageNet to see pretty obvious example of severe sampling bias that will lead to poor performance as soon as you move outside the ImageNet validation set for images of real-life tenches...).
I assume that this might be what the conversations you're talking about refer to - does this make sense..? | Mathematical/Statistical Assumptions Underlying Machine and Deep Learning Methods
I would disagree slightly with the opening statement of Sycorax's excellent and detailed answer "There's no such thing as universal statistical or machine learning assumptions" - in supervised machine |
54,738 | Mathematical/Statistical Assumptions Underlying Machine and Deep Learning Methods | Assumptions essentially add information. This added information is more useful if you have less data. For example, contrast two OLS regression relationships
$Y \sim X + Z$
$Y \sim X + X^2 + X^3 + Z + Z^2 + Z^3 + X*Z + (X*Z)^2 + (X*Z)^3$
The first has more assumptions because it is a special case of the second. It's a special case because if the coefficients on all of the extra interaction and polynomial effects are zero, it simplifies to the first model. If you have "enough" data (enough depends on the situation) and the first relationship is the true data generating process, the second model will eventually figure out that the coefficients are zero and simplify to the first model. If you have enough data, you can fit a very general model that will eventually simplify to a simpler model.
However, if you do not have enough data things can go very wrong and you enter the world of over-fitting. With smaller data, it's more important to understand and make reasonable assumptions on your data. Simply fitting a very general model and having the model figure it out won't work.
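As a hedged illustration (my own sketch with made-up data, using the two specifications above): when the simple relationship is the true data-generating process, the flexible model tends to do worse out of sample at small n and catches up as n grows.
set.seed(11)
compare <- function(n) {
  x <- rnorm(n); z <- rnorm(n)
  y <- x + z + rnorm(n)                        # truth: the first (simple) model
  simple   <- lm(y ~ x + z)
  flexible <- lm(y ~ x + I(x^2) + I(x^3) + z + I(z^2) + I(z^3) +
                     I(x*z) + I((x*z)^2) + I((x*z)^3))
  new <- data.frame(x = rnorm(1e4), z = rnorm(1e4))
  y_new <- new$x + new$z + rnorm(1e4)
  c(n = n,
    rmse_simple   = sqrt(mean((y_new - predict(simple,   new))^2)),
    rmse_flexible = sqrt(mean((y_new - predict(flexible, new))^2)))
}
round(rbind(compare(30), compare(30000)), 3)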
Models like deep neural nets, tend to be very general models. With enough data, these models can simplify to simpler models if that's the true relationship. | Mathematical/Statistical Assumptions Underlying Machine and Deep Learning Methods | Assumptions essentially add information. This added information is more useful if you have less data. For example, contrast two OLS regression relationships
$Y \sim X + Z$
$Y \sim X + X^2 + X^3 + Z + | Mathematical/Statistical Assumptions Underlying Machine and Deep Learning Methods
Assumptions essentially add information. This added information is more useful if you have less data. For example, contrast two OLS regression relationships
$Y \sim X + Z$
$Y \sim X + X^2 + X^3 + Z + Z^2 + Z^3 + X*Z + (X*Z)^2 + (X*Z)^3$
The first has more assumptions because it is a special case of the second. It's a special case because if the coefficients on all of the extra interaction and polynomial effects is zero, it simplifies to the first model. If you have "enough" data (enough depends on the situation) and the first relationship is the true data generating process, the second model will eventually figure out that the coefficients are zero and simplify to the first model. If you have enough data, you can fit a very general model that will eventually simplify to a simpler model.
However, if you do not have enough data things can go very wrong and you enter the world of over-fitting. With smaller data, it's more important to understand and make reasonable assumptions on your data. Simply fitting a very general model and having the model figure it out won't work.
Models like deep neural nets, tend to be very general models. With enough data, these models can simplify to simpler models if that's the true relationship. | Mathematical/Statistical Assumptions Underlying Machine and Deep Learning Methods
Assumptions essentially add information. This added information is more useful if you have less data. For example, contrast two OLS regression relationships
$Y \sim X + Z$
$Y \sim X + X^2 + X^3 + Z + |
54,739 | Is the MA($\infty$) process with i.i.d. noise strictly stationary? | This process is always strictly stationary by definition.
Recall that the process is (strictly) stationary when all $n$-variate distributions formed by selecting any pattern $(s_1,s_2,\ldots,s_{n-1})$ of (integral) indexes are identical: that is, for all $n\ge 1$ and integral $s$ and $t,$
$$(X_s, X_{s+s_1}, \ldots, X_{s+s_{n-1}}) \sim (X_t, X_{t+s_1}, \ldots, X_{t+s_{n-1}}).$$
But that is trivially the case due to the iid assumption on the $\epsilon_t.$ One merely substitutes "$\epsilon_{s-k}$" for "$\epsilon_{t-k}$" in the definition of the process $(X_t).$ | Is the MA($\infty$) process with i.i.d. noise strictly stationary? | This process is always strictly stationary by definition.
Recall that the process is (strictly) stationary when all $n$-variate distributions formed by selecting any pattern $(s_1,s_2,\ldots,s_{n-1})$ | Is the MA($\infty$) process with i.i.d. noise strictly stationary?
This process is always strictly stationary by definition.
Recall that the process is (strictly) stationary when all $n$-variate distributions formed by selecting any pattern $(s_1,s_2,\ldots,s_{n-1})$ of (integral) indexes are identical: that is, for all $n\ge 1$ and integral $s$ and $t,$
$$(X_s, X_{s+s_1}, \ldots, X_{s+s_{n-1}}) \sim (X_t, X_{t+s_1}, \ldots, X_{t+s_{n-1}}).$$
But that is trivially the case due to the iid assumption on the $\epsilon_t.$ One merely substitutes "$\epsilon_{s-k}$" for "$\epsilon_{t-k}$" in the definition of the process $(X_t).$ | Is the MA($\infty$) process with i.i.d. noise strictly stationary?
This process is always strictly stationary by definition.
Recall that the process is (strictly) stationary when all $n$-variate distributions formed by selecting any pattern $(s_1,s_2,\ldots,s_{n-1})$ |
54,740 | Is the MA($\infty$) process with i.i.d. noise strictly stationary? | Strict stationarity does not imply weak stationarity, essentially because it is possible that the first two moments are not finite. If we add that
$V[\epsilon_t]=\sigma^2< \infty$
then strict stationarity implies weak stationarity as well.
It seems useful to note that stationarity is strongly related to ergodicity and, hence, to memory (this discussion can help: Stationarity and Ergodicity - links). You assume independence among the $\epsilon_t$, so any memory issue depends only on the $MA$ parameters. Note that for $MA(q)$ with finite $q$ no restrictions on the parameters are needed, while in the infinite case absolute summability of the parameters is needed. Moreover, stationarity is about moments and distributional form being unchanged over time; you assume identical distributions.
Keep in mind that if $\epsilon_t$ is also Gaussian, strict stationarity surely holds. However, it seems to me that under the iid condition that you invoke, strict stationarity is implied regardless of the distributional assumption. If finiteness of the variance is added, weak stationarity holds as well.
Your assumptions are very strong for time series, so it is not a surprise that strict stationarity holds.
I add some detail in order to make what I said above clearer. In the $MA(q)$ process we have that
$V[X_t] = \sigma^2 \displaystyle\sum_{k=0}^{q} a_k^2$
$COV(X_t,X_{t-s})= \sigma^2 \displaystyle\sum_{k=0}^{q-s} a_{k+s}a_k$; for ($1 \leq s \leq q$)
Under absolute summability of the coefficients, a necessary condition for stationarity (weak and/or strict), the above formulas can also be used in the $MA(\infty)$ case; both terms converge to finite quantities. Moreover, if the errors $(\epsilon_t)$ are not only iid but also Normal, the distribution of $X_t$ is Normal too (variance given above and mean $0$). All the possible joint distributions $(X_t,X_{t-1},\ldots,X_{t-s})$ are jointly Normal, with every term of the covariance matrix depending on the above formulas. If we shift the joint distribution by $j$ steps, we get $(X_{t+j},X_{t-1+j},\ldots,X_{t-s+j})$, but this distribution remains the same as before; there is no reason for it to change. The memory decays with $s$, and only this lag matters.
If we rule out the Gaussian assumption on $\epsilon_t$, we no longer know the form of the distributions of $X$, marginal or joint. However, there is still no reason for the joint distributions of $X$ to change under a time shift (the iid assumption matters here), therefore the process remains strictly stationary (and, considering the finiteness of $\sigma^2$, weak stationarity holds as well).
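A small simulation check of the moment formulas above (my own sketch; the coefficients, sample size, and $q=2$ are arbitrary choices):
set.seed(7)
a <- c(1, 0.5, -0.3)                    # a_0, a_1, a_2, with sigma^2 = 1
n <- 1e5
eps <- rnorm(n + 2)                     # iid noise
X <- a[1] * eps[3:(n + 2)] + a[2] * eps[2:(n + 1)] + a[3] * eps[1:n]
c(var_theory = sum(a^2), var_sim = var(X))
c(cov1_theory = a[1] * a[2] + a[2] * a[3],
  cov1_sim    = cov(X[-1], X[-length(X)]))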
As counter example we can consider the case where identically distribution among $\epsilon_t$ not hold; more in particular it change at any realization. So, even if the moments above holds yet, we cannot find two joint distribution that share exactly the same form. Therefore strict stationarity clearly not hold, however we have to note that weak stationarity hold yet. This fact can happen under standard white noise condition for $\epsilon_t$. | Is the MA($\infty$) process with i.i.d. noise strictly stationary? | Strict stationarity do not imply weak essentially because is possible that the first two moments are not finite. If we add that
$V[\epsilon_t]=\sigma^2< \infty$
strictly stationarity imply weak also.
| Is the MA($\infty$) process with i.i.d. noise strictly stationary?
Strict stationarity do not imply weak essentially because is possible that the first two moments are not finite. If we add that
$V[\epsilon_t]=\sigma^2< \infty$
strictly stationarity imply weak also.
Seems me useful to note that stationarity is strongly related to ergodicity and, then, memory (this discussion can help: Stationarity and Ergodicity - links). You assume independence among $\epsilon_t$, so any memory problem depend only from the $MA$ parameters. Note that in $MA(q)$, for finite q case, parameters restrictions do not need, for infinite case absolute summability of parameters need. Moreover stationarity deal with unchangeable moments and distributional form. You assume identicity in distribution.
Keep in mind that if $\epsilon_t$ in also Gaussian, strict stationarity surely hold. However It seem me that under iid condition, that you invoke, strict stationarity is implied regardless distributional assumption. Considering that the finiteness of variance are added, also weak stationarity hold.
Your assumptions are very strong for time series. Is not a surprise that strict stationarity hold.
I add some detail in order to rend more clear what I said above. In the $MA(q)$ process we have that
$V[X_t] = \sigma^2 \displaystyle\sum_{k=0}^{q} a_k^2$
$COV(X_t,X_{t-s})= \sigma^2 \displaystyle\sum_{k=0}^{q-s} a_{k+s}a_k$; for ($1 \leq s \leq q$)
under absolute summability of coefficients, necessary condition for stationarity (weak and/or strict), the above formulas can be used also in the $MA(\infty)$ case; both term converge to a finite quantities. Moreover if the errors $(\epsilon_t)$ are not only iid but also Normal, the distribution of $X_t$ is Normal too (variance given above and mean $0$). All the possible joint distributions $(X_t,X_{t-1},…,X_{t-s})$ are jointly Normal, with any single terms of the covariance matrix that depend of the above formulas. If we shift the joint of $j$ step, we have $(X_{t+j},X_{t-1+j},…,X_{t-s+j})$ but this distribution remain the previous. There are no reason for its modifications. The memory decays with $s$, only this term matters.
If we rule out the Gaussian assumption among $\epsilon_t$, we don’t known more the form of the distributions, marginals and joints, of $X$ too. However there is no reason because the joint distributions of $X$ have to change under shift, $iid$ assumption matters here, therefore the process remain strict stationary (considering the finiteness of $\sigma^2$ also weak stationarity hold).
As counter example we can consider the case where identically distribution among $\epsilon_t$ not hold; more in particular it change at any realization. So, even if the moments above holds yet, we cannot find two joint distribution that share exactly the same form. Therefore strict stationarity clearly not hold, however we have to note that weak stationarity hold yet. This fact can happen under standard white noise condition for $\epsilon_t$. | Is the MA($\infty$) process with i.i.d. noise strictly stationary?
Strict stationarity do not imply weak essentially because is possible that the first two moments are not finite. If we add that
$V[\epsilon_t]=\sigma^2< \infty$
strictly stationarity imply weak also.
|
54,741 | Visualizing the hinge loss and 0-1 loss | The x-axis is the score output from a classifier, often interpreted as the estimated/predicted log-odds. The y-axis is the loss for a single datapoint with true label $y = 1$.
In notation, if we denote the score output from the classifier as $\hat s$, the plots are the graphs of the functions:
$$ f(\hat s) = \text{Zero-One-Loss}(\hat s, 1) $$
$$ f(\hat s) = \text{Hinge-Loss}(\hat s, 1) $$ | Visualizing the hinge loss and 0-1 loss | The x-axis is the score output from a classifier, often interpreted as the estimated/predicted log-odds. The y-axis is the loss for a single datapoint with true label $y = 1$.
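A quick way to reproduce such a plot (my own sketch in R; the score range is an arbitrary choice):
s <- seq(-3, 3, length.out = 601)
zero_one <- ifelse(s < 0, 1, 0)   # 0-1 loss for y = 1: wrong sign costs 1
hinge    <- pmax(0, 1 - s)        # hinge loss for y = 1: max(0, 1 - s)
plot(s, hinge, type = "l", lwd = 2, col = "red", xlab = "score for y = 1", ylab = "loss")
lines(s, zero_one, lwd = 2, col = "blue")
legend("topright", legend = c("hinge", "0-1"), col = c("red", "blue"), lwd = 2)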
In notation, if we denot | Visualizing the hinge loss and 0-1 loss
The x-axis is the score output from a classifier, often interpreted as the estimated/predicted log-odds. The y-axis is the loss for a single datapoint with true label $y = 1$.
In notation, if we denote the score output from the classifier as $\hat s$, the plots are the graphs of the functions:
$$ f(\hat s) = \text{Zero-One-Loss}(\hat s, 1) $$
$$ f(\hat s) = \text{Hinge-Loss}(\hat s, 1) $$ | Visualizing the hinge loss and 0-1 loss
The x-axis is the score output from a classifier, often interpreted as the estimated/predicted log-odds. The y-axis is the loss for a single datapoint with true label $y = 1$.
In notation, if we denot |
54,742 | How to include an interaction with sex | Algebra lights the way.
The purpose of an "interaction" between a binary variable like gender and another variable (let's just call it "$X$") is to model the possibility that how a response (call it "$Y$") is associated with $X$ may depend on the binary variable. Specifically, it allows for the slope (aka coefficient) of $X$ to vary with gender.
The desired model, without reference to how the binary variable might be encoded, therefore is
$$\eqalign{
E[Y\mid \text{Male}, X] &= \phi(\alpha + \beta_{\text{Male}} X) \\
E[Y\mid \text{Female}, X] &= \phi(\alpha + \beta_{\text{Female}} X).
}\tag{*}$$
for some function $\phi.$
One way--by far the commonest--to express this model with a single formula is to create a variable "$Z$" that indicates the gender: either $Z=1$ for males and $Z=0$ for females (the indicator function of $\text{Male}$ in the set $\{\text{Male},\text{Female}\}$) or the other way around with $Z=1$ for females and $Z=0$ for males (the indicator function of $\text{Female}$). But there are other ways, of which the most general is to
encode males as some number $Z=m$ and some different number $Z=f$ for females.
(Because $m\ne f,$ division by $m-f$ below is allowable.)
However we encode the binary variable, we may now express the model in a single formula as
$$E[Y\mid Z, X] = \phi(\alpha + \beta X + \gamma Z X)$$
because, setting
$$\gamma = \frac{\beta_{\text{Male}} - \beta_{\text{Female}}}{m - f}\tag{**}$$
and
$$\beta = \beta_{\text{Male}} - \gamma m = \beta_{\text{Female}} - \gamma f,$$
for males with $Z=m$ this gives
$$\phi(\alpha + \beta X + \gamma Z X) = \phi(\alpha + (\beta + \gamma m)X) = \phi(\alpha + \beta_{\text{Male}}X)$$
and for females with $Z=f,$
$$\phi(\alpha + \beta X + \gamma Z X) = \phi(\alpha + (\beta + \gamma f)X) = \phi(\alpha + \beta_{\text{Female}}X)$$
which is exactly model $(*).$
The expression for $\gamma$ in $(**)$ is crucial: it shows how to interpret the model.
For instance, when using the indicator for males, $m-f = 1-0$ and $\gamma$ is the difference between the male and female slopes in the model. When using the indicator for females, $m-f = 0-1 = -1$ and now $\gamma$ is the difference computed in the other direction: between the female and male slopes.
In the example of the question where $m=1$ and $f=-1,$ now
$$\gamma = \frac{\beta_{\text{Male}} - \beta_{\text{Female}}}{m - f} = \frac{\beta_{\text{Male}} - \beta_{\text{Female}}}{2} \tag{**}$$
is half the difference in slopes.
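A small numerical illustration of this equivalence (my own sketch; the slope values are made up): coding the binary variable as 0/1 versus -1/+1 changes the meaning of the interaction coefficient but not the fitted model.
set.seed(1)
n <- 200
male <- rbinom(n, 1, 0.5)              # 1 = male, 0 = female
x <- rnorm(n)
y <- 1 + ifelse(male == 1, 0.8, 0.3) * x + rnorm(n, sd = 0.5)
z01 <- male                            # indicator coding
zpm <- ifelse(male == 1, 1, -1)        # +1 / -1 coding
fit01 <- lm(y ~ x + z01:x)
fitpm <- lm(y ~ x + zpm:x)
coef(fit01)                            # interaction coefficient near 0.5, the slope difference
coef(fitpm)                            # interaction coefficient near 0.25, half the difference
all.equal(fitted(fit01), fitted(fitpm))  # TRUE: identical fitted values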
Despite these differences in interpretation of the coefficient $\gamma,$ these are all equivalent models because they are all identical to $(*).$ | How to include an interaction with sex | Algebra lights the way.
The purpose of an "interaction" between a binary variable like gender and another variable (let's just call it "$X$") is to model the possibility that how a response (call it " | How to include an interaction with sex
Algebra lights the way.
The purpose of an "interaction" between a binary variable like gender and another variable (let's just call it "$X$") is to model the possibility that how a response (call it "$Y$") is associated with $X$ may depend on the binary variable. Specifically, it allows for the slope (aka coefficient) of $X$ to vary with gender.
The desired model, without reference to how the binary variable might be encoded, therefore is
$$\eqalign{
E[Y\mid \text{Male}, X] &= \phi(\alpha + \beta_{\text{Male}} X) \\
E[Y\mid \text{Female}, X] &= \phi(\alpha + \beta_{\text{Female}} X).
}\tag{*}$$
for some function $\phi.$
One way--by far the commonest--to express this model with a single formula is to create a variable "$Z$" that indicates the gender: either $Z=1$ for males and $Z=0$ for females (the indicator function of $\text{Male}$ in the set $\{\text{Male},\text{Female}\}$) or the other way around with $Z=1$ for females and $Z=0$ for males (the indicator function of $\text{Female}$). But there are other ways, of which the most general is to
encode males as some number $Z=m$ and some different number $Z=f$ for females.
(Because $m\ne f,$ division by $m-f$ below is allowable.)
However we encode the binary variable, we may now express the model in a single formula as
$$E[Y\mid X] = \phi(\alpha + \beta Y + \gamma Z X)$$
because, setting
$$\gamma = \frac{\beta_{\text{Male}} - \beta_{\text{Female}}}{m - f}\tag{**}$$
and
$$\beta = \beta_{\text{Male}} - \gamma m = \beta_{\text{Female}} - \gamma f,$$
for males with $Z=m$ this gives
$$\phi(\alpha + \beta X + \gamma Z X) = \phi(\alpha + (\beta + \gamma m)X) = \phi(\alpha + \beta_{\text{Male}})X$$
and for females with $Z=f,$
$$\phi(\alpha + \beta X + \gamma Z X) = \phi(\alpha + (\beta + \gamma fX) = \phi(\alpha + \beta_{\text{Female}})X$$
which is exactly model $(*).$
The expression for $\gamma$ in $(**)$ is crucial: it shows how to interpret the model.
For instance, when using the indicator for males, $m-f = 1-0$ and $\gamma$ is the difference between the male and female slopes in the model. When using the indicator for females, $m-f = 0-1 = -1$ and now $\gamma$ is the difference computed in the other direction: between the female and male slopes.
In the example of the question where $m=1$ and $f=-1,$ now
$$\gamma = \frac{\beta_{\text{Male}} - \beta_{\text{Female}}}{m - f} = \frac{\beta_{\text{Male}} - \beta_{\text{Female}}}{2} \tag{**}$$
is half the difference in slopes.
Despite these differences in interpretation of the coefficient $\gamma,$ these are all equivalent models because they are all identical to $(*).$ | How to include an interaction with sex
Algebra lights the way.
The purpose of an "interaction" between a binary variable like gender and another variable (let's just call it "$X$") is to model the possibility that how a response (call it " |
54,743 | How to include an interaction with sex | If you have an interaction with sex, then this means that you create a new variable that did not exist before.
For instance:
let the outcome (dependent variable) be the probability of a baby
let sex be a variable which is either 0 or 1
and let's say we interact it with condom use which is either 0 or 1 as well.
Then you could have some table like the following (I make these numbers up as an example but try to approach realistic values):
Probability of having a baby
Yes Sex No Sex
Unprotected 0.50 0
Condom 0.01 0
So this could be modelled with two fixed effects like
$$y = a + b\,\text{sex} + c\,\text{unprotected}$$
But you won't get it right. The above formula will give
Yes Sex No Sex
Unprotected a+b+c a+c
Condom a+b a
This has only three parameters to determine four values. If you try to make the value for unprotected sex equal to 0.5 by giving more weight to b or c, then protected sex or no sex will get too much weight.
When you add an interaction term then you get
$$y = a + b\,\text{sex} + c\,\text{unprotected} + d\,(\text{sex}\times\text{unprotected})$$
Yes Sex No Sex
Unprotected a+b+c+d a+c
Condom a+b a
So that is how your interaction with sex is helping to get babies.
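Here is a small sketch of the same point in R (my own code, reusing the made-up numbers from the table): with only the two main effects the four cells cannot all be matched, while adding the interaction reproduces them exactly.
d <- expand.grid(sex = c(1, 0), unprotected = c(1, 0))
d$p <- c(0.50, 0, 0.01, 0)   # rows: (sex, unprotected) = (1,1), (0,1), (1,0), (0,0)
fit_main <- lm(p ~ sex + unprotected, data = d)
fit_int  <- lm(p ~ sex * unprotected, data = d)
cbind(observed = d$p, main_only = fitted(fit_main), with_interaction = fitted(fit_int))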
You can indeed give different values to sex; this will change the weights. Also, when you change the interaction term and where you place the intercept, things get mixed up: it can change how significant the value of the intercept is and, depending on your interaction coding, the values of the fixed effects change as well.
But for the total model prediction, the predicted probability of getting a baby, it does not matter. The individual values for the sexes and the interaction, and their significance, should not be over-interpreted; an analysis of variance is better.
So once you have that settled, the placement of the intercept becomes just a matter of convenience. I like to do as you do and put it in between men and women by giving men and women equal but opposite weights, -1 and +1. In that case, the factors show the differences relative to a point that lies in between men and women.
Quickie:
The model is equivalent in the prediction of the means as long as the column space remains the same (this is the case in your example when you include an intercept term), but particular statistical tests for coefficients may change.
See also
Here is a case where changing the position of the intercept changes the significance of the intercept term. If we were to add an intercept, this would also change the fixed-effect terms:
Both variables of my GLMM output are significant. Don't know how to interpret it?
Here is a similar example where the regressors are changed by adding one column to the other columns. The column space remains the same and the solution is the same, but the coefficients change, and so do their z/t-scores and the related significance (tests like ANOVA, however, do not change): Adding a Constant to Every Column of X (OLS)
An example where centering the columns has an effect on the significance of parameters when tested with a z- or t-test (your change from 0,1 to -1,1 is also a sort of centering): p-values change after mean centering with interaction terms. How to test for significance?
One more example that shows that centering and rescaling of columns in the the design matrix results in effectively the same model (if the intercept is included as well) with the same results for anova and expressions like $R^2$. But.. the values for coefficient will be different and related tests like z/t-tests will be different Standardization of variables and collinearity | How to include an interaction with sex | If you have an interaction with sex, then this means that you create a new variable that did not exist before.
For instance:
let the outcome (dependent variable) be the probability of a baby
let sex | How to include an interaction with sex
If you have an interaction with sex, then this means that you create a new variable that did not exist before.
For instance:
let the outcome (dependent variable) be the probability of a baby
let sex be a variable which is either 0 or 1
and let's say we interact it with condom use which is either 0 or 1 as well.
Then you could have some table like the following (I make these numbers up as an example but try to approach realistic values):
Probability of having a baby
Yes Sex No Sex
Unprotected 0.50 0
Condom 0.01 0
So this could be modelled with two fixed effects like
$$\text{$y = a + b$ sex $+c$ unprotected}$$
But you won't get it right. The above formula will give
Yes Sex No Sex
Unprotected a+b+c a+c
Condom a+b a
This has only three variables to determine 4 values. If you try to make unprotected sex equal to 0.5 by giving some weight to b or c then you get that protected sex or no sex will have too much weight.
When you add an interaction term then you get
$$\text{$y = a + b$ sex $+c$ unprotected $+ d$ sex and unprotected}$$
Yes Sex No Sex
Unprotected a+b+c+d a+c
Condom a+b a
So that is how your interaction with sex is helping to get babies.
You can give indeed different values to sex, this will change the weights. Also when you change the interaction term and where you intercept, then things get mixed up. It can change how significant the value of the intercept is, and depending on your interaction, the value of the fixed model effects change as well.
But for the total model prediction, the prediction for the probability of whether you get a baby, it does not matter. The values of the sexes and the interaction, their significance, should not be measured. An analysis of variance is better.
So when you got that fixed, then the point of the intercept becomes just a matter of convenience. I like to do like you and put it in between men and women by giving men and women equal, but opposite, weight -1 and +1. In that case, the factors will show the differences relative to a place that is in between men and women.
Quickie:
The model is equivalent in the prediction of the means as long as the column space remains the same (this is the case in your example when you include an intercept term), but particular statistical tests for coefficients may change.
See also
Here is a case discussed where changing the position of the intercept changes the significance of the intercept term. When we would add an intercept then this would also change the fixed effect terms
Both variables of my GLMM output are significant. Don't know how to interpret it?
Here is a similar example where the regressors are changed by adding a column to the other columns. The column space remains the same and the solution is the same... but, coefficients change, and also their z/t-scores and related significance (test like anova however do not change) Adding a Constant to Every Column of X (OLS)
Example where centering the columns is having an effect significance of parameters when tested with z or t-test (your change from 0,1 to -1,1 is also a sort of centering): p-values change after mean centering with interaction terms. How to test for significance?
One more example that shows that centering and rescaling of columns in the the design matrix results in effectively the same model (if the intercept is included as well) with the same results for anova and expressions like $R^2$. But.. the values for coefficient will be different and related tests like z/t-tests will be different Standardization of variables and collinearity | How to include an interaction with sex
If you have an interaction with sex, then this means that you create a new variable that did not exist before.
For instance:
let the outcome (dependent variable) be the probability of a baby
let sex |
54,744 | How to linearize a non linear function | The following argument indicates how to address such questions generally.
Let's suppose there is a vector parameter $\theta\in\Theta\subset\mathbb{R}^p$ and a one-to-one differentiable reparameterization $\alpha = h(\theta)$ (where $h:\mathbb{R}^p\to\mathbb{R}$) and, if necessary, a re-expression of the variable $x$ in terms of a vector variable $y\in \mathbb{R}^p$ for which $x=g(y)$, so that the formula becomes a bilinear function of $y,\theta;$ that is,
$$\sum_{j=1}^p y_j\,\theta_j = \alpha(\theta) + \alpha(\theta)^2 x(y).$$
Differentiating both sides with respect to a fixed $\theta_i$ gives
$$y_i = \left(1 + 2\alpha(\theta)\, x(y)\right)\frac{\partial\alpha}{\partial \theta_i}.$$
The left hand side does not vary with $\theta_i,$ but the right hand side does unless $\alpha$ is (piecewise) constant as a function of $\theta_i.$ But in that case, unless the set in which $\alpha$ is assumed to lie contains no intervals at all, $\alpha$ cannot everywhere be a one-to-one function, showing it is not a reparameterization, QED.
It is instructive to see what happens when taking this approach with a linearizable formula. When we analyze the first example, we obtain $p$ equations
$$y_i = \frac{\partial (\alpha\beta)}{\partial \theta_i}+\frac{\partial (\beta^2)}{\partial \theta_i} x.$$
Because the left side does not vary with $\theta,$ we conclude that either $\beta^2$ is a (positive) linear function of $\theta_i$ and $\alpha\beta$ is a linear function of $\theta_i$ (in which case $y_i$ is an affine function of $x$) or $\beta^2$ does not depend on $\theta_i$ and $\alpha\beta$ is a linear function of $\theta_i.$ The simplest solution requires $p=2$ and takes $\beta^2 = \theta_1 \ge 0$ and $\alpha\beta=\theta_2,$ as shown in the question, but there are plenty of others.
One, with $p=3$ parameters, is $y_1=-3x,$ $y_2=1,$ $y_3=-4,$ $\beta^2 = -3\theta_1 \le 0,$ and $\alpha\beta = \theta_2 - 4\theta_3.$ You can check that $$\alpha\beta + \beta^2 x = (\theta_2 - 4\theta_3) + (-3\theta_1)(x) = \theta_1 y_1 + \theta_2 y_2 + \theta_3 y_3.$$ (The parameters in this solution are not identifiable, however.)
Another two-parameter solution is $y_1=1-x,$ $y_2=1,$ $\beta^2=-\theta_1,$ and $\alpha\beta=\theta_1 + \theta_2.$ You can check that $$\alpha \beta + \beta^2 x = (\theta_1 + \theta_2) + (-\theta_1)(1-y_1) = \theta_1 y_1 + \theta_2 y_2.$$ | How to linearize a non linear function | The following argument indicates how to address such questions generally.
Let's suppose there is a vector parameter $\theta\in\Theta\subset\mathbb{R}^p$ and a one-to-one differentiable reparameterizat | How to linearize a non linear function
The following argument indicates how to address such questions generally.
Let's suppose there is a vector parameter $\theta\in\Theta\subset\mathbb{R}^p$ and a one-to-one differentiable reparameterization $\alpha = h(\theta)$ (where $h:\mathbb{R}^p\to\mathbb{R}$) and, if necessary, a re-expression of the variable $x$ in terms of a vector variable $y\in \mathbb{R}^p$ for which $x=g(y)$, so that the formula becomes a bilinear function of $y,\theta;$ that is,
$$\sum_{j=1}^p y_j\,\theta_j = \alpha(\theta) + \alpha(\theta)^2 x(y).$$
Differentiating both sides with respect to a fixed $\theta_i$ gives
$$y_i = \left(1 + 2\alpha(\theta)\, x(y)\right)\frac{\partial\alpha}{\partial \theta_i}.$$
The left hand side does not vary with $\theta_i,$ but the right hand side does unless $\alpha$ is (piecewise) constant as a function of $\theta_i.$ But in that case, unless the set in which $\alpha$ is assumed to lie contains no intervals at all, $\alpha$ cannot everywhere be a one-to-one function, showing it is not a reparameterization, QED.
It is instructive to see what happens when taking this approach with a linearizable formula. When we analyze the first example, we obtain $p$ equations
$$y_i = \frac{\partial (\alpha\beta)}{\partial \theta_i}+\frac{\partial (\beta^2)}{\partial \theta_i} x.$$
Because the left side does not vary with $\theta,$ we conclude that either $\beta^2$ is a (positive) linear function of $\theta_i$ and $\alpha\beta$ is a linear function of $\theta_i$ (in which case $y_i$ is an affine function of $x$) or $\beta^2$ does not depend on $\theta_i$ and $\alpha\beta$ is a linear function of $\theta_i.$ The simplest solution requires $p=2$ and takes $\beta^2 = \theta_1 \ge 0$ and $\alpha\beta=\theta_2,$ as shown in the question, but there are plenty of others.
One, with $p=3$ parameters, is $y_1=-3x,$ $y_2=1,$ $y_3=-4,$ $\beta^2 = -3\theta_1 \le 0,$ and $\alpha\beta = \theta_2 - 4\theta_3.$ You can check that $$\alpha\beta + \beta^2 x = (\theta_2 - 4\theta_3) + (-3\theta_1)(x) = \theta_1 y_1 + \theta_2 y_2 + \theta_3 y_3.$$ (The parameters in this solution are not identifiable, however.)
Another two-parameter solution is $y_1=1-x,$ $y_2=1,$ $\beta^2=-\theta_1,$ and $\alpha\beta=\theta_1 + \theta_2.$ You can check that $$\alpha \beta + \beta^2 x = (\theta_1 + \theta_2) + (-\theta_1)(1-y_1) = \theta_1 y_1 + \theta_2 y_2.$$ | How to linearize a non linear function
The following argument indicates how to address such questions generally.
Let's suppose there is a vector parameter $\theta\in\Theta\subset\mathbb{R}^p$ and a one-to-one differentiable reparameterizat |
54,745 | How to linearize a non linear function | Your first example is a model with two effective parameters:$$y=\beta_0+\beta_x x+\varepsilon$$
You have two degrees of freedom $\alpha,\beta$, so you were able to linearize the model. Having the same degrees of freedom is not a sufficient condition, but it is necessary. I show the sufficient conditions further in the answer.
Your second example has two parameters too, the same as the first, but due to the constraint $\beta_x=\beta_0^2$ you have only one degree of freedom $\alpha$. Hence, you can't linearize this model.
Whether the transformation is linear or nonlinear, the degrees of freedom must be preserved! You can't expand the one-dimensional space of $\alpha$ into the two-dimensional space of $\beta_0,\beta_x$.
Sufficient condition and pseudo-proof
You have a model $y=g(a,b)+h(a,b)x+\varepsilon$. It may be possible to transform it into $y=c+dx+\varepsilon$. Subtract one from another:
$$0=g(a,b)-c+(h(a,b)-d)x$$
Since the estimates $(\hat c,\hat d)$ are random variables that are a function of a random sample $(x,y)$, this can only work when:
$$g(\hat a,\hat b)=\hat c\\h(\hat a,\hat b)=\hat d$$
It's a system of algebraic equations that may have a unique solution. So, you might be able to map $(\hat c,\hat d)$ back to $(\hat a,\hat b)$. The sufficient condition is that the system has a solution.
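As a concrete check of the first, linearizable example (my own sketch; the parameter values are arbitrary), fit the linear form and map $(\hat c,\hat d)$ back to the original parameters:
set.seed(42)
a <- 2; b <- 1.5
x <- runif(100)
y <- a * b + b^2 * x + rnorm(100, sd = 0.1)
fit <- lm(y ~ x)
d_hat <- coef(fit)[["x"]]              # estimates b^2
c_hat <- coef(fit)[["(Intercept)"]]    # estimates a * b
b_hat <- sqrt(d_hat)                   # take the positive root
a_hat <- c_hat / b_hat
c(a_hat = a_hat, b_hat = b_hat)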
Example: imagine that your equation was $y=g(a)+h(a)x+\varepsilon$. There's only one degree of freedom $a$ despite two functions $g,h$, so we have the following two to satisfy simultaneously:
$$g(\hat a)=\hat c\\h(\hat a)=\hat d$$
This is generally impossible to satisfy, unless somehow $\hat c=\hat d$ always. In other words you'd have to deal with truly one-dimensional system: $y=c(1+x)+\varepsilon$, essentially a regression through origin $y=x'+\varepsilon$ | How to linearize a non linear function | Your first example is a model with two effective parameters:$$y=\beta_0+\beta_x x+\varepsilon$$
You have two degrees of freedom $\alpha,\beta$ so you were able to linearize the model. Having the same | How to linearize a non linear function
Your first example is a model with two effective parameters:$$y=\beta_0+\beta_x x+\varepsilon$$
You have two degrees of freedom $\alpha,\beta$ so you were able to linearize the model. Having the same degrees of freedom is not a sufficient condition but it's necessary. I show thesufficient conditions further in answer.
Your second example has two parameters too, the same as the first, but due to the constraint $\beta_x=\beta_0^2$ you have only one degree of freedom $\alpha$. Hence, you can't linearize this model.
It must be necessary that whether it is a linear or nonlinear transformation the degrees of freedom must be preserved! You can't expand one dimensional space $\alpha$ into two dimensional space $\beta_0,\beta_x$.
Sufficient condition and psudo-proof
You have a model $y=g(a,b)+h(a,b)x+\varepsilon$. It may be possible to transform it into $y=c+dx+\varepsilon$. Subtract one from another:
$$0=g(a,b)-c+(h(a,b)-d)x$$
Since, estimates $(\hat c,\hat d)$ are random variables that are a function of a random sample $(x,y)$ this can only work when:
$$g(\hat a,\hat b)=\hat c\\h(\hat a,\hat b)=\hat d$$
It's a system of algebraic equations that may have a unique solution. So, you might be able to map $(\hat c,\hat d)$ back to $(\hat a,\hat b)$. The sufficient condition is that the system has a solution.
Example: imagine that your equation was $y=g(a)+h(a)x+\varepsilon$. There's only one degree of freedom $a$ despite two functions $g,h$, so we have the following two to satisfy simultaneously:
$$g(\hat a)=\hat c\\h(\hat a)=\hat d$$
This is generally impossible to satisfy, unless somehow $\hat c=\hat d$ always. In other words you'd have to deal with truly one-dimensional system: $y=c(1+x)+\varepsilon$, essentially a regression through origin $y=x'+\varepsilon$ | How to linearize a non linear function
Your first example is a model with two effective parameters:$$y=\beta_0+\beta_x x+\varepsilon$$
You have two degrees of freedom $\alpha,\beta$ so you were able to linearize the model. Having the same |
54,746 | What is the PDF of a Normal convolved with a Laplace | Let's work it out from first principles, beginning with the hard work of computing a convolution.
As an auxiliary calculation, consider the distribution of $W=X+Y$ where $Y$ has an Exponential distribution with pdf $$f_Y(y) = e^{-y}\,\mathcal{I}(y\gt 0)$$ and $X$ has a Normal$(\mu,\sigma^2)$ distribution with pdf $f_X(x;\mu,\sigma) = \phi((x-\mu)/\sigma)/\sigma$ where $$\phi(z) = \frac{1}{\sqrt{2\pi}}\,e^{-z^2/2}$$ is the standard Normal pdf. The PDF of the sum is the convolution
$$f_W(w;\mu,\sigma) = \int_{-\infty}^\infty f_Y(y) f_X(w-y;\mu,\sigma)\,\mathrm{d}y = \int_0^\infty e^{-y} f_X(w-y;\mu,\sigma)\,\mathrm{d}y.$$
Substituting $\sigma z = w - y - \mu$ expresses this integral as
$$\eqalign{f_W(w;\mu,\sigma) &= e^{\mu-w}\,e^{\sigma^2/2}\int_{-\infty}^{(w-\mu)/\sigma} \phi(z-\sigma)\,\mathrm{d}z \\
&= e^{\mu-w+\sigma^2/2}\, \Phi\left(\frac{w-\mu}{\sigma}-\sigma\right)}\tag{1}$$
where $\Phi$ is the standard normal CDF,
$$\Phi(z) = \int_{-\infty}^z \phi(z)\,\mathrm{d}z.$$
The rest builds on this work and is relatively easy.
An asymmetric Laplace random variable $U$ is based on a mixture of a scaled exponential distribution and the negative of a scaled exponential distribution (potentially with a different scale, thereby making the mixture asymmetric). This mixture is then shifted by a specified amount. The amount of mixing is established to give the Laplace pdf a unique value at its peak--but this is unimportant.
One component of $U$ therefore can be expressed as $$U_+ = \alpha Y + \lambda$$ with a positive scale $\alpha$ and the other component as $$U_- = -\beta Y + \lambda$$ with a positive scale $\beta.$ (I apologize: I worked this out before realizing that my $\alpha$ is $1/\alpha$ in the paper and my $\beta$ is $1/\beta$ in the paper: in the end, after setting $\alpha=\beta,$ this won't matter.)
When we add $X = \sigma Z + \mu$ we obtain two components, of which the first is $$W_+ = U_+ + X = \alpha Y + \lambda + \sigma Z + \mu = \alpha\left(Y + \left[\frac{\sigma}{\alpha} Z + \frac{\lambda + \mu}{\alpha}\right]\right)$$ and the second is similarly written. To obtain its pdf, all we need to do is scale formula $(1)$ by $\alpha,$ giving
$$f_{W_+}(w;\mu,\sigma,\lambda,\alpha) = \frac{1}{\alpha}\,f_W\left(\frac{w}{\alpha};\frac{\lambda+\mu}{\alpha}, \frac{\sigma}{\alpha}\right).\tag{2}$$
Likewise, because
$$W_- = U_- + X = -\beta Y + \lambda + \sigma Z + \mu = -\beta \left(Y + \left[-\frac{\sigma}{\beta } Z - \frac{\lambda + \mu}{\beta }\right]\right)$$
and $-Z$ has the same distribution as $Z$, formula $(1)$ yields
$$\eqalign{f_{W_-}(w;\mu,\sigma,\lambda,\beta) &= \frac{1}{\beta }\,f_W\left(-\frac{w}{\beta };-\frac{\lambda+\mu}{\beta }, \frac{\sigma}{\beta }\right) \\
&= f_{W_+}(-w;-\mu,\sigma,-\lambda,\beta).}\tag{3}$$
The mixture pdf is
$$f_W(w;\mu,\sigma,\lambda,\alpha,\beta,p) = pf_{W_+}(w;\mu,\sigma,\lambda,\alpha) + (1-p) f_{W_-}(w;\mu,\sigma,\lambda,\beta).\tag{4}$$
Comments
For the Laplace-Normal distribution, use $p = \alpha / (\alpha + \beta).$ In your case $\alpha=\beta,$ which evidently "simplifies" $(4)$ a tiny bit--but a quick look at its component formulas $(2)$ and $(3)$ suggests there's not much one can do algebraically to reduce the amount of computation, so why bother?
Each of the components of the final formula $(4),$ as embodied in formulas $(1),$ $(2),$ and $(3)$ can be separately and flexibly implemented and separately tested. This makes for an easier and more reliable software implementation than attempting to combine them all into one monster combination of $\phi$ and $\Phi,$ as done in the referenced paper. As a bonus, important numerical improvements in the calculation can be implemented exactly where they are needed, making the code relatively easy to maintain. As an example, see how $f_W$ is implemented using logarithms (as f.1) in the code below.
Illustration
This plot compares a histogram of one million iid draws from an asymmetric Laplace-Normal distribution with pdf $f_W(w;4,0.5,-3,2,1,2/3)$ to a calculation based directly on formulas $(1) - (4):$
The agreement is a pretty good test.
Code
Here's the R code that generated this simulation and this plot.
n <- 1e6 # Size of simulation
mu <- 4
sigma <- 1/2
alpha <- 2
lambda <- -3
beta <- 1
#
# Generate data.
# set.seed(17)
X <- rnorm(n, mu, sigma)
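# Mixture: with probability alpha/(alpha + beta) scale the exponential by +alpha, otherwise by -beta; then shift by lambda.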
Y <- ifelse(runif(n, 0, alpha + beta) < alpha, alpha, -beta) * rexp(n) + lambda
W <- X + Y
#
# Plot their histogram.
#
hist(W, freq=FALSE, breaks=200, cex.main=1)
#
# Overplot the PDF.
#
f.1 <- function(w, mu=0, sigma=1) {
exp(mu - w + sigma^2/2 + pnorm((w - mu)/sigma - sigma, log=TRUE))
}
f.plus <- function(w, mu=0, sigma=1, lambda=0, alpha=1) {
f.1(w / alpha, (lambda + mu) / alpha, sigma / alpha) / alpha
}
f.minus <- function(w, mu=0, sigma=1, lambda=0, beta=1) {
f.plus(-w, -mu, sigma, -lambda, beta)
}
f <- function(w, mu=0, sigma=1, lambda=0, alpha=1, beta=1, p=1/2) {
p * f.plus(w, mu, sigma, lambda, alpha) + (1-p) * f.minus(w, mu, sigma, lambda, beta)
}
f.asymmetric <- function(y, mu=0, sigma=1, lambda=0, alpha=1, beta=1) {
f(y, mu, sigma, lambda, alpha, beta, alpha / (alpha + beta))
}
curve(f.asymmetric(x, mu, sigma, lambda, alpha, beta), add=TRUE, lwd=2, col="Red") | What is the PDF of a Normal convolved with a Laplace | Let's work it out from first principles, beginning with the hard work of computing a convolution.
As an auxiliary calculation, consider the distribution of $W=X+Y$ where $Y$ has an Exponential distrib | What is the PDF of a Normal convolved with a Laplace
Let's work it out from first principles, beginning with the hard work of computing a convolution.
As an auxiliary calculation, consider the distribution of $W=X+Y$ where $Y$ has an Exponential distribution with pdf $$f_Y(y) = e^{-y}\,\mathcal{I}(y\gt 0)$$ and $X$ has a Normal$(\mu,\sigma^2)$ distribution with pdf $f_X(x;\mu,\sigma) = \phi((x-\mu)/\sigma)/\sigma$ where $$\phi(z) = \frac{1}{\sqrt{2\pi}}\,e^{-z^2/2}$$ is the standard Normal pdf. The PDF of the sum is the convolution
$$f_W(w;\mu,\sigma) = \int_{-\infty}^\infty f_Y(y) f_X(w-y;\mu,\sigma)\,\mathrm{d}y = \int_0^\infty e^{-y} f_X(w-y;\mu,\sigma)\,\mathrm{d}y.$$
Substituting $\sigma z = w - y - \mu$ expresses this integral as
$$\eqalign{f_W(w;\mu,\sigma) &= e^{\mu-w}\,e^{\sigma^2/2}\int_{-\infty}^{(w-\mu)/\sigma} \phi(z-\sigma)\,\mathrm{d}z \\
&= e^{\mu-w+\sigma^2/2}\, \Phi\left(\frac{w-\mu}{\sigma}-\sigma\right)}\tag{1}$$
where $\Phi$ is the standard normal CDF,
$$\Phi(z) = \int_{-\infty}^z \phi(z)\,\mathrm{d}z.$$
The rest builds on this work and is relatively easy.
An asymmetric Laplace random variable $U$ is based on a mixture of a scaled exponential distribution and the negative of a scaled exponential distribution (potentially with a different scale, thereby making the mixture asymmetric). This mixture is then shifted by a specified amount. The amount of mixing is established to give the Laplace pdf a unique value at its peak--but this is unimportant.
One component of $U$ therefore can be expressed as $$U_+ = \alpha Y + \lambda$$ with a positive scale $\alpha$ and the other component as $$U_- = -\beta Y + \lambda$$ with a positive scale $\beta.$ (I apologize: I worked this out before realizing that my $\alpha$ is $1/\alpha$ in the paper and my $\beta$ is $1/\beta$ in the paper: in the end, after setting $\alpha=\beta,$ this won't matter.)
When we add $X = \sigma Z + \mu$ we obtain two components, of which the first is $$W_+ = U_+ + X = \alpha Y + \lambda + \sigma Z + \mu = \alpha\left(Y + \left[\frac{\sigma}{\alpha} Z + \frac{\lambda + \mu}{\alpha}\right]\right)$$ and the second is similarly written. To obtain its pdf, all we need to do is scale formula $(1)$ by $\alpha,$ giving
$$f_{W_+}(w;\mu,\sigma,\lambda,\alpha) = \frac{1}{\alpha}\,f_W\left(\frac{w}{\alpha};\frac{\lambda+\mu}{\alpha}, \frac{\sigma}{\alpha}\right).\tag{2}$$
Likewise, because
$$W_- = U_- + X = -\beta Y + \lambda + \sigma Z + \mu = -\beta \left(Y + \left[-\frac{\sigma}{\beta } Z + \frac{\lambda + \mu}{\beta }\right]\right)$$
and $-Z$ has the same distribution as $Z$, formula $(1)$ yields
$$\eqalign{f_{W_-}(w;\mu,\sigma,\lambda,\beta) &= \frac{1}{\beta }\,f_W\left(-\frac{w}{\beta };-\frac{\lambda+\mu}{\beta }, \frac{\sigma}{\beta }\right) \\
&= f_{W_+}(-w;-\lambda,\beta,-\mu,\sigma).}\tag{3}$$
The mixture pdf is
$$f_W(w;\mu,\sigma,\lambda,\alpha,\beta,p) = pf_{W_+}(w;\mu,\sigma,\lambda,\alpha) + (1-p) f_{W_-}(w;\mu,\sigma,\lambda,\beta).\tag{4}$$
Comments
For the Laplace-Normal distribution, use $p = \alpha / (\alpha + \beta).$ In your case $\alpha=\beta,$ which evidently "simplifies" $(4)$ a tiny bit--but a quick look at its component formulas $(2)$ and $(3)$ suggests there's not much one can do algebraically to reduce the amount of computation, so why bother?
Each of the components of the final formula $(4),$ as embodied in formulas $(1),$ $(2),$ and $(3)$ can be separately and flexibly implemented and separately tested. This makes for an easier and more reliable software implementation than attempting to combine them all into one monster combination of $\phi$ and $\Phi,$ as done in the referenced paper. As a bonus, important numerical improvements in the calculation can be implemented exactly where they are needed, making the code relatively easy to maintain. As an example, see how $f_W$ is implemented using logarithms (as f.1) in the code below.
Illustration
This plot compares a histogram of one million iid draws from an asymmetric Laplace-Normal distribution with pdf $f_W(w;4,0.5,-3,2,1,2/3)$ to a calculation based directly on formulas $(1) - (4):$
The agreement is a pretty good test.
Code
Here's the R code that generated this simulation and this plot.
n <- 1e6 # Size of simulation
mu <- 4
sigma <- 1/2
alpha <- 2
lambda <- -3
beta <- 1
#
# Generate data.
# set.seed(17)
X <- rnorm(n, mu, sigma)
Y <- ifelse(runif(n, 0, alpha + beta) < alpha, alpha, -beta) * rexp(n) + lambda
W <- X + Y
#
# Plot their histogram.
#
hist(W, freq=FALSE, breaks=200, cex.main=1)
#
# Overplot the PDF.
#
f.1 <- function(w, mu=0, sigma=1) {
exp(mu - w + sigma^2/2 + pnorm((w - mu)/sigma - sigma, log=TRUE))
}
f.plus <- function(w, mu=0, sigma=1, lambda=0, alpha=1) {
f.1(w / alpha, (lambda + mu) / alpha, sigma / alpha) / alpha
}
f.minus <- function(w, mu=0, sigma=1, lambda=0, beta=1) {
f.plus(-w, -mu, sigma, -lambda, beta)
}
f <- function(w, mu=0, sigma=1, lambda=0, alpha=1, beta=1, p=1/2) {
p * f.plus(w, mu, sigma, lambda, alpha) + (1-p) * f.minus(w, mu, sigma, lambda, beta)
}
f.asymmetric <- function(y, mu=0, sigma=1, lambda=0, alpha=1, beta=1) {
f(y, mu, sigma, lambda, alpha, beta, alpha / (alpha + beta))
}
curve(f.asymmetric(x, mu, sigma, lambda, alpha, beta), add=TRUE, lwd=2, col="Red") | What is the PDF of a Normal convolved with a Laplace
Let's work it out from first principles, beginning with the hard work of computing a convolution.
As an auxiliary calculation, consider the distribution of $W=X+Y$ where $Y$ has an Exponential distrib |
54,747 | Difference Between Scipy.optimize.least_squares and Scipy.optimize.curve_fit | There is no fundamental difference between curve_fit and least_squares. Moreover, if you don't use method = 'lm', they do exactly the same thing. You can check it in the source code of the curve_fit function on GitHub:
if method == 'lm':
...
res = leastsq(func, p0, Dfun=jac, full_output=1, **kwargs)
...
else:
...
res = least_squares(func, p0, jac=jac, bounds=bounds, method=method,
**kwargs)
...
So, curve_fit is just a wrapper around least_squares. I've just checked them out and I've got the same results from both. | Difference Between Scipy.optimize.least_squares and Scipy.optimize.curve_fit | There is no fundamental difference between curve_fit and least_squares. Moreover, if you don't use method = 'lm'they do exactly the same thing. You can check it in a source code of curve_fit fucntion | Difference Between Scipy.optimize.least_squares and Scipy.optimize.curve_fit
There is no fundamental difference between curve_fit and least_squares. Moreover, if you don't use method = 'lm'they do exactly the same thing. You can check it in a source code of curve_fit fucntion on a Github:
if method == 'lm':
...
res = leastsq(func, p0, Dfun=jac, full_output=1, **kwargs)
...
else:
...
res = least_squares(func, p0, jac=jac, bounds=bounds, method=method,
**kwargs)
...
So, curve_fit is just a wrapper around least_squares. I've just checked them out and I've got the same results from both. | Difference Between Scipy.optimize.least_squares and Scipy.optimize.curve_fit
There is no fundamental difference between curve_fit and least_squares. Moreover, if you don't use method = 'lm'they do exactly the same thing. You can check it in a source code of curve_fit fucntion |
54,748 | Why is Kullback Leibler Divergence always positive? | Intuitive understanding is somewhat subjective, but I can at least offer my perspective:
Kullback-Leibler divergence is a concept from Information Theory. It tells you how much longer --- how many bits --- on average are your messages going to be if you use a suboptimal coding scheme.
For every probability distribution, there is a lower bound on the average message length, and that is the entropy of the distribution. For the distribution $P$ from your Wikipedia example, it is
$$
- \sum_x P(x) \cdot \log_2 P(x) \approx 1.462
$$
That is, if you were to record realisations of random variables from that probability distribution, e.g. in a computer file, or transmit them over a limited-bandwidth channel, you'd need, on average, at least $1.462$ bits per realisation, no matter how sophisticated your coding is. Since in that distribution the case $x = 2$ is three times as probable as $x = 3$, it makes sense to use a shorter code for encoding the event $x=2$ than for encoding $x=3$. You could, for example, use the following encoding:
x: 1 2 3
code: 01 1 001
The average message length with this code is $1.68$ bits, which is (of course!) more than the theoretical lower bound, but still better than an equal-length code, e.g.:
x: 1 2 3
code: 01 10 11
which would need $2$ bits per event. You can construct more complex codes to encode sequences of events, but no matter what you do, you won't be able to beat the information-theoretical lower bound.
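These numbers are easy to verify numerically; here is a short sketch in R (assuming the Wikipedia example's $P = (0.36, 0.48, 0.16)$ and a uniform $Q$, which are my assumed inputs, not stated above):
P <- c(0.36, 0.48, 0.16)
Q <- rep(1/3, 3)
len_var <- c(2, 1, 3)   # lengths of the codes "01", "1", "001"
len_fix <- c(2, 2, 2)   # equal-length code
c(entropy_P = -sum(P * log2(P)),     # about 1.462 bits, the lower bound
  avg_var   = sum(P * len_var),      # about 1.68 bits
  avg_fix   = sum(P * len_fix),      # 2 bits
  KL_P_Q    = sum(P * log2(P / Q)))  # extra bits per event when coding for Q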
Now, for a different distribution, say $Q$, there are other encodings that approximate the best possible coding. The entropy of $Q$ from your example is $\approx 1.583$ bits. As approximations, both above codes are equally good, requiring on average $2$ bits per event, but more complex codes might be better.
However, what is better for encoding $Q$ is not necessarily better for encoding $P$. Kullback-Leibler divergence tells you how many extra bits it costs you to use a coding optimised for transmitting/storing information on $Q$ if your true probability distribution is $P$. This measure cannot be negative. If it were, it would mean that you could beat the optimal coding for $P$ by using the coding optimised for $Q$ instead.
Indeed, the KL-divergence $D_{KL}(P||P) = 0$ (easy to show, because $\log(p(x)/p(x)) = \log(1) = 0$) tells you that encoding the probability distribution $P$ with a code optimised for that distribution incurs zero costs. | Why is Kullback Leibler Divergence always positive? | Intuitive understanding is somewhat subjective, but I can at least offer my perspective:
Kullback-Leibler divergence is a concept from Information Theory. It tells you how much longer --- how many bit | Why is Kullback Leibler Divergence always positive?
Intuitive understanding is somewhat subjective, but I can at least offer my perspective:
Kullback-Leibler divergence is a concept from Information Theory. It tells you how much longer --- how many bits --- on average are your messages going to be if you use a suboptimal coding scheme.
For every probability distribution, there is a lower bound on the average message length, and that is the entropy of the distribution. For the distribution $P$ from your Wikipedia example, it is
$$
- \sum_x P(x) \cdot \log_2 P(x) \approx 1.462
$$
That is, if you were to record realisations of random variables from that probability distribution, e.g. in a computer file, or transmit them over a limited-bandwidth channel, you'd need, on average, at least $1.462$ bits per realisation, no matter how sophisticated your coding is. Since in that distribution the case $x = 2$ is three times as probable as $x = 3$, it makes sense to use a shorter code for encoding the event $x=2$ than for encoding $x=3$. You could, for example, use the following encoding:
x: 1 2 3
code: 01 1 001
The average message length with this code is $1.68$ bits, which is (of course!) more than the theoretical lower bound, but still better than an equal-length code, e.g.:
x: 1 2 3
code: 01 10 11
which would need $2$ bits per event. You can construct more complex codes to encode sequences of events, but no matter what you do, you won't be able to beat the information-theoretical lower bound.
Now, for a different distribution, say $Q$, there are other encodings that approximate the best possible coding. The entropy of $Q$ from your example is $\approx 1.583$ bits. As approximations, both above codes are equally good, requiring on average $2$ bits per event, but more complex codes might be better.
However, what is better for encoding $Q$ is not necessarily better for encoding $P$. Kullback-Leibler divergence tells you how many bits does it costs you to use a coding optimised for transmitting/storing information on $Q$ if your true probability distribution is $P$. This measure cannot be negative. If it were, it would mean that you could beat the optimal coding for $P$ by using the coding optimised for $Q$ instead.
Indeed, the KL-divergence $D_{KL}(P||P) = 0$ (easy to show, because $\log(p(x)/p(x)) = \log(1) = 0$) tells you that encoding the probability distribution $P$ with a code optimised for that distribution incurs zero costs. | Why is Kullback Leibler Divergence always positive?
Intuitive understanding is somewhat subjective, but I can at least offer my perspective:
Kullback-Leibler divergence is a concept from Information Theory. It tells you how much longer --- how many bit |
54,749 | Does paired sample `t test` need pre test? | The general sentiment on Cross Validated is that formal testing of normality is not helpful: either you have too few observations to reject, or you have so many that the tests become sensitive to deviations from normality that are not practically significant because your data are “normal enough”. Graphical examination such as histograms, kernel density estimates, and normal quantile-quantile plots will be your friend.
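A minimal sketch of that graphical check (my own made-up data): for a paired design, look at the within-pair differences rather than at each group separately.
set.seed(123)
before <- rnorm(30, mean = 10, sd = 2)
after  <- before + rnorm(30, mean = 0.5, sd = 1)
d <- after - before
hist(d, main = "Paired differences")
qqnorm(d); qqline(d)                  # visual assessment instead of a formal test
t.test(after, before, paired = TRUE)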
The t-test happens to be rather robust to deviations from normality, too. Also remember that you’d assess the differences between the groups, not the groups themselves.
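A small sketch of this in Python (the before/after measurements are hypothetical; the point is that the normality check and the test both concern the paired differences):
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
before = rng.normal(50, 10, size=30)        # hypothetical paired measurements
after = before + rng.normal(2, 5, size=30)
diff = after - before                       # what the assumptions refer to

stats.probplot(diff, dist="norm", plot=plt) # normal quantile-quantile plot of the differences
plt.show()
print(stats.ttest_rel(after, before))       # the paired t-test itself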
54,750 | An unbiased estimate for population variance | Presumably $Q_s(X) = 1 - P_s(X) = (n-X)/n.$
Writing $q=1-p$, let's work out the expectation of $n^2P_s(X)Q_s(X)$ using the definition of expectation, the formula for Binomial probabilities, and the Binomial Theorem:
$$\eqalign{
E\left[n^2P_s(X)Q_s(X)\right] &= E\left[X(n-X)\right] \\
&= \sum_x \Pr(X=x)\, x(n-x) & \text{(Definition of expectation)} \\
&= \sum_{x=0}^n \binom{n}{x}p^x q^{n-x}\, x(n-x) &\text{(Binomial distribution)} \\
&=\sum_{x=0}^n \binom{n}{x}\, pq \frac{\partial^2}{\partial p\partial q} \left(p^x\,q^{n-x}\right) \\
&= pq \frac{\partial^2}{\partial p\partial q}\sum_{x=0}^n \binom{n}{x}\, p^x\,q^{n-x} & \text{(Linearity of differentiation)}\\
&= pq \frac{\partial^2}{\partial p\partial q}\left(p+q\right)^n &\text{(Binomial Theorem)}\\
&= pq\,n(n-1)(p+q)^{n-2}.
}$$
(When $n=1$ or $n=0$ the result is just $0.$) Plugging in $p+q=1$ gives
$$E\left[n^2P_sQ_s\right] = n(n-1)pq$$
for all $n,$ whence for $n\gt 1,$
$$E\left[\frac{1}{n-1}\,P_s(X)Q_s(X)\right] = \frac{pq}{n}=\operatorname{Var}\left(P_s(X)\right).$$
Therefore $P_s(X)Q_s(X)/(n-1)$ is an unbiased estimator of the variance of $X/n$ (and so obviously $P_s(X)Q_s(X)/n$ is not: it is biased).
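A quick Monte Carlo check of this identity (a sketch with an arbitrary choice of $n$ and $p$):
import numpy as np

rng = np.random.default_rng(0)
n, p = 10, 0.3
X = rng.binomial(n, p, size=200_000)
Ps = X / n
print(np.mean(Ps * (1 - Ps) / (n - 1)))  # averages to about p(1-p)/n
print(p * (1 - p) / n)                   # the true variance of X/n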
54,751 | An unbiased estimate for population variance | Here's an approach using the following variance formula and rule
$Var(\hat{p})=\frac{p(1-p)}{n}=E[\hat{p}^2]-E[\hat{p}]^2$
where $\hat{p}$ is the sample proportion of times an indicator variable is 1 in a simple random sample of size $n$, i.e. the mean of an indicator variable, and $p$ is the corresponding population proportion for that indicator variable.
Suppose we estimate the population variance for that indicator variable, which is $p(1-p)$ (in terms of the population proportion $p$), using the estimator $\hat{p}(1-\hat{p})$ (which uses the sample statistics only). This estimator is biased and multiplying it by $n/(n-1)$ would make it unbiased.
Proof:
$E[\hat{p}(1-\hat{p})]=E[\hat{p}-\hat{p}^2]=p-E[\hat{p}^2]$
Rearranging the variance formula and rule above, we get $E[\hat{p}^2]=p^2 + \frac{p(1-p)}{n}$. Plugging that in we get
$E[\hat{p}(1-\hat{p})]=p - p^2 - \frac{p(1-p)}{n} = \frac{n-1}{n}p(1-p)$
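The same bias factor can be verified exactly by summing over the binomial distribution (a sketch with an arbitrary $n$ and $p$):
import numpy as np
from scipy.stats import binom

n, p = 10, 0.3
k = np.arange(n + 1)
p_hat = k / n
lhs = np.sum(binom.pmf(k, n, p) * p_hat * (1 - p_hat))  # E[p_hat (1 - p_hat)]
print(lhs, (n - 1) / n * p * (1 - p))                   # equal up to floating-point error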
54,752 | Comparing Coefficients of Two Time Series Models | Let us define $\delta_i = \beta_{1,i} - \gamma_{1,i}$, with i indexing the samples.
You would ideally regress $\delta_i$ on a constant: $\delta_i = b\cdot 1 + \eta_i$.
If I make no mistake, your question can indeed be rephrased: What is the value of $b$? Is $b$ significantly different from 0?
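For illustration, a sketch of this intercept-only regression with statsmodels (the arrays beta1 and gamma1 stand in for the estimated coefficients and are made up here):
import numpy as np
import statsmodels.api as sm

beta1 = np.array([0.8, 0.6, 0.7, 0.5, 0.9])     # hypothetical estimates
gamma1 = np.array([0.5, 0.4, 0.6, 0.3, 0.7])
delta = beta1 - gamma1

fit = sm.OLS(delta, np.ones_like(delta)).fit()  # regress delta on a constant
print(fit.params, fit.pvalues)                  # b and its p-value (a one-sample t-test on delta)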
However, as noticed by @F.Tusel, there is some uncertainty regarding $\delta_i$; this will bias upwards the variance associated with $b$, and this could cause your result to be (wrongly) non-significant.
If your result is significantly different from zero, stop here as what is below would only increase significance.
If not: Having uncertainty on the value of the dependent variable is however a classical problem. A clear explanation on how to deal with it in classical cases can be found in [1].
Is it simple in your case? It probably depends. Are your samples i.i.d. ?
If yes: the variance-covariance matrix of the $\delta_i$ is simply the diagonal matrix with $\sigma_{\delta_i}$ on the diagonal. And $\sigma_{\delta_i}$ can itself be inferred from the variance-covariance matrix of $\begin{pmatrix}\beta_{1,i} \\ \gamma_{1,i}\end{pmatrix}$. The latter can be estimated if you jointly estimate $$ \begin{pmatrix}y_{t,i} \\ p_{t,i}\end{pmatrix} = \begin{pmatrix}\beta_{0,i} \\ \gamma_{0,i}\end{pmatrix} + \begin{pmatrix}y_{t-1,i} & 0 \\ 0 & p_{t-1,i}\end{pmatrix}\begin{pmatrix}\beta_{1,i} \\ \gamma_{1,i}\end{pmatrix} + \begin{pmatrix}\epsilon_{t,i} \\ \omega_{t,i}\end{pmatrix}. $$
If not, more thinking is required. Please add more information in the Question regarding your samples.
Reference:
[1] Lewis, Jeffrey B., and Drew A. Linzer. "Estimating regression models in which the dependent variable is based on estimates." Political Analysis 13.4 (2005): 345-364. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.318.7018&rep=rep1&type=pdf
54,753 | Comparing Coefficients of Two Time Series Models | I fully concur with the last paragraph of @AlexC-L's answer, which is in essence a paired comparisons method. I have a feeling, though, that you do not want to look at the raw differences $\delta_i = \beta_i - \gamma_i$. The $\beta_i$ and $\gamma_i$ are presumably estimated by regression and are affected by uncertainty: does $\hat\beta_i=0.8$ with a standard deviation of 0.6 imply higher persistence than $\hat\gamma_i=0.5$ with a standard deviation of 0.1? I would think the second more indicative of persistence than the first, which is not even significantly different from zero.
The choice depends on your problem, but I think you might at least consider taking not the raw estimated coefficients, but rather the coefficients measured in standard deviations, when you compute the differences.
54,754 | Comparing Coefficients of Two Time Series Models | I assume when saying "test wether the independent variable has a significantly larger effect on the dependent variable in the adjusted panel than in the unadjusted panel", you are actually trying to find out which model can better describe the uncertainties and relations among the observations.
So instead of comparing the difference of the coefficients, a better approach is to perform model selection on your models. Since model selection has to be done on the same set of samples, you need to somehow tweak your models so that they apply to the same sample set:
Model 1:
$$
\begin{align}y & = \beta_0 + \beta_1x_1 + ... + \beta_n x_n + \epsilon \\\Rightarrow y &\sim F(y|x_{1:n},\theta_1) \\\end{align}
$$
Where $\theta_1 = \{\beta_{0:n} \text{ and all the other parameters}\}$, you can understand $F(y|x_{1:n},\theta_1)$ as a distribution of $y$ conditioned on $(x_{1:n},\theta_1)$. For example when the model is a simple linear regression $y = \beta_0 + \beta_1x_1 + ... + \beta_n x_n + \epsilon,\epsilon \sim N(0,\sigma^2)$, then $F(y|x_{1:n},\theta_1)$ will be a normal distribution with mean $\beta_0+\beta_1x_1+...+\beta_nx_n$ and variance $\sigma^2$, i.e. $F(y|x_{1:n},\theta_1) = N(y|\beta_0+\beta_1x_1+...+\beta_nx_n, \sigma^2)$
Model 2:
No matter how you "adjust" your samples, there must be a way to represent the adjustment with a function, say $p = h(y)$, $k=g(x)$. For example if the adjustment is discounting future payment $y$ into current value $p$, then the function $h(y)$ will be something like $h(y) = \frac{y}{(1+r)^m}$. With this idea in mind, your second model $p = \gamma_0 + \gamma_1k_1+\dots+\gamma_nk_n+\epsilon_2$ can be rewritten as:
$$
\begin{align}y & = h^{-1}(\gamma_0 + \gamma_1g(x_1)+\dots+\gamma_ng(x_n)+\epsilon_2) \\\Rightarrow y &\sim G(y|x_{1:n},\theta_2) \\\end{align}
$$
Where $\theta_2 = \{\gamma_{0:n} \text{ and all the other parameters involved in h() and g()}\}$.
Now that both the models are put to the same set of samples, you can start comparing them. There are two common ways to perform the comparison:
method1: If $F()$ and $G()$ are Bayesian models, you can compare their marginal likelihood (the higher the better) or BIC (the lower the better).
method2: Use cross validation and compare their cross-validated expected prediction error; a minimal sketch of this is given below.
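A sketch of the cross-validated comparison (the two fit/predict pairs are hypothetical placeholders for Model 1 and Model 2):
import numpy as np
from sklearn.model_selection import KFold

def cv_error(fit, predict, X, y, n_splits=5):
    errors = []
    for train, test in KFold(n_splits=n_splits, shuffle=True, random_state=0).split(X):
        model = fit(X[train], y[train])
        errors.append(np.mean((y[test] - predict(model, X[test])) ** 2))
    return np.mean(errors)

# Compare cv_error(fit_model1, predict_model1, X, y) with cv_error(fit_model2, predict_model2, X, y);
# the model with the lower cross-validated error is preferred.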
54,755 | Comparing Coefficients of Two Time Series Models | Note: This answer does not take into consideration that $\beta_i$ and $\gamma_i$ are themselves estimated (thank @Turell for pointing that out). I make another try in another answer.
You have n $\beta_i$ and n $\gamma_i$ that you want to compare. If n is large enough, you might turn this problem into the comparison between two distributions.
You may use the Kolmogorov–Smirnov test to determine if those two distributions are significantly different from each other. If this is the case, graphical inspection may then be used to determine if beta are generally higher than the gamma.
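For example, with scipy (the arrays here are made-up stand-ins for the two sets of estimated coefficients):
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
beta = rng.normal(0.7, 0.1, size=100)   # hypothetical beta_i estimates
gamma = rng.normal(0.5, 0.1, size=100)  # hypothetical gamma_i estimates
print(ks_2samp(beta, gamma))            # a small p-value suggests the distributions differ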
To go beyond this simple graphical inspection, you may look at the differences between the quantiles of the two distributions, following Doksum and Wilcox.
One issue remains: the above would not compare each $\beta_{i}$ and $\gamma_{i}$ one to one. This can be fixed by defining $\delta = \beta - \gamma$, and by comparing $\delta$ with a distribution of zeros.
54,756 | Law of total probability and conditioning on multiple events | \begin{align}
P(Y=y|X) &=E(1_{Y=y}|X) \\ &\overset{Tower\ property}{=}E\color{green}{\bigg(}E\color{red}{(}1_{Y=y}|X \color{red}{)}|(X,Z)\color{green}{\bigg)}
\\
&\overset{Tower\ property}{=}E\color{green}{\bigg(}E\color{red}{(}1_{Y=y}|(X,Z) \color{red}{)}|X\color{green}{\bigg)}
\\
&= E\color{green}{\bigg(}g(X,Z) |X\color{green}{\bigg)}
\\
&= \sum_{z} g(X,Z=z) P(Z=z|X)
\\
&= \sum_{z} E\color{red}{(}1_{Y=y}|(X,Z=z) \color{red}{)} P(Z=z|X)
\\
&= \sum_{z} P\color{red}{(}Y=y|(X,Z=z) \color{red}{)} P(Z=z|X)
\end{align}
So
\begin{align}
P(Y=y|X)=
\sum_{z} P\color{red}{(}Y=y|(X,Z=z) \color{red}{)} P(Z=z|X)
\end{align} and hence
\begin{align}
P(Y=y|X=x)=
\sum_{z} P\color{red}{(}Y=y|(X=x,Z=z) \color{red}{)} P(Z=z|X=x)
\end{align}
Detail: the tower property of conditional expectation.
For sub-σ-algebras
$$\mathcal H_{1} \subset \mathcal H_{2} \subset \mathcal F$$
we have
$$E(E(Y\mid \mathcal H_{2})\mid \mathcal H_{1})=E(E(Y \mid \mathcal H_{1})\mid \mathcal H_{2})=E(Y\mid \mathcal H_{1})$$.
In this situation $\mathcal H_{1}=\sigma(X) \subset \mathcal H_{2}=\sigma(X,Z) $
so
$$E(E(Y\mid \sigma(X,Z))\mid \sigma(X))=E(E(Y \mid \sigma(X))\mid \sigma(X,Z))=E(Y\mid \sigma(X))$$
54,757 | Law of total probability and conditioning on multiple events | Your purported proof of $(3.4)$, without using independence of $Z$ and $X$, is not correct. It is not valid to form an event that includes a condition, because that condition then escapes the other probability operator in the law of total probability. In fact, the equation is not true in general (i.e., without the independence condition), as can be seen by considering the counterexample with joint mass function:
$$\mathbb{P}(X=x,Y=y,Z=z) = \begin{cases}
\tfrac{1}{2} & & \text{if } x = 0, y = 0, z = 1, \\[6pt]
\tfrac{1}{2} & & \text{if } x = 1, y = 1, z = 0, \\[6pt]
0 & & \text{otherwise}. \\[6pt]
\end{cases}$$
In this case, we have:
$$1 = \mathbb{P}(Y=x|X=x) \neq
\sum_z \mathbb{P}(Y=x|X=x, Z=z) \cdot \mathbb{P}(Z=z) = \tfrac{1}{2}.$$
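The counterexample is easy to verify numerically by enumerating the joint mass function above (a small sketch):
pmf = {(0, 0, 1): 0.5, (1, 1, 0): 0.5}  # {(x, y, z): probability}

def prob(cond):
    return sum(p for xyz, p in pmf.items() if cond(*xyz))

x = 0
lhs = prob(lambda X, Y, Z: X == x and Y == x) / prob(lambda X, Y, Z: X == x)
rhs = 0.0
for z in (0, 1):
    den = prob(lambda X, Y, Z: X == x and Z == z)
    if den > 0:  # the conditional probability is only defined when P(X=x, Z=z) > 0
        rhs += prob(lambda X, Y, Z: X == x and Y == x and Z == z) / den * prob(lambda X, Y, Z: Z == z)
print(lhs, rhs)  # 1.0 versus 0.5, as claimed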
54,758 | Law of total probability and conditioning on multiple events | I think there is a mistake in your proof when you define the event $A$ as $Y=y|X=x$, this definition does not make sense. You cannot include conditionality in an event (what would be a realization of such an event?), you can just talk about probability of an event conditionally to some other event. Conditioning on an event $X=x$ defines new probability measures, but does not define new events.
The proof of equation $(3.3)$ is just the application of the law of total probability, to which you add a conditionality on $X=x$ at every probability (it's the law of total probability applied to the probability measure $ P(.|X=x)$).
Then you need independence to say that the law of $Z$ and the law of $Z$ conditionally on $ X=x$ are the same.
Here is an example where $X$ and $Z$ are not independent. $X$ is the choice (with probabilities $1/2$) of a coin between a fair one and a biased one with two tails, $Y$ is the result of a toss of the chosen coin, and $Z=Y$. Then equation $(3.5)$ does not hold.
$$
P(Y= tail | X= biased) =1
$$ and
$$\begin{aligned}
&\sum_z P(Y=tail|X=biased , Z=z) P(Z=z) \\
&= P(Y= tail| X=biased, Z= tail) P(Z=tail) \\ &+ P(Y = tail|X= biased, Z= head)P(Z= head) \\
&= 1\times P(Y = tail) + 0 \\
&=3/4
\end{aligned}$$
using that $Z=Y$.
I hope this helps.
54,759 | Concept of a z-score for a gamma distribution | The $z$ score expresses how many standard deviations a given observation from a symmetric distribution is away from the mean. Negative $z$ scores indicate an observation below, positive $z$ scores above the mean.
The logic doesn't make much sense for asymmetric distributions, even when standard deviations exist. In a symmetric distribution, observations one SD above and one SD below the mean are "equally typical". For an asymmetric distribution like the gamma, there may not even be a possible observation one SD below the mean (if the shape parameter is less than one, the mean is lower than the SD, so if you subtract one SD from the mean, you end up negative), while observations one SD above the mean make perfect sense.
Instead, it may make more sense to fit a distribution like the gamma and work with percentiles. An observation at the 30th percentile could be said to be "as typical as" an observation at the 70th percentile, since both are 20 percentage points away from the median. The advantage is that this carries the exact same information as the $z$ score in the symmetric case.
Alternatively, you could work with percentiles without fitting a distribution, by using the empirical cumulative distribution function.
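A sketch of both ideas with scipy (the gamma-distributed data and the observation of interest are hypothetical):
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.gamma(shape=2.0, scale=3.0, size=1000)   # hypothetical skewed data
x = 15.0                                            # an observation of interest

a, loc, scale = stats.gamma.fit(data, floc=0)       # fit a gamma distribution
print(stats.gamma.cdf(x, a, loc=loc, scale=scale))  # percentile under the fitted distribution
print(np.mean(data <= x))                           # empirical CDF alternative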
54,760 | Convergence of Poisson Random Variable | Since $X$ is discrete, you can simplify a little:
$$\lim_{n\to\infty}p(X_n=0) = \lim_{n\to\infty}\text{e}^{-{1 \over n}} = \text{e}^{\lim_{n\to\infty}{-{1\over n}}} = \text{e}^0=1$$
where we can go from the second to the third term by the continuity of the exponentiation function.
The second statement follows from the first, as $n\cdot0 = 0$ and $n\cdot X \neq 0$ if $X \neq 0$, so $p(nX_n=0) = p(X_n=0)$, and since they are equal $\forall n$, their limits are equal too.
54,761 | Convergence of Poisson Random Variable | Alternate answer for part-2:
Let $X = \lim_{n\rightarrow\infty}X_n$, then we have
$$
\begin{align*}
P(X=k) & = \lim_{n\rightarrow\infty}\,P(X_n=k) \\
& = \lim_{n\rightarrow\infty}\frac{1}{e^{\frac{1}{n}}n^{k}k!} \\
& = \begin{cases}
& 1;\qquad k = 0 \\
& 0;\qquad \text{otherwise}
\end{cases}
\end{align*}
$$
Now, using this definition of $X$, we get
$$
\begin{align*}
\lim_{n\rightarrow \infty} P(|nX_n - X| > \epsilon) & = \lim_{n\rightarrow \infty} P(|nX_n|>\epsilon) \\
& = \lim_{n\rightarrow\infty} P(X_n > \frac{\epsilon}{n}) \\
& = 1 - \lim_{n\rightarrow\infty} P(X_n \leq \frac{\epsilon}{n}) \\
& = 1 - 1 = 0\qquad\text{since, }P(X\leq 0) = 1
\end{align*}
$$
QED!
54,762 | Why does the proportions_ztest function in statsmodels produce different values than the formula for a 1-proportion Z test? | proportions_ztest seems to work exactly as documented.
Unfortunately what the documentation says it does is just not what you're expecting it to do.
By default this function uses the sample proportion in calculating the standard error of $p-p_0$. There's an option (via a boolean function argument) to change that default, after which the output should match your own calculation.
I feel the writers of the documentation have let you down: The documentation doesn't make this arguably nonstandard default sufficiently obvious to the naive reader - in fact, it's pretty hard to spot (I found it fairly easily only because I knew to expect something like this).
[The choice itself is reasonable, but failing to make it blatantly clear - e.g. with a warning in bold lettering right at the top of the documentation - amounts to user-vicious behaviour. You don't put such a big "gotcha" into your packages without really good reason, and you make sure people can easily tell when you do.]
Edit: This question has led to a ticket relating to the documentation (see comments below). I applaud the fast action taken -- though the documentation itself may not rapidly change, something productive happened. I am not used to my complaints about documentation (I've made a ton of them here, about various programs) being acted on at all.
54,763 | Why does the proportions_ztest function in statsmodels produce different values than the formula for a 1-proportion Z test? | Even though @Glen_b's answer is completely right, it doesn't mention the parameter being referred to.
For others who may fall into this same problem, the right way of using proportions_ztest the way the OP tried to use it is as follows:
from statsmodels.stats.proportion import proportions_ztest
proportions_ztest(10, 50, 0.5, prop_var=0.5)
so that the actual calculation is (with $p_0$ as the null hypothesis):
$$
\begin{aligned}
p_1 &= \frac{\text{successes}}{n} \\
Z &= \frac{p_1-p_0}{\sqrt{\frac{p_0(1-p_0)}{n}}}
\end{aligned}
$$
On the other hand:
if not setting prop_var with the same value as $p_0$, then statsmodels calculates $Z$ in a different way:
$Z = \cfrac{p_1-p_0}{\sqrt{\frac{p_1(1-p_1)}{n}}}$
Why that? Because when prop_var is not set, the standard error is estimated using the samples provided, which are always the first two parameters of proportions_ztest, be they two values or two lists of values (as used when calculating the test for two proportions).
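Putting the two variants side by side (a sketch using the same numbers as above):
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

count, nobs, p0 = 10, 50, 0.5
p1 = count / nobs

z_null = (p1 - p0) / np.sqrt(p0 * (1 - p0) / nobs)  # standard error from the null value p0
z_samp = (p1 - p0) / np.sqrt(p1 * (1 - p1) / nobs)  # standard error from the sample proportion

print(proportions_ztest(count, nobs, p0, prop_var=0.5)[0], z_null)  # these match
print(proportions_ztest(count, nobs, p0)[0], z_samp)                # and so do these (the default)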
54,764 | time series model with additional, time-independent regressors? | For time dependent regressors, it is pretty straightforward. Many classes of time series models can handle them, including from the ARIMA family (ex: ARIMAX and regression with ARIMA errors), BSTS, Facebook Prophet, and others.
The tricky part is time independent regressors: Most people don't realize that time independent regressors are of no use whatsoever unless you are modeling multiple time series at the same time.
Consider your housing example: If you are trying to model the price of a single house over time, which has a fixed size, then there is no way such a model can capture the effect of the size of the house on the price, since it hasn't "seen" the effect of any other sizes on the price in order to determine what the right coefficients/weights should be.
This means you will have to use an approach that allows you to model multiple time series together, while including the effect of the time-independent variables. Here are a couple of ways you might be able to pull that off (there might be more, but I am not familiar with them):
If your time independent variables are categorical or discrete valued, you can use a hierarchical/grouped time series forecasting approach, where you group your time series along the different possible values of your time independent variable. The advantage of this approach is that you can use most of the text book time series methods (ARIMA, Exponential Smoothing, etc...) in concert with a hierarchical forecasting scheme. The downside of this approach is that it doesn't allow for continuous time-independent variables, and you have to make some assumptions before hand on the importance and the effect of your time independent variables (e.g. should I aggregate my time series based on zip code first, then based on number of rooms, or the other way around?).
If your time independent variables are continuous, you can use a general machine learning based approach like XGBoost, or Deep Learning models. The advantage of this approach is that it can handle any type of additional variables because this type of ML model is usually very flexible. The downside of this approach is that ML models are much harder to implement (coding, hyperparameter optimization, etc...) than regular time series models, and it is usually difficult to interpret their output, since they tend to be highly non-linear and "black box" in nature.
In response to the comment about how to include size in a hierarchical model:
Simply transforming the sizes to a discrete value instead of continuous one won't help much, because you would still end up with a very large range of nodes in your hierarchy, and each one will only have a very small number of time series in it, thus defeating the purpose of hierarchical forecasting in the first place.
Instead I suggest one of the two following ways of dealing with size variable:
Plot the distribution and histograms of your sizes, and see if they have any distinct modes or clusters. Hopefully there would be only a small number of them. You can then assign each size to a bin that corresponds to the cluster it is in, and use that as your aggregation criteria.
Example: Your sizes are $[1899 , 2023, 2200, 2300, 3500, 3570, 3995, 4012]$, you can see that there are two clusters. Assign $[1899 , 2023, 2200, 2300]$ to $group 1$, and assign $[3500, 3570, 3995, 4012]$ to $group 2$. Then use those two groups to aggregate your data.
Note that this will only work if there are definite clusters in the data. It will not work if you had a size distribution that was more uniform like $[1899 , 2023, 2850, 3010, 3500, 3995, 4012, 4300]$.
A second approach (this is not really stats, just an example of using domain knowledge) is to just ignore the size, because the size will correlate strongly with other, more manageable, discrete variables like number of rooms, and number of stories in the house. You can just use those as proxies for the size of the house.
54,765 | Why high correlation coefficient doesn't guarantee high VIF? | What do you consider to be a high correlation coefficient? What do you consider to be a low VIF?
The VIF is calculated by regressing predictor $i$ on all the other predictors, and then calculating $VIF = \frac{1}{1 - R_i^2}$. If you consider a VIF of 5 to be high, you'd only get a high VIF if $R_i^2$ was greater than or equal to 0.8.
Now imagine that you have two predictors $i$, and $j$ that have a correlation coefficient of 0.8, which is fairly high, but no other predictors are correlated with predictor $i$ or $j$. When you then regress $i$ on all the other variables, $R_i^2$ will be slightly greater than the square of the correlation coefficient between $i$ and $j$, which is $0.8^2 = 0.64$ (smaller than the 0.8 needed to achieve a VIF of 5). In other words, the VIF's for $i$ and $j$ will be small, even though the correlation between them is high.
Basically the VIF for predictor $i$ captures how well all other predictors can explain predictor $i$. But to get a VIF that's considered high (greater than or equal to 5), there has to be a very strong fit when regressing predictor $i$ on the other variables. As demonstrated, it's definitely possible to have a "high" correlation between two variables, but still have "low" VIF's.
As far as your quote goes, the opposite can happen as well. You could have low pairwise correlations, but have high VIF's. This is because it's possible that there's a strong relationship between predictor $i$ and all the other variables together, even though there's not a high correlation between predictor $i$ and any other single predictor.
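A small simulation illustrating this (a sketch: two predictors with a correlation of about 0.8 plus an unrelated third one):
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 10_000
xi = rng.normal(size=n)
xj = 0.8 * xi + np.sqrt(1 - 0.8**2) * rng.normal(size=n)  # corr(xi, xj) is about 0.8
xk = rng.normal(size=n)                                   # unrelated to the others

r2 = sm.OLS(xi, sm.add_constant(np.column_stack([xj, xk]))).fit().rsquared
print(np.corrcoef(xi, xj)[0, 1], 1 / (1 - r2))            # correlation about 0.8, VIF only about 2.8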
54,766 | What does "weighted logistic regression" mean? | Let's begin with a weighted average, which slightly modifies the formula for an ordinary average:
$$\bar{x}^w=\frac{\sum_i w_i x_i}{\sum w_i}$$
An unweighted average would correspond to using $w_i=1$ (though any other constant would do as well).
Why would we do that?
Imagine, for example, that each value occurred multiple times ("We have 15 ones, 23 twos, 19 threes, 8 fours and 1 six"); then we could use weights to reflect the multiplicity of each value ($w_1=15$, etc). Then the weighted average is a faster way to calculate the average you'd get if you wrote "1" fifteen times and "2" twenty three times (etc) and calculated the ordinary average.
For another possible example, imagine that each observation was itself an average. Each average is not equally informative -- the ones based on larger samples should carry more weight (other things being equal).
(In that case if we set each observation's weight to the underlying sample size, we get the overall average of all the data that would comprise the component averages.)
There are many other reasons one might weight observations differently, though (e.g. if the observation values are not all equally precise).
In somewhat similar fashion, we can modify the estimator in ordinary regression to incorporate weights to the observations. It will reproduce a weighted average when the regression is intercept only.
The usual multiple regression estimator is $\hat{\beta}=(X^\top X)^{-1}X^\top y$. The weighted regression estimator is $\hat{\beta}=(X^\top W X)^{-1}X^\top W y$, where $W$ is a diagonal matrix, with weights on the diagonal, $W_{ii} = w_i$.
Weighted logistic regression works similarly, but without a closed form solution as you get with weighted linear regression.
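A sketch showing that an intercept-only weighted least squares fit reproduces the weighted average, using the multiplicities from the example above as weights:
import numpy as np
import statsmodels.api as sm

x = np.array([1, 2, 3, 4, 6], dtype=float)
w = np.array([15, 23, 19, 8, 1], dtype=float)   # multiplicities used as weights

print(np.sum(w * x) / np.sum(w))                            # the weighted average
print(sm.WLS(x, np.ones_like(x), weights=w).fit().params)   # intercept-only WLS gives the same value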
54,767 | What does "weighted logistic regression" mean? | Weighted logistic regression is used when you have an imbalanced dataset. Let's understand with an example.
Let's assume you have a dataset of patient details and you need to predict whether a patient has cancer or not. Such datasets are generally imbalanced: say you have 10,000 patients who have cancer and 1,000,000 who don't. Your approach might be:
Sample all 10,000 data points that have cancer (100% of the cancer cases).
Sample 100,000 data points that don't have cancer (10% of the non-cancer cases).
Now you give weight = 10 to the data points that don't have cancer, so that the effect of the 100,000 sampled points is the same as that of the original 1,000,000 points. This is one of the techniques you can use when you have an imbalanced dataset.
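A hedged sketch of that idea in R with simulated data (the variable names, sample sizes and coefficients below are made up purely for illustration):
set.seed(1)
n    <- 100000
x    <- rnorm(n)
case <- rbinom(n, 1, plogis(-5 + x))             # rare outcome that depends on x
full <- data.frame(x, case)
keep <- c(which(full$case == 1),                 # keep every case
          sample(which(full$case == 0), round(0.1 * sum(full$case == 0))))
samp <- full[keep, ]
samp$w <- ifelse(samp$case == 1, 1, 10)          # upweight the 10% sample of controls by 10
coef(glm(case ~ x, family = binomial, data = full))               # full-data fit
coef(glm(case ~ x, family = binomial, data = samp, weights = w))  # weighted subsample gives similar coefficients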
54,768 | Decision tree that fits a regression at leaf nodes | Although regression trees with constant fits in the terminal nodes are still much more widely used in practice, there is a long history of literature on regression trees that fit regression models (or other kinds of statistical models) in the nodes of the tree. RECPAM by Ciampi et al. (1988) is pioneering work in the statistical literature, already focusing on survival models in the nodes. M5 by Quinlan (1992) is the first algorithm published in Machine Learning. Other algorithms include GUIDE (Loh 2002), FT (Gama 2004), RD and RA trees (Potts & Sammut 2005), and MOB (Zeileis et al. 2008). An overview of several of these, focusing on MOB in the illustrations, is given by Rusch & Zeileis (2013). The algorithms differ somewhat in the degree to which you have to explicitly specify the regression model considered for the nodes (e.g., as in MOB) vs. using all variables in both the regression and the partitioning (e.g., in M5).
R packages
lmtree() in the partykit package (based on MOB); a minimal usage sketch follows after this list
M5P() in the RWeka package (approximating M5)
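As a quick illustration of the first package, here is a hedged sketch using R's built-in cars data (chosen only for convenience, not because a split is necessarily warranted there):
library("partykit")
mt <- lmtree(dist ~ speed | speed, data = cars)  # linear model of dist on speed in each leaf
plot(mt)                                         # leaves show fitted regression lines, not constants
predict(mt, newdata = data.frame(speed = c(5, 15, 25)))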
Previous answers (by @Momo)
Advantage of GLMs in terminal nodes of a regression tree?
Regression tree algorithm with linear regression models in each leaf
References
Ciampi A, Hogg SA, McKinney S, Thiffault J (1988). "RECPAM: A Computer Program for Recursive Partition and Amalgamation for Censored Survival Data and Other Situations Frequently Occurring in Biostatistics. I. Methods and Program Features." Computer Methods and Programs in Biomedicine, 26(3), 239-256.
Quinlan R (1992). "Learning with Continuous Classes." In Proceedings of the Australian Joint Conference on Artificial Intelligence, pp. 343–348. World Scientific, Singapore.
Loh WY (2002). "Regression Trees with Unbiased Variable Selection and Interaction Detection." Statistica Sinica, 12, 361–386. http://www.jstor.org/stable/24306967
Gama J (2004). "Functional Trees." Machine Learning, 55(3), 219–250. doi:10.1023/B:MACH.0000027782.67192.13
Potts D, Sammut C (2005). "Incremental Learning of Linear Model Trees." Machine Learning, 61(1-3), 5–48. doi:10.1007/s10994-005-1121-8
Zeileis A, Hothorn T, Hornik K (2008). “Model-Based Recursive Partitioning.” Journal of Computational and Graphical Statistics, 17(2), 492–514. doi:10.1198/106186008x319331
Rusch T, Zeileis A (2013). "Gaining Insight with Recursive Partitioning of Generalized Linear Models." Journal of Statistical Computation and Simulation, 83(7), 1301–1315. doi:10.1080/00949655.2012.658804
54,769 | How to evaluate whether model is overfitting or underfitting when using cross_val_score and GridSearchCV? | You need to check the difference between training and test accuracy for each fold. If your model gives you high training accuracy but low test accuracy, it is overfitting. If it does not even achieve good training accuracy, it is underfitting.
GridSearchCV tries to find the best hyperparameters for your model. To do this, the data end up split into three parts: the model is fit on a training portion, candidate hyperparameters are compared on a validation portion (the held-out folds), and finally a separate test set is used to measure the accuracy of the chosen model.
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
import numpy as np
kf = KFold(n_splits=5,random_state=42,shuffle=True)
# these are your training data points:
# features and targets
X = ....
y = ....
accuracies = []
for train_index, test_index in kf.split(X):
data_train = X[train_index]
target_train = y[train_index]
data_test = X[test_index]
target_test = y[test_index]
# if needed, do preprocessing here
clf = LogisticRegression()
clf.fit(data_train,target_train)
test_preds = clf.predict(data_test)
test_accuracy = accuracy_score(target_test,test_preds)
train_preds = clf.predict(data_train)
train_accuracy = accuracy_score(target_train, train_preds)
print(train_accuracy, test_accuracy, (train_accuracy - test_accuracy) )
accuracies.append(test_accuracy)
# this is the average accuracy over all folds
average_accuracy = np.mean(accuracies)
54,770 | How to evaluate whether model is overfitting or underfitting when using cross_val_score and GridSearchCV? | I will try to answer each of the questions you asked.
....... However, averaging scores you get from cross validation returns just a single score. Should this be interpreted as the train or the test score from the previous case? or neither?
I am not sure which library or package you are using for cross-validation. I am assuming that you are using the cross_val_score method (as it is widely used in tutorials). This method splits the training set into k folds; the model is trained on k-1 folds and the remaining fold is used as a test set to compute performance. Thus, as long as you are not doing cross-validation for model selection, the average accuracy you get from cross-validation is an estimate of test accuracy, and you can call it a test score, not a training score.
..... score from the previous case? or neither? How can we tell if the model is overfit or underfit?
I think your question about overfitting concerns cross-validation. You can tell whether your model is overfitting by comparing test and training accuracy. However, for each of the k folds, cross_val_score gives you only the test accuracy, not the training accuracy. Hence, you should use sklearn's cross_validate, which returns a dict containing the test scores and other quantities. If you want the training scores as well, set the return_train_score parameter to True.
A code snippet follows:
from sklearn.model_selection import cross_validate

# rdn_forest_clf, train_features and train_lables are your estimator and training data
scores = cross_validate(rdn_forest_clf, train_features, train_lables, cv=5, scoring='f1', return_train_score=True)
print(scores.keys()) # Will print dict_keys(['fit_time', 'score_time', 'test_score', 'train_score'])
print(scores["train_score"]) # Will print your training score
print(scores["test_score"]) # Will print test score
.........confirm that your performance metric remains approximately the same. Is this necessary since we can just assume the model is not over/under-fit since we allow GridSearchCV to choose the best hyper-parameters?
Cross-validation over the grid of hyperparameters does not by itself guarantee that the chosen model is not overfitting. Thus, to check whether the model you find with GridSearchCV is overfitted or not, you can use the cv_results_ attribute of GridSearchCV. cv_results_ is a dictionary which contains details (e.g. mean_test_score, mean_score_time, etc.) for each combination of the parameters given in the parameter grid. To get training-score-related values (e.g. mean_train_score, std_train_score, etc.), you have to pass return_train_score=True, which is False by default.
Here is a code snippet to get the mean training and test accuracy for each combination of the parameters.
from sklearn.model_selection import GridSearchCV

# clf is your estimator, e.g. a random forest classifier
param_grid = {'n_estimators': [4, 5, 10, 15, 20, 30, 40, 60, 190, 500, 800], 'max_depth': [3, 4, 5, 6]}
grid_search_m = GridSearchCV(clf, param_grid, cv=5, scoring='f1', return_train_score=True)
grid_search_m.fit(train_features, train_lables)
print(grid_search_m.cv_results_.keys())
print(grid_search_m.cv_results_["mean_train_score"].shape) # n_estimators: 11 values, max_depth: 4 values. Thus shape, 11*4=44
print(grid_search_m.cv_results_["mean_test_score"].shape)
print(grid_search_m.cv_results_["mean_train_score"])
print(grid_search_m.cv_results_["mean_test_score"])
Then, by comparing training and test accuracy, you can check whether your model is overfitting. You can also look at other SE questions for strategies for this comparison.
Furthermore, I have read something confusing ...... model is trained on the training set, and evaluated on the validation set in order to choose the best hyper-parameters, and then taking the best hyper-parameters is trained on train+val, and evaluated on test.
This can also be done while working on an ML model, that is, using a single validation set instead of cross-validation. However, to avoid "wasting" too much training data on validation sets, a common technique is to use cross-validation.
54,771 | Average number of random permutations of a sequence, before seeing a sorted sequence | Let us start by assuming $N$ unique entries in our vector.
The action of randomly shuffling a vector and checking whether it is sorted in a particular order afterwards is equivalent to picking a permutation at random and checking whether it is one very specific one, namely the one that orders the vector in the order we want.
The permutation group of a set with $N$ elements has $N!$ elements. By assumption, each permutation is equally probable.
Your experiment thus is an iterated sampling from a Bernoulli distribution with a success probability of $p=\frac{1}{N!}$. The number of trials until the first success is geometrically distributed, and the expectation you are looking for is the expected number of draws until the first success, which is the expectation of our geometric distribution, namely $\frac{1}{p}=N!$.
We can extend the treatment for duplicates. Suppose the vector has $n$ unique entries, each one appearing $k_1, k_2, \dots, k_n$ times, so $k_1+k_2+\dots+k_n=N$. Then two permutations of the vector will yield the same vector if they only differ by reorderings within each separate entry. And there are $k_1!k_2!\cdots k_n!$ permutations within each entry. So the overall permutation group of our vector with multiplicities has $\frac{N!}{k_1!k_2!\cdots k_n!}$ permutations that actually result in different vectors. The rest of the analysis runs as above, so the result is that we expect to have to draw $\frac{N!}{k_1!k_2!\cdots k_n!}$ permutations before seeing an ordered vector.
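A quick Monte Carlo sanity check in R for the unique-entries case with $N=4$, where the expectation should be $4!=24$ (the particular vector used is arbitrary):
set.seed(1)
draws_until_sorted <- function(x) {
  n <- 0
  repeat {
    n <- n + 1
    if (!is.unsorted(sample(x))) return(n)  # stop at the first shuffle that comes out sorted
  }
}
mean(replicate(10000, draws_until_sorted(c(3, 1, 4, 2))))  # close to 24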
54,772 | What is the difference between "controlling for a variable" and interaction? | It makes more sense to say that someone becomes heavier if one is taller and/or consumes more soda than that someone becomes taller if (s)he is heavier and consumes more soda. So I assume you mean that the dependent/explained/left-hand-side/y-variable is weight and the independent/explanatory/right-hand-side/x-variables are height and soda consumption. For this example assume that tall people tend to drink more sodas.
So the model while only controlling for sodas is:
$\widehat{weight}= b_0 + b_1 height + b_2 soda$
While the model with the interaction effect is:
$\widehat{weight}= b_0 + b_1 height + b_2 soda + b_3 height \times soda$
If you control for soda use you are comparing people of different height but with the same soda use, that is, you keep the control variables constant. If we had not controlled for soda, then part of the effect of height would actually be the result of tall people drinking more sodas, and those who drink more sodas tend to be heavier. Controlling for soda means that we filter this part out by keeping the soda consumption constant. However, there is only one effect of height on weight: regardless of your soda consumption, you will on average gain $b_1$ grams for every centimeter you get taller.
If you add an interaction effect, you say that the effect of height differs depending on your soda consumption. If we treat both variables linearly, the effect of height on weight is $b_1+b_3\times soda$. So if one does not drink soda at all, i.e. $soda$ is 0, then you will gain on average $b_1$ grams for every centimeter you get taller. However, if you drink 10 sodas a day, then you will gain $b_1 + 10\times b_3$ grams for every centimeter you get taller. These different effects of height on weight are also controlled for soda: in the first case the soda consumption is kept constant at 0, while in the second case it is kept constant at 10.
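A small R sketch of fitting both specifications on simulated data (every number below is made up purely for illustration):
set.seed(1)
n      <- 500
height <- rnorm(n, 170, 10)
soda   <- rpois(n, lambda = pmax(0, (height - 150) / 10))  # taller people drink more soda
weight <- 10 + 0.3 * height + 0.5 * soda + 0.02 * height * soda + rnorm(n, 0, 3)
m_control  <- lm(weight ~ height + soda)   # controlling for soda: a single height effect
m_interact <- lm(weight ~ height * soda)   # interaction: the height effect depends on soda
coef(m_control)
coef(m_interact)                           # effect of height is b1 + b3 * soda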
54,773 | Is it possible of overfit using Propensity score matching with the MatchIt R package? | First, I would caution anyone without a background in applied statistics from performing advanced analyses like propensity score matching. The ease of the software makes it seem like the procedure itself is easy, when in fact there are many considerations required to make a valid inference. I'm also pretty skeptical of guides for practice published in non-statistical journals by non-statisticians, as these tend to focus on the use of software rather than consideration of the nuances of the analysis. I'm sure you could find a biostatistician who would be willing to help you implement best practices.
That said, you bring up two major issues in propensity score analysis: covariate selection and assessment of the propensity score model. These are both huge topics about which there is ongoing research. I'll give you some pointers and some literature that can help you make your decisions.
Regarding covariate selection: if you want an unbiased estimate of the treatment effect, you need to eliminate confounding without introducing bias. To eliminate confounding, you need to control for a sufficient set of variables that blocks all the backdoor paths from treatment to the outcome. A backdoor path is a causal pathway that involves a common cause of treatment assignment and variation in the outcome. Substantive research can be illuminating, but it's better to be as conservative as possible by including as many relevant variables as possible without inducing bias. The types of variables you should include are those that affect the outcome and are not affected by the treatment. You should also include variables that are known to affect both selection into treatment and variation in the outcome. Do not include variables that could possibly be affected by the treatment or that affect selection into treatment but are otherwise unrelated to the outcome. See Brookhart et al. (2006) for specific discussion of variable selection for propensity score models and Elwert (2010) for discussion of a more complete theory for adjusting for confounding. Critically, failing to include relevant confounding variables can bias your effect, and medical research may not have found all of them, so I would warn you against excluding covariates just because there isn't a medical paper documenting the confounding relationship. Prior research can be used as positive evidence to justify the necessity of a variable's inclusion, but there is likely no negative evidence that can justify a variable's exclusion except for a demonstration of its lack of effect on the outcome.
Regarding assessment of the propensity score model: the goal of matching is to create balance between the treated and untreated on the relevant covariates. You can do this either by trying various propensity score models and matching algorithms that yield balance or by attempting to specify a well-justified propensity score model that comes close to modeling the true propensity score. See some discussion about the distinction here. This matter is well-described in Ho, Imai, King, & Stuart (2007). The goal of the propensity score is to create balance, not achieve good fit. It's often better to model the propensity score in a way that would normally be considered overfit, if you were to try to interpret the propensity score model used, than to ensure a parsimonious and theoretically justified model. There is a large literature on assessing balance, most of which is summarized in the documentation for the R package cobalt and my answer here.
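For what a matching-plus-balance-check workflow looks like mechanically, here is a minimal, hedged R illustration using the lalonde data shipped with MatchIt (the covariates chosen are arbitrary, not a recommendation for any real analysis):
library(MatchIt)
data("lalonde", package = "MatchIt")
m.out <- matchit(treat ~ age + educ + re74 + re75, data = lalonde, method = "nearest")
summary(m.out)  # compare covariate balance before and after matching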
There are so many ways of estimating propensity scores and using them that I really don't think someone with no expertise on this matter should be performing this analysis without the guidance of a trained statistician. Matching may not be (and likely isn't) the best method for you to use anyway. If you want to do as little thinking and decision-making as possible, I recommend you look into targeted maximum likelihood estimation of causal effects. The package survtmle implements this method. Whatever you do, do not choose a method just because it's what you know or it's what other researchers use, and do not attempt to perform a complex analysis without the help of a statistician.
54,774 | Should I remove duplicates from my dataset for my machine learning problem? | You should probably remove them. Duplicates are an extreme case of nonrandom sampling, and they bias your fitted model. Including them will essentially lead to the model overfitting this subset of points.
I say probably because you should (1) be sure they are not real data that coincidentally have identical values, and (2) try to figure out why you have duplicates in your data. For example, sometimes people intentionally 'oversample' rare categories in training data, though why this is done is not clear, as it probably helps only in rare cases.
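If you do decide to drop exact duplicates, in R that is a one-liner (the toy data frame below is just for illustration):
df <- data.frame(id = c(1, 1, 2, 3), x = c(0.5, 0.5, 1.2, 7.1))  # one exact duplicate row
sum(duplicated(df))                  # counts the duplicate rows (1 here)
df_unique <- df[!duplicated(df), ]   # keeps the first occurrence of each row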
As a side note, it's worth reading this thread: Why is accuracy not the best measure for assessing classification models?
54,775 | Selecting ARIMA orders based on ACF-PACF vs. auto.arima | First, it is very hard to use (P)ACF plots to identify ARIMA(p,d,q) models if both p and q are nonzero. See Hyndman & Athanasopoulos:
If p and q are both positive, then the plots do not help in finding suitable values of p and q.
Second, your peaks at lag 4 only barely exceed the confidence bands.
I would always prefer auto.arima() over parsing (P)ACF plots myself, i.e., the Box-Jenkins approach. It is built by experts with a lot of experience, and it truly is a gold standard for ARIMA modeling, unless you are an expert yourself and you are working academically on the frontiers of knowledge.
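For reference, a minimal R sketch of what this looks like in practice, using the forecast package and a built-in series rather than the OP's data:
library(forecast)
fit <- auto.arima(WWWusage)  # searches over (p,d,q) using unit-root tests and AICc
summary(fit)
plot(forecast(fit, h = 10))  # point forecasts with prediction intervals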
In the present case, auto.arima() would prefer a simple mean model. I would recommend that you run with this.
54,776 | Why are emmeans package means different than regular means? | The fundamental difference between estimated marginal means (EMMs) and ordinary marginal means of data (OMMs) is that OMMs summarize the data, while EMMs summarize a model. Thus, if you fit a different model to the data, the EMMs are potentially different. EMMs are not just one thing.
To be a bit more precise, EMMs involve three entities:
A model for the data
A grid consisting of all combinations of reference values for the predictors. Typically, the reference values are, in the case of factors, the levels of those factors; and in the case of numeric predictors, the means of those predictors.
A weighting scheme (usually equal weights)
Given these, EMMs are obtained by first using the given model to obtain predictions at each combination of reference values; and then obtaining marginal averages of those predictions according to the weighting scheme.
In the case where equal weights are used, the model is fitted using lm() (or equivalent), all the predictors are factors, the design is balanced, and the model contains all interactions among these factors, then the predicted values are the cell means of the data, and the EMMs are the same as the OMMs. However, any deviations from these issues -- e.g., unequal weights, not using least-squares, not having balanced data, having some numerical predictors, not having all interactions in the model -- may lead to the EMMs being different from the OMMs.
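To see the distinction concretely, here is a small R sketch with the built-in warpbreaks data, made unbalanced by dropping a few rows (which rows are dropped is arbitrary): the EMMs average the cell predictions with equal weights, while the ordinary marginal means follow the unequal cell counts.
library(emmeans)
d   <- warpbreaks[-(1:7), ]                      # drop rows to unbalance the design
fit <- lm(breaks ~ wool * tension, data = d)
emmeans(fit, ~ wool)                             # EMMs: equal-weight average of cell predictions
aggregate(breaks ~ wool, data = d, FUN = mean)   # OMMs: ordinary marginal means of the data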
Some further notes specific to other answers or comments in this thread:
Regarding empty cells: usually a model with all interactions will be unable to estimate all the grid values, causing some or all of the EMMs to be non-estimable (but see an exception below). Fitting a different model where one or more of the interactions are excluded may lead to the grid values being estimable, and hence the EMMs being estimable.
The question of whether observations are missing at random, not at random, completely at random, etc. is a modeling issue (or, per some comments, a question of whether you trust the model you used). If the model is [in]appropriate or [un]trustworthy, the resulting EMMs will also be [in]appropriate or [un]trustworthy. Some missingness assumptions allow for multiple imputation techniques, and those may (or may not) allow the grid means to be estimable, and will impact the EMMs accordingly.
Alternative weighting schemes (such as weighting proportionally to marginal frequencies) obviously affect the EMMs as well. A weighting scheme that gives zero weight to any grid combination that is non-estimable will provide estimable EMMs where otherwise they would be non-estimable. In particular, in an (all-factors, all-interactions, least-squares) situation, weighting according to cell frequencies will yield EMMs equal to OMMs.
54,777 | Why are emmeans package means different than regular means? | You are indeed right that this difference can be explained by the missing data you have. In particular, when the missing data are of the missing-at-random type, the observed data are not a representative sample of your target population. In this case, the simple sample means will be biased and should not be trusted.
The mixed model, on the contrary, will give you correct estimates and inferences in a missing at random setting, provided that your model is correctly/flexibly specified.
Hence, you would do better to trust what is reported by emmeans based on your fitted mixed model.
54,778 | Simulating random walk with "known" prediction | I started by shortening your series to 20 realizations, so we could actually see something.
set.seed(420)
x=rnorm(20)
y=rep(NA,length(x))
y[1]=x[1]
for (i in 2:length(x)) y[i]=y[i-1]+x[i]*0.7
Then I simulated five trajectories (each of length six). First, I draw 5 normal random variables with mean y[15]. Then I draw another 5 normal random variables with mean y[16]. And so forth. Finally, I connect the first set, the second set, up to the fifth set.
pred_index <- 15:20
n_sims <- 5
sd <- 0.2
sims <- sapply(y[pred_index],FUN=function(yy)rnorm(n_sims,mean=yy,sd=sd))
This gives us five trajectories.
plot(y,type="l",lwd=2,ylim=range(c(y,sims)))
for ( jj in 1:n_sims ) lines(pred_index,sims[jj,],col="green")
lines(pred_index,y[pred_index],col="red",lwd=2)
Here are the simulations, with the last "actual" observation in the first column:
> (foo <- cbind(y[pred_index[1]-1],sims))
[,1] [,2] [,3] [,4] [,5] [,6] [,7]
[1,] -1.774497 -2.058771 -2.171860 -1.740961 -1.394201 -0.9514782 -1.6126567
[2,] -1.774497 -2.088186 -2.445334 -1.939621 -1.490676 -1.1473929 -1.5795282
[3,] -1.774497 -1.896928 -1.802883 -2.045344 -1.271107 -0.9969500 -0.9606348
[4,] -1.774497 -2.139482 -2.332128 -1.934507 -1.615830 -1.0684256 -1.1898377
[5,] -1.774497 -2.027070 -2.412320 -1.589246 -1.945217 -1.4398321 -1.1173766
Since this is supposed to be a random walk, we can look at the step-by-step increments within each simulation, by simply taking successive differences between the columns of this matrix:
> sapply(1:ncol(sims),FUN=function(jj)foo[,jj+1]-foo[,jj])
[,1] [,2] [,3] [,4] [,5] [,6]
[1,] -0.2842738 -0.11308883 0.4308988 0.3467603 0.4427225 -0.66117845
[2,] -0.3136891 -0.35714748 0.5057129 0.4489450 0.3432831 -0.43213534
[3,] -0.1224306 0.09404472 -0.2424613 0.7742371 0.2741574 0.03631521
[4,] -0.3649842 -0.19264672 0.3976211 0.3186768 0.5474049 -0.12141211
[5,] -0.2525731 -0.38524925 0.8230736 -0.3559711 0.5053851 0.32245545
54,779 | Simulating random walk with "known" prediction | The specification of the sort of data that you want to generate is very broad, and there are many ways to simulate random-walk-like processes whose paths tend to 'return to the mean'.
For instance, you can change your code like:
set.seed(420)
a = 0.9
b = 0.7
n = 10^4
x=rnorm(n)
y=rep(NA,n)
y[1]=x[1]
for (i in 2:n) {
y[i]=a*y[i-1]+b*x[i]
}
This is a damped random walk: an AR(1) process with coefficient $a = 0.9 < 1$, so the paths keep drifting back toward zero rather than wandering off.
related question (and possibly duplicate): Creating auto-correlated random values in R
more similar types of random walks: https://en.wikipedia.org/wiki/Autoregressive-moving-average_model
54,780 | Simulating random walk with "known" prediction | The random walk process that you're using has a constant (or zero) drift and a constant variance:
$$dW_t=\xi_t,\\\xi_t\sim \mathcal N(0,\sigma^2)$$
You want "arbitrary variablility", i.e. $\sigma^2$ is not only changing with time, but also in some arbitrary way, whatever you meant by this word. If you meant that it's stochastic, unpredictable, then maybe you need to look at stochastic variance processes where $\sigma^2_t$ is a random process itself. One such model is Heston model, which is popular in derivative pricing in finance.
There are simpler models such as GARCH.
In GARCH the variance is not stochastic, in the sense that the variance of the next step is completely determined by the information available to date. However, since you constantly get new information, the future variance gets updated at every step. So the variance is also arbitrary, albeit in a narrower sense compared to a stochastic volatility process such as Heston. | Simulating random walk with "known" prediction | A random walk process that you're using has a constant (or zero) drift and variance:
$$dW_t=\xi_t,\\\xi_t\sim \mathcal N(0,\sigma^2)$$
You want "arbitrary variablility", i.e. $\sigma^2$ is not only c | Simulating random walk with "known" prediction
A random walk process that you're using has a constant (or zero) drift and variance:
$$dW_t=\xi_t,\\\xi_t\sim \mathcal N(0,\sigma^2)$$
You want "arbitrary variablility", i.e. $\sigma^2$ is not only changing with time, but also in some arbitrary way, whatever you meant by this word. If you meant that it's stochastic, unpredictable, then maybe you need to look at stochastic variance processes where $\sigma^2_t$ is a random process itself. One such model is Heston model, which is popular in derivative pricing in finance.
There are simpler models such as GARCH.
In GARCH the variance is not stochastic in sense that the variance of next step is completely determined by information to date. However, since you constantly get new information, the future variance gets updated at every step. So the variance is also arbitrary albeit in a narrower sense compared to the stochastic volatility process such as Heston. | Simulating random walk with "known" prediction
A random walk process that you're using has a constant (or zero) drift and variance:
$$dW_t=\xi_t,\\\xi_t\sim \mathcal N(0,\sigma^2)$$
You want "arbitrary variablility", i.e. $\sigma^2$ is not only c |
54,781 | What is the probability that Person A will require more tosses of a particular coin than Person B to obtain the first head? | Your first way and result are correct. For the second way, we can exchange the summation order, and $i$ should start from $2$:
$$\begin{align}p&=\pi^2\sum_{i=2}^\infty (1-\pi)^{i-1}\sum_{j=1}^{i-1}(1-\pi)^{j-1}= \pi^2\sum_{i=2}^\infty (1-\pi)^{i-1} \left(\frac{1-(1-\pi)^{i-1}}{\pi}\right)\\&=\pi\sum_{i=2}^\infty(1-\pi)^{i-1}-\pi \sum_{i=2}^\infty (1-\pi)^{2(i-1)}\\&=(1-\pi)-\frac{(1-\pi)^2}{2-\pi}=\frac{1-\pi}{2-\pi}\end{align}$$
which is the same as your first result. | What is the probability that Person A will require more tosses of a particular coin than Person B to | Your first way and result are correct. For the second way, we can exchange the summation order, and $i$ should start from $2$:
$$\begin{align}p&=\pi^2\sum_{i=2}^\infty (1-\pi)^{i-1}\sum_{j=1}^{i-1}(1- | What is the probability that Person A will require more tosses of a particular coin than Person B to obtain the first head?
Your first way and result are correct. For the second way, we can exchange the summation order, and $i$ should start from $2$:
$$\begin{align}p&=\pi^2\sum_{i=2}^\infty (1-\pi)^{i-1}\sum_{j=1}^{i-1}(1-\pi)^{j-1}= \pi^2\sum_{i=2}^\infty (1-\pi)^{i-1} \left(\frac{1-(1-\pi)^{i-1}}{\pi}\right)\\&=\pi\sum_{i=2}^\infty(1-\pi)^{i-1}-\pi \sum_{i=2}^\infty (1-\pi)^{2(i-1)}\\&=(1-\pi)-\frac{(1-\pi)^2}{2-\pi}=\frac{1-\pi}{2-\pi}\end{align}$$
which is the same with your first result. | What is the probability that Person A will require more tosses of a particular coin than Person B to
Your first way and result are correct. For the second way, we can exchange the summation order, and $i$ should start from $2$:
$$\begin{align}p&=\pi^2\sum_{i=2}^\infty (1-\pi)^{i-1}\sum_{j=1}^{i-1}(1- |
54,782 | How to predict by hand in R using splines regression? [closed] | As per my comment, once you fit your model, you can extract the values of the predictors included in the model using the model.matrix() function:
require(stats)
require(splines)
require(graphics)
fm1 <- lm(weight ~ bs(height, df = 5), data = women)
summary(fm1)
model.matrix(fm1)
The R output produced by model.matrix() for your model is as follows:
> model.matrix(fm1)
(Intercept) bs(height, df = 5)1 bs(height, df = 5)2 bs(height, df = 5)3 bs(height, df = 5)4 bs(height, df = 5)5
1 1 0.000000e+00 0.000000000 0.000000000 0.000000e+00 0.000000000
2 1 4.534439e-01 0.059857872 0.001639942 0.000000e+00 0.000000000
3 1 5.969388e-01 0.203352770 0.013119534 0.000000e+00 0.000000000
4 1 5.338010e-01 0.376366618 0.044278426 0.000000e+00 0.000000000
5 1 3.673469e-01 0.524781341 0.104956268 0.000000e+00 0.000000000
6 1 2.001640e-01 0.595025510 0.204719388 9.110787e-05 0.000000000
7 1 9.110787e-02 0.566326531 0.336734694 5.830904e-03 0.000000000
8 1 3.125000e-02 0.468750000 0.468750000 3.125000e-02 0.000000000
9 1 5.830904e-03 0.336734694 0.566326531 9.110787e-02 0.000000000
10 1 9.110787e-05 0.204719388 0.595025510 2.001640e-01 0.000000000
11 1 0.000000e+00 0.104956268 0.524781341 3.673469e-01 0.002915452
12 1 0.000000e+00 0.044278426 0.376366618 5.338010e-01 0.045553936
13 1 0.000000e+00 0.013119534 0.203352770 5.969388e-01 0.186588921
14 1 0.000000e+00 0.001639942 0.059857872 4.534439e-01 0.485058309
15 1 0.000000e+00 0.000000000 0.000000000 0.000000e+00 1.000000000
attr(,"assign")
[1] 0 1 1 1 1 1
According to this output, the weight can be predicted from height via a linear combination of basis functions as follows:
\begin{align*}
weight = \beta_0 + \beta_1*bs(height, df = 5)1 + \beta_2*bs(height, df = 5)2 + \\ \beta_3*bs(height, df = 5)3 + \beta_4* bs(height, df = 5)4 + \\ \beta_5 * bs(height, df = 5)5
\end{align*}
However, since the regression coefficients $\beta_0$ through $\beta_5$ are unknown, you will need to replace them with their estimated values $b_0$ through $b_5$ reported in the model summary when computing the predicted weight. For example, from summary(fm1), you will find that $b_0 = 114.8799$, $b_1 = 3.4657$, etc.
You can therefore predict weight as a function of height - for all the heights observed in the data - using the following R command (which is R's translation of the above equation):
pred <- model.matrix(fm1) %*% coef(fm1)
The predicted weights are as follows:
> pred
[,1]
1 114.8799
2 117.2767
3 119.9608
4 122.8568
5 125.8895
6 128.9841
7 132.1124
8 135.3176
9 138.6491
10 142.1564
11 145.8886
12 149.8935
13 154.2176
14 158.9074
15 164.0095
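As a side note (my addition, not part of the original answer), the same by-hand logic extends to a height that is not in the data, provided the basis functions are evaluated with the same knots as in the fit; calling predict() on the bs object takes care of that bookkeeping:
bs_basis <- bs(women$height, df = 5)       # the basis used in the fit (stores its knots)
new_basis <- predict(bs_basis, newx = 70)  # evaluate the 5 basis functions at height 70
coef(fm1)[1] + sum(coef(fm1)[-1] * as.vector(new_basis)) # intercept + coefficient * basis value
predict(fm1, data.frame(height = 70))      # should agree with the by-hand value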
Finally, you can plot these predicted weights and compare them against your "safe" predictions to see that they look identical:
plot(women, xlab = "Height (in)", ylab = "Weight (lb)")
ht <- seq(57, 73, length.out = 200)
lines(ht, predict(fm1, data.frame(height = ht)), lwd=3, col="grey")
lines(women$height, pred, col = "magenta", lty=2, lwd = 2)
legend("topleft", c("Safe prediction","model.matrix() prediction"),
lty=c(1,2), lwd=c(3,2), col=c("grey","magenta"))
You can plot the basis functions used in the approximation of the effect of height on weight with these commands:
matplot(women$height, model.matrix(fm1)[,-1], type='l',
col=rainbow(5), lwd=2)
legend("top", legend = dimnames(model.matrix(fm1))[[2]][-1],
col=rainbow(5), lty=1:5, bty="n", lwd=2)
The resulting plot is shown below. From this plot, if you want to predict weight for a height of 70 inches, say, you would evaluate each of the 5 basis functions at 70 inches, multiply the results by the corresponding basis function coefficients reported by R, and add the estimated intercept on top:
\begin{align*}
weight = b_0 + b_1*bs(height = 70, df = 5)1 + b_2*bs(height = 70, df = 5)2 + \\ b_3*bs(height = 70, df = 5)3 + b_4* bs(height = 70, df = 5)4 + \\ b_5 * bs(height = 70, df = 5)5
\end{align*}
Here, the $b$'s are used to denote the estimated values of the $\beta$'s. | How to predict by hand in R using splines regression? [closed] | As per my comment, once you fit your model, you can extract the values of the predictors included in the model using the model.matrix() function:
require(stats)
require(splines)
require(graphics)
fm | How to predict by hand in R using splines regression? [closed]
As per my comment, once you fit your model, you can extract the values of the predictors included in the model using the model.matrix() function:
require(stats)
require(splines)
require(graphics)
fm1 <- lm(weight ~ bs(height, df = 5), data = women)
summary(fm1)
model.matrix(fm1)
The R output produced by model.matrix() for your model is as follows:
> model.matrix(fm1)
(Intercept) bs(height, df = 5)1 bs(height, df = 5)2 bs(height, df = 5)3 bs(height, df = 5)4 bs(height, df = 5)5
1 1 0.000000e+00 0.000000000 0.000000000 0.000000e+00 0.000000000
2 1 4.534439e-01 0.059857872 0.001639942 0.000000e+00 0.000000000
3 1 5.969388e-01 0.203352770 0.013119534 0.000000e+00 0.000000000
4 1 5.338010e-01 0.376366618 0.044278426 0.000000e+00 0.000000000
5 1 3.673469e-01 0.524781341 0.104956268 0.000000e+00 0.000000000
6 1 2.001640e-01 0.595025510 0.204719388 9.110787e-05 0.000000000
7 1 9.110787e-02 0.566326531 0.336734694 5.830904e-03 0.000000000
8 1 3.125000e-02 0.468750000 0.468750000 3.125000e-02 0.000000000
9 1 5.830904e-03 0.336734694 0.566326531 9.110787e-02 0.000000000
10 1 9.110787e-05 0.204719388 0.595025510 2.001640e-01 0.000000000
11 1 0.000000e+00 0.104956268 0.524781341 3.673469e-01 0.002915452
12 1 0.000000e+00 0.044278426 0.376366618 5.338010e-01 0.045553936
13 1 0.000000e+00 0.013119534 0.203352770 5.969388e-01 0.186588921
14 1 0.000000e+00 0.001639942 0.059857872 4.534439e-01 0.485058309
15 1 0.000000e+00 0.000000000 0.000000000 0.000000e+00 1.000000000
attr(,"assign")
[1] 0 1 1 1 1 1
According to this output, the weight can be predicted from height via a linear combination of basis functions as follows:
\begin{align*}
weight = \beta_0 + \beta_1*bs(height, df = 5)1 + \beta_2*bs(height, df = 5)2 + \\ \beta_3*bs(height, df = 5)3 + \beta_4* bs(height, df = 5)4 + \\ \beta_5 * bs(height, df = 5)5
\end{align*}
However, since the regression coefficients $\beta_0$ through $\beta_5$ are unknown, you will need to replace them with their estimated values $b0$ through $b5$ reported in the model summary when computing the predicted weight. For example, from summary(fm1), you will find that $b0 = 114.8799$, $b1 = 3.4657$, etc.
You can therefore predict weight as a function of height - for all the heights observed in the data - using the R command, which is R's translation of the above equation):
pred <- model.matrix(fm1) %*% coef(fm1)
The predicted weights are as follows:
> pred
[,1]
1 114.8799
2 117.2767
3 119.9608
4 122.8568
5 125.8895
6 128.9841
7 132.1124
8 135.3176
9 138.6491
10 142.1564
11 145.8886
12 149.8935
13 154.2176
14 158.9074
15 164.0095
Finally, you can plot these predicted weights and compare them against your "safe" predictions to see they look identical:
plot(women, xlab = "Height (in)", ylab = "Weight (lb)")
ht <- seq(57, 73, length.out = 200)
lines(ht, predict(fm1, data.frame(height = ht)), lwd=3, col="grey")
lines(women$height, pred, col = "magenta", lty=2, lwd = 2)
legend("topleft", c("Safe prediction","model.matrix() prediction"),
lty=c(1,2), lwd=c(3,2), col=c("grey","magenta"))
You can plot the basis functions used in the approximation of the effect of height on weight with these commands:
matplot(women$height, model.matrix(fm1)[,-1], type='l',
col=rainbow(5), lwd=2)
legend("top", legend = dimnames(model.matrix(fm1))[[2]][-1],
col=rainbow(5), lty=1:5, bty="n", lwd=2)
The resulting plot is shown below. From this plot, if you want to predict weight for a height of 70 inches, say, you would evaluate each of the 5 basis functions at 70 inches and then multiply the results with the corresponding basis function coefficients reported by R and add the estimated intercept on top of it:
\begin{align*}
weight = b_0 + b_1*bs(height = 70, df = 5)1 + b_2*bs(height = 70, df = 5)2 + \\ b_3*bs(height = 70, df = 5)3 + b_4* bs(height = 70, df = 5)4 + \\ b_5 * bs(height = 70, df = 5)5
\end{align*}
Here, the $b$'s are used to denote the estimated values of the $\beta$'s. | How to predict by hand in R using splines regression? [closed]
As per my comment, once you fit your model, you can extract the values of the predictors included in the model using the model.matrix() function:
require(stats)
require(splines)
require(graphics)
fm |
54,783 | Should I cross-validate a logistic regression model that will not be used to make predictions? | You want to identify "variables that are most strongly related to the outcomes of complaints against practitioners in a profession," but not to predict future outcomes of complaints. Presumably, the idea is to generate hypotheses about factors that might be manipulated in future work to reduce undesirable outcomes. Cross-validation to choose a LASSO model, combined with bootstrapping to gauge the stability of the set of selected predictors, provides one useful approach to this type of problem.
LASSO is typically combined with cross validation to choose the number of predictor variables maintained in the model, based on optimizing an appropriate measure of cross-validated error; deviance is a good choice of measure for logistic regression. In practice, you might find a model similar to one developed by stepwise selection, but the penalization of regression coefficients in LASSO avoids the overfitting with stepwise selection that sends shudders up the spines of statisticians.
If the candidate predictors are correlated, those maintained by any selection scheme can be highly dependent on the particular sample at hand. So it's also important to get a sense of how stable your set of selected predictors might be if you could take more samples from the underlying population. Bootstrapping is the next best thing to taking more samples from the population. Bootstrap sampling with replacement from the data at hand is a reasonable approximation to taking more samples from the population.
So you repeat the entire LASSO model-building process, with its inherent cross-validation to choose predictors, on multiple bootstrap samples from your data set. Then you can see how frequently individual predictors are kept or omitted. That will give you an idea about which predictors might deserve the most focused future attention. That process is reasonably simple to automate with simple scripts.
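A rough sketch of that loop is below (my own code, with placeholder names: x is the matrix of candidate predictors and y the binary outcome, neither taken from the original post):
library(glmnet)
n_boot <- 200
selected <- matrix(0, nrow = n_boot, ncol = ncol(x), dimnames = list(NULL, colnames(x)))
for (b in 1:n_boot) {
  idx <- sample(nrow(x), replace = TRUE)  # bootstrap resample of the rows
  cvfit <- cv.glmnet(x[idx, ], y[idx], family = "binomial", type.measure = "deviance")
  beta <- as.matrix(coef(cvfit, s = "lambda.1se"))[-1, 1]  # fitted coefficients, intercept dropped
  selected[b, ] <- as.numeric(beta != 0)  # 1 if the predictor was kept in this resample
}
sort(colMeans(selected), decreasing = TRUE)  # selection frequency of each predictor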
An Introduction to Statistical Learning works through using cross-validation to optimize LASSO in Section 6.6.2; that example is for linear regression but the approach is the same for logistic regression, with deviance minimized. Statistical Learning with Sparsity illustrates bootstrapping to evaluate the stability of predictors chosen by LASSO and their coefficient values in Section 6.2.
As I noted in a comment, the issue of traditional inference (p-values, confidence intervals, etc) in models that used the data to select predictors is difficult. Chapter 20 of Statistical Learning with Sparsity goes into the problems. As you seem to be primarily interested in using the present data to direct future work, however, that might not be a big issue for you. | Should I cross-validate a logistic regression model that will not be used to make predictions? | You want to identify "variables that are most strongly related to the outcomes of complaints against practitioners in a profession," but not to predict future outcomes of complaints. Presumably, the i | Should I cross-validate a logistic regression model that will not be used to make predictions?
You want to identify "variables that are most strongly related to the outcomes of complaints against practitioners in a profession," but not to predict future outcomes of complaints. Presumably, the idea is to generate hypotheses about factors that might be manipulated in future work to reduce undesirable outcomes. Cross-validation to choose a LASSO model, combined with bootstrapping to gauge the stability of the set of selected predictors, provides one useful approach to this type of problem.
LASSO is typically combined with cross validation to choose the number of predictor variables maintained in the model, based on optimizing an appropriate measure of cross-validated error; deviance is a good choice of measure for logistic regression. In practice, you might find a model similar to one developed by stepwise selection, but the penalization of regression coefficients in LASSO avoids the overfitting with stepwise selection that sends shudders up the spines of statisticians.
If the candidate predictors are correlated, those maintained by any selection scheme can be highly dependent on the particular sample at hand. So it's also important to get a sense of how stable your set of selected predictors might be if you could take more samples from the underlying population. Bootstrapping is the next best thing to taking more samples from the population. Bootstrap sampling with replacement from the data at hand is a reasonable approximation to taking more samples from the population.
So you repeat the entire LASSO model-building process, with its inherent cross-validation to choose predictors, on multiple bootstrap samples from your data set. Then you can see how frequently individual predictors are kept or omitted. That will give you an idea about which predictors might deserve the most focused future attention. That process is reasonably simple to automate with simple scripts.
An Introduction to Statistical Learning works through using cross-validation to optimize LASSO in Section 6.6.2; that example is for linear regression but the approach is the same for logistic regression, with deviance minimized. Statistical Learning with Sparsity illustrates bootstrapping to evaluate the stability of predictors chosen by LASSO and their coefficient values in Section 6.2.
As I noted in a comment, the issue of traditional inference (p-values, confidence intervals, etc) in models that used the data to select predictors is difficult. Chapter 20 of Statistical Learning with Sparsity goes into the problems. As you seem to be primarily interested in using the present data to direct future work, however, that might not be a big issue for you. | Should I cross-validate a logistic regression model that will not be used to make predictions?
You want to identify "variables that are most strongly related to the outcomes of complaints against practitioners in a profession," but not to predict future outcomes of complaints. Presumably, the i |
54,784 | Should I cross-validate a logistic regression model that will not be used to make predictions? | Your question is, IMHO, slightly off the point. In statistics books, a distinction is often made between inference and prediction (e.g. in Harrell 2001 Regression Modeling Strategies, or in Shmueli 2010's paper To explain or to predict?). In your case, I would argue you are actually interested in using the data to form a hypothesis, i.e. explorative data analysis. For that you need no (cross-)validation.
In a way, explorative data analysis imposes the fewest constraints on your analysis. For prediction, you need to make sure that you validate on independent data. For inference, you need to be very careful not to re-use data and fall prey to data snooping. For exploration, however, you only have to make sure you don't interpret your findings' significances (if you have played with your data at all)!
For example, take your data, throw them at some machine-learning tool, such as randomForest, and thereby identify the most important predictors, plot the data, done. But you cannot use the same data to first identify a model structure (e.g. by model selection) and then use the same data to parameterise it and interpret its estimates' significances. That would be data snooping, and your significances will be corrupted (see, e.g., variable selection and model selection).
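A minimal sketch of that 'throw it at randomForest' step (my own code; dat and outcome are placeholder names, and the binary outcome is assumed to be coded as a factor):
library(randomForest)
set.seed(1)
rf <- randomForest(outcome ~ ., data = dat, importance = TRUE)
importance(rf)  # permutation-based and Gini-based importance for each predictor
varImpPlot(rf)  # quick visual ranking of the predictors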
Whether you are using Gini impurity, or partial R2 or standardised model coefficients (my personal favourite) as "importance" is a secondary issue. But to me there seems little point in doing model selection or model comparisons, as you are not interested in the model, but in the variables' importance. I guess fitting the full model and interpreting that is fine. And for that, I would use randomForest rather than a GLM, as it has all bells and whistles (read: non-linearity and interactions) built in off-the-shelf. | Should I cross-validate a logistic regression model that will not be used to make predictions? | Your question is, IMHO, slightly off the point. In statistics book often a distinction is made between inference and prediction (e.g. in Harrell 2001 Regression Modeling Strategies, or in Shmueli 2010 | Should I cross-validate a logistic regression model that will not be used to make predictions?
Your question is, IMHO, slightly off the point. In statistics book often a distinction is made between inference and prediction (e.g. in Harrell 2001 Regression Modeling Strategies, or in Shmueli 2010's paper To explain or to predict?). In your case, I would argue you are actually interested in using the data to form an hypothesis, i.e. explorative data analysis. For that you need no (cross-)validation.
In a way, explorative data analysis imposes least constraints on your analysis. For prediction, you need to make sure that you validate on independent data. For inference, you need to be very careful not to re-use data and fall prey to data snooping. For exploration, however, you only have to make sure you don't interpret your findings' significances (if you have played with your data at all)!
For example, take your data, throw them at some machine-learning tool, such as randomForest, and thereby identify the most important predictors, plot the data, done. But you cannot use the same data to first identify a model structure (e.g. by model selection) and then use the same data to parameterise it and interpret its estimates' significances. That would be data snooping, and your significances will be corrupted (see, e.g., variable selection and model selection).
Whether you are using Gini impurity, or partial R2 or standardised model coefficients (my personal favourite) as "importance" is a secondary issue. But to me there seems little point in doing model selection or model comparisons, as you are not interested in the model, but in the variables' importance. I guess fitting the full model and interpreting that is fine. And for that, I would use randomForest rather than a GLM, as it has all bells and whistles (read: non-linearity and interactions) built in off-the-shelf. | Should I cross-validate a logistic regression model that will not be used to make predictions?
Your question is, IMHO, slightly off the point. In statistics book often a distinction is made between inference and prediction (e.g. in Harrell 2001 Regression Modeling Strategies, or in Shmueli 2010 |
54,785 | Should I cross-validate a logistic regression model that will not be used to make predictions? | The model will not be used to make predictions on future outcomes, but to make inferences about decisions during the time period.
Hoping that I am not mistaken, I understand that your goal is to make causal inference. This means you want to say "such and such a decision caused a different probability of the outcome". Correct me if I am wrong.
Probably the best way to construct a statistical inference model is to know how it is done in the given field and relate to that. This means doing a strong literature review. If that is not possible, one usually looks at other fields that are close in theory and similar econometrically.
I have never seen a cross-validated inference model, thus I would assume it does not help. Such a thing does not seem to be discussed in papers or in econometrics books. I think it could help if you wanted a different thing - finding the best predictors of the outcome.
Information criteria and R$^2$'s seem to be merely hints in constructing inference models.
One possible, and often advised, approach to constructing such a model is to make a list of determinant variables from the articles, and then run the model, showing some iterations (they may be general-to-specific ones) to somehow address the data-mining problem.
Nevertheless, if the model is to infer causation, arguing that correlation is fair evidence for causation, the key problems are endogeneity and omitted-variable bias. It is hard to infer causation for all variables in one article, so most articles showing general determinants of a phenomenon stick to correlation. Inferring causality is often done just for one variable (or a group of variables relating to a single phenomenon).
When constructing a statistical inference model it is still important to have some causal background. I would suggest starting from here, but when looking for a stronger background I would suggest the books Mostly Harmless Econometrics and Causality (both can be found as free PDFs). | Should I cross-validate a logistic regression model that will not be used to make predictions? | The model will not be used to make predictions on future outcomes, but to make inferences about decisions during the time period.
Having lots of hope, that I am not mistaken, I understand, that your | Should I cross-validate a logistic regression model that will not be used to make predictions?
The model will not be used to make predictions on future outcomes, but to make inferences about decisions during the time period.
Having lots of hope, that I am not mistaken, I understand, that your goal is to make causal inferernce. This means you want to say "such and such decision caused different probability of outcome". Correct me if I am wrong.
Probably the best way to construct statistical inference model is to know how it is done in the given field and relate to that. This means doing a strong literature review. If it is not possible, usually one looks at other fields, which are close in theory, and similar econometrically.
I have never seen cross-validated inference model, thus I would assume it does not help. Nor in papers nor in econometrics books such thing seems to be discussed. It think it could help if you wanted a different thing - finding the best predictors of outcome.
Information Criterions and R$^2$'s seem to be just a merely hints in constructing inference models.
One possible, and often advised, approach to construct such model is to make a list of determinant variables from the articles, and then run model, showing some iterations (they may be general-to-specific ones) to somehow address data mining problem.
Nevertheless if the model is to infer about causation, argumenting that correlation is fair evidence for causation, the key problem is endogeinety and omitted variable bias. It is hard to infer about causation of all variables in one article, so most articles showing general determinants of a phenomenon stick to correlation. Inferring about causality is often done just for one variable (group of variables relating to single phenomenon).
When constructing statistical inference model it is still important to have some causal background. I would suggest to start from here, but when looking for stronger background I would suggest Mostly harmless econometrics and Causality books (both possible to find as pdf for free). | Should I cross-validate a logistic regression model that will not be used to make predictions?
The model will not be used to make predictions on future outcomes, but to make inferences about decisions during the time period.
Having lots of hope, that I am not mistaken, I understand, that your |
54,786 | What does beta distribution "has support" mean? | I think it is better read this way:
the beta prior has and only has support over all valid probabilities
However, I think it is a bit redundant in terms of its meaning? | What does beta distribution "has support" mean? | I think it is better read this way:
the beta prior has and only has support over all valid probabilities
However, I think it is a bit redundant in terms of its meaning? | What does beta distribution "has support" mean?
I think it is better read this way:
the beta prior has and only has support over all valid probabilities
However, I think it is a bit redundant in terms of its meaning? | What does beta distribution "has support" mean?
I think it is better read this way:
the beta prior has and only has support over all valid probabilities
However, I think it is a bit redundant in terms of its meaning? |
54,787 | What does beta distribution "has support" mean? | For a single probability parameter, the interval $\mathscr{P} \equiv [0,1]$ is the set of "all valid probabilities and only valid probabilities". Thus, when they say that the Beta distribution "has support over" $\mathscr{P}$, what they mean is that for any random variable $p \sim \text{Beta}$ we have:
$$\begin{equation} \begin{aligned}
\mathbb{P}(p \in \mathscr{P}) &= 1, \\[6pt]
\mathbb{P}(p \in \mathscr{S}) &<1 \text{ for every closed (measurable) set } \mathscr{S} \subset \mathscr{P}.
\end{aligned} \end{equation}$$
In other words, the interval $\mathscr{P}$ is the smallest closed set containing $p$ with probability one. Informally, this can be thought of as the closure of the set of possible values of $p$. (Note that the same reasoning occurs for the Dirichlet distribution when you extend to a probability vector.) The support can also be thought of as the closure of the set of values with non-zero density, so another way to think of the support here is that:
$$\mathscr{P} = \text{cl} \{ p \in \mathbb{R} | \text{Beta}(p|\alpha,\beta) > 0 \}.$$ | What does beta distribution "has support" mean? | For a single probability parameter, the interval $\mathscr{P} \equiv [0,1]$ is the set of "all valid probabilities and only valid probabilities". Thus, when they say that the Beta distribution "has s | What does beta distribution "has support" mean?
For a single probability parameter, the interval $\mathscr{P} \equiv [0,1]$ is the set of "all valid probabilities and only valid probabilities". Thus, when they say that the Beta distribution "has support over" $\mathscr{P}$, what they mean is that for any random variable $p \sim \text{Beta}$ we have:
$$\begin{equation} \begin{aligned}
\mathbb{P}(p \in \mathscr{P}) &= 1, \\[6pt]
\mathbb{P}(p \in \mathscr{S}) &<1 \text{ for every closed (measureable) set } \mathscr{S} \subset \mathscr{P}.
\end{aligned} \end{equation}$$
In other words, the interval $\mathscr{P}$ is the smallest closed set containing $p$ with probability one. Informally, this can be thought of as the closure of the set of possible values of $p$. (Note that the same reasoning occurs for the Dirichlet distribution when you extend to a probability vector.) The support can also be thought of as the closure of the set of values with non-zero density, so another way to think of the support here is that:
$$\mathscr{P} = \text{cl} \{ p \in \mathbb{R} | \text{Beta}(p|\alpha,\beta) > 0 \}.$$ | What does beta distribution "has support" mean?
For a single probability parameter, the interval $\mathscr{P} \equiv [0,1]$ is the set of "all valid probabilities and only valid probabilities". Thus, when they say that the Beta distribution "has s |
54,788 | What does beta distribution "has support" mean? | I wanted to provide a non-technical answer to your question.
The beta distribution is commonly used in the context of modeling proportions.
As an example, let’s say you select (at random) 100 geographic sites for a study and you keep track for each site what proportion of the site’s area can be classified as “forest”. (If 20% of the area of the first site can be classified as “forest”, the proportion of interest is 0.20, etc.)
If you denote by X the random variable defined as “proportion of site’s area that can be classified as a forest”, then you might want to assume that X follows a beta distribution.
What does it mean that this beta distribution has a support of (0,1)?
It simply means that the possible values of X span the interval (0,1). In particular, the 100 realized values of X collected in your study would all be expected to be strictly greater than 0 and strictly less than 1.
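For instance (a made-up illustration, not part of the original answer), 100 site proportions drawn from a Beta(2, 5) distribution all fall strictly between 0 and 1:
set.seed(1)
x <- rbeta(100, shape1 = 2, shape2 = 5)  # simulated forest proportions for 100 sites
range(x)                                 # strictly inside (0, 1)
hist(x, breaks = 20, main = "Simulated forest proportions, Beta(2, 5)")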
This is fine if you selected your sites so as to always include a mixture of forest and grasses.
But what if you’re interested in sites which might be all forest, all grasses or a combination of both?
Then you might have to model X via a zero-and-one inflated beta distribution, whose support is the interval [0,1]. In that case, the support of the distribution of X tells you that the possible values of X could live anywhere inside the interval [0,1] (including at the edges of this interval). For the example of with 100 sites, this might mean that a fraction of those sites would have no forest on them, a fraction would have some forest and a fraction would have only forest on them. | What does beta distribution "has support" mean? | I wanted to provide a non-technical answer to your question.
The beta distribution is commonly used in the context of modeling proportions.
As an example, let’s say you select (at random) 100 geogr | What does beta distribution "has support" mean?
I wanted to provide a non-technical answer to your question.
The beta distribution is commonly used in the context of modeling proportions.
As an example, let’s say you select (at random) 100 geographic sites for a study and you keep track for each site what proportion of the site’s area can be classified as “forest”. (If 20% of the area of the first site can be classified as “forest”, the proportion of interest is 0.20, etc.)
If you denote by X the random variable defined as “proportion of site’s area that can be classified as a forest”, then you might want to assume that X follows a beta distribution.
What does it mean that this beta distribution has a support of (0,1)?
It simply means that the possible values of X span the interval (0,1). In particular, the 100 realized values of X collected in your study would all be expected to be strictly greater than 0 and strictly less than 1.
This is fine if you selected your sites so as to always include a mixture of forest and grasses.
But what if you’re interested in sites which might be all forest, all grasses or a combination of both?
Then you might have to model X via a zero-and-one inflated beta distribution, whose support is the interval [0,1]. In that case, the support of the distribution of X tells you that the possible values of X could live anywhere inside the interval [0,1] (including at the edges of this interval). For the example of with 100 sites, this might mean that a fraction of those sites would have no forest on them, a fraction would have some forest and a fraction would have only forest on them. | What does beta distribution "has support" mean?
I wanted to provide a non-technical answer to your question.
The beta distribution is commonly used in the context of modeling proportions.
As an example, let’s say you select (at random) 100 geogr |
54,789 | Can anyone give a concrete example to illustrate what is an uniform prior? | Let's say you don't know the probability of heads, $p$, of a coin. You decide to conduct an experiment to estimate what it is, via Bayesian analysis. It requires you to choose a prior, and in general you're free to choose one of the feasible ones. If you don't know or don't want to assume anything about this $p$, you can say that it is uniformly distributed in $[0,1]$, in which case $f_P(p)=1$ for $0\leq p\leq 1$, and $0$ otherwise. This is quite similar to saying that any $p$ value in $[0,1]$ is equally likely. This prior distribution is a uniform prior.
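To make this concrete (my own sketch, not part of the original answer): a uniform prior on $p$ is the Beta(1,1) distribution, so after observing, say, 7 heads in 10 tosses the posterior is Beta(1+7, 1+3):
p <- seq(0, 1, length.out = 200)
plot(p, dbeta(p, 1, 1), type = "l", lty = 2, ylim = c(0, 3), ylab = "density",
     main = "Uniform prior (dashed) and posterior after 7 heads in 10 tosses")
lines(p, dbeta(p, 1 + 7, 1 + 3))  # posterior under the uniform prior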
You can also choose other priors that focus on different regions of $[0,1]$. For example, if you choose a prior like $f_P(p)=\frac{3}{2}(1-(1-2p)^2),\ \ \ 0\leq p\leq 1$, you'll assume that $p$ is more likely to be around $0.5$ than near the edge cases $p=0$ and $p=1$. | Can anyone give a concrete example to illustrate what is an uniform prior? | Let's say you don't know the probability of head, $p$, of a coin. You decide to conduct an experiment to estimate what it is, via Bayesian analysis. It requires you to choose a prior, and in general y
Let's say you don't know the probability of head, $p$, of a coin. You decide to conduct an experiment to estimate what it is, via Bayesian analysis. It requires you to choose a prior, and in general you're free to choose one of the feasible ones. If you don't know or don't want to assume anything about this $p$, you can say that it is uniformly distributed in $[0,1]$, in which $f_P(p)=1, 0\leq p\leq 1$, $0$ otherwise. This is quite similar to saying that any $p$ value in $[0,1]$ is equally likely. This prior distribution is a uniform prior.
You can also choose other priors, that focus on different regions in $[0,1]$, for example, if you choose a prior like $f_P(p)=\frac{3}{2}(1-(1-2p)^2),\ \ \ 0\leq p\leq 1$, you'll assume that $p$ is more likely to be around $0.5$ compared to edge cases, such as $p=0,p=1$. | Can anyone give a concrete example to illustrate what is an uniform prior?
Let's say you don't know the probability of head, $p$, of a coin. You decide to conduct an experiment to estimate what it is, via Bayesian analysis. It requires you to choose a prior, and in general y |
54,790 | Can anyone give a concrete example to illustrate what is an uniform prior? | The notion of uniform prior understood as a prior with a constant density $\pi(\theta)=c$ is not well-defined (or even meaningful) as it depends on both
the dominating measure that determines the density function of the prior (i.e., how one measures volume);
the parameterisation $\theta$ of the sampling model $f(x|\theta)$ for which the prior is constructed, e.g. variance versus precision.
If either entry is modified, the density of the prior changes as well and stops being constant.
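A quick numerical illustration of the second point (my addition): if $p$ is uniform on $[0,1]$, the implied distribution of $q=\sqrt{p}$ is not uniform; its density is $2q$:
set.seed(1)
p <- runif(1e5)  # draws from the "uniform prior" on p
q <- sqrt(p)     # the same prior, expressed in the parameterisation q = sqrt(p)
hist(q, breaks = 50, freq = FALSE, main = "q = sqrt(p) under a uniform prior on p")
curve(2 * x, add = TRUE)  # theoretical density of q, clearly not flat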
In the earlier answer,
the dominating measure may be the Lebesgue measure, $\text{d}p$, constant over the unit interval, or it may be the Haldane measure, $[p(1-p)]^{-1}\text{d}p$, which explodes to infinity in zero and one. For this latter measure, there is no possible uniform prior, as the measure is only $\sigma$-finite, not finite, i.e., it does not have a finite total mass; and
the Bernoulli model can be parameterised in $p$, in $q=\sqrt{p}$, or in $r=\log(p)$. A uniform prior on $p$ does not induce a uniform prior on $q$ (or conversely), and there is no possible (proper) uniform prior on $r$, which varies over $(-\infty,0)$. | Can anyone give a concrete example to illustrate what is an uniform prior? | The notion of uniform prior understood as a prior with a constant density $\pi(\theta)=c$ is not well-defined (or even meaningful) as it depends on both
the dominating measure that determines the den | Can anyone give a concrete example to illustrate what is an uniform prior?
The notion of uniform prior understood as a prior with a constant density $\pi(\theta)=c$ is not well-defined (or even meaningful) as it depends on both
the dominating measure that determines the density function of the prior (i.e., one measures volume);
the parameterisation $\theta$ of the sampling model $f(x|\theta)$ for which the prior is constructed, e.g. variance versus precision.
If either entry is modified, the density of the prior changes as well and stops being constant.
In the earlier answer,
the dominating measure may be the Lebesgue measure, $\text{d}p$, constant over the unit interval, or it may be the Haldane measure, $[p(1-p)]^{-1}\text{d}p$, which explodes to infinity in zero and one. For this latter measure, there is no possible uniform prior, as the measure is $\sigma$-finite, i.e., does not have a finite mass and
the Bernoulli model can be parameterised in $p$, in $q=\sqrt{p}$, or in $r=\log(p)$. A uniform prior on $p$ does not induce a uniform prior on $q$ (or conversely) and there is no possible prior on $r$, which varies on $(-\infty,0)$. | Can anyone give a concrete example to illustrate what is an uniform prior?
The notion of uniform prior understood as a prior with a constant density $\pi(\theta)=c$ is not well-defined (or even meaningful) as it depends on both
the dominating measure that determines the den |
54,791 | Can anyone give a concrete example to illustrate what is an uniform prior? | When the prior distribution $\pi$ of the parameter $\theta$ to be estimated is the Uniform distribution, i.e. $\theta\sim U(a,b)$, we refer to the prior $\pi$ as a uniform or uninformative prior. I'm not sure what's not to understand here except the basics of Bayesian inference and the Uniform distribution.
The best way to understand the uniform distribution is via a Monte Carlo sample of fair die rolls. The probability of every outcome $P(X=x_i),~ i=1,...,6$ is equal to $1/6$ and the histogram of the empirical distribution should be approximating a horizontal line (i.e. is uniform). | Can anyone give a concrete example to illustrate what is an uniform prior? | When the prior distribution $\pi$, of the parameter $\theta$ to be estimated is the Uniform distribution, i.e. $\pi(\theta)\sim U(a,b)$, we refer to prior $\pi$ as a uniform or uninformative prior. I' | Can anyone give a concrete example to illustrate what is an uniform prior?
When the prior distribution $\pi$, of the parameter $\theta$ to be estimated is the Uniform distribution, i.e. $\pi(\theta)\sim U(a,b)$, we refer to prior $\pi$ as a uniform or uninformative prior. I'm not sure what's not to understand here except the basics of Bayesian inference and the Uniform distribution.
The best way to understand the uniform distribution is via a Monte Carlo sample of fair die rolls. The probability of every outcome $P(X=x_i),~ i=1,...,6$ is equal to $1/6$ and the histogram of the empirical distribution should be approximating a horizontal line (i.e. is uniform). | Can anyone give a concrete example to illustrate what is an uniform prior?
When the prior distribution $\pi$, of the parameter $\theta$ to be estimated is the Uniform distribution, i.e. $\pi(\theta)\sim U(a,b)$, we refer to prior $\pi$ as a uniform or uninformative prior. I' |
54,792 | Big Sample size, Small coefficients, significant results. What should I do? | I think it's been asked before. It's useful to realize that, without a prespecified sample size and alpha level, the $p$-value is just a measure of the sample size you ultimately wind up with. Not appealing. An approach I use is this: at what sample size would a 0.05 level be appropriate? Scale accordingly. For instance, I feel the 0.05 level is often suited to problems where there are 100 observations. That is: I would say WOW that is an interesting finding if it had a 1/20 chance of being a false positive. So if you had a sample size of 5,000, that's 50 times larger than 100. So divide your 0.05 level by 50 and come up with 0.001 as a significance level. This is in line with what Fisher advocated: don't do significance testing with p-values, compare them to the power of the study. The sample size is the simplest/rawest measure of the study's power. An overpowered study with a conventional 0.05 cut off makes absolutely no sense.
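Written as a formula (my paraphrase of the scaling rule above, not the author's notation), the idea is roughly $$\alpha_N \approx 0.05 \times \frac{100}{N},$$ so a study with $N = 5{,}000$ gets $\alpha \approx 0.001$, exactly as in the example.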
Usually, it is never advisable to choose a significance cut-off after viewing data and results. One might believe it might be kosher to arbitrarily choose a more stringent significance criterion post hoc. Actually, it only deceives readers into thinking you ran a better controlled trial than you did. Think of it this way: if you observed p = 0.04, you wouldn't be asking this question; the analysis would be a tidy inferential package.
Another way to look at it is this: just report the CI and that the analysis was statistically significant. For instance, you might have a 95% CI for a hazard ratio that goes from (0.01, 0.16) - the null is 1. It suffices to say that the p-value is really freakin' small, so you don't need to clutter the page displaying p=0.0000000023 (don't do this... only show p out to its precision, if 3 decimal places show p < 0.001 and never round to 0.000 - that shows you don't know what a p-value means.). | Big Sample size, Small coefficients, significant results. What should I do? | I think it's been asked before. It's useful to realize that, without a prespecified sample size and alpha level, the $p$-value is just a measure of the sample size you ultimately wind up with. Not app | Big Sample size, Small coefficients, significant results. What should I do?
I think it's been asked before. It's useful to realize that, without a prespecified sample size and alpha level, the $p$-value is just a measure of the sample size you ultimately wind up with. Not appealing. An approach I use is this: at what sample size would a 0.05 level be appropriate? Scale accordingly. For instance, I feel the 0.05 level is often suited to problems where there are 100 observations. That is: I would say WOW that is an interesting finding if it had a 1/20 chance of being a false positive. So if you had a sample size of 5,000, that's 50 times larger than 100. So divide your 0.05 level by 50 and come up with 0.001 as a significance level. This is in line with what Fisher advocated: don't do significance testing with p-values, compare them to the power of the study. The sample size is the simplest/rawest measure of the study's power. An overpowered study with a conventional 0.05 cut off makes absolutely no sense.
Usually, it is never advisable to choose a significance cut-off after viewing data and results. One might believe it might be kosher to arbitrarily choose a more stringent significance criterion post hoc. Actually, it only deceives readers into thinking you ran a better controlled trial than you did. Think of it this way: if you observed p = 0.04, you wouldn't be asking this question; the analysis would be a tidy inferential package.
Another way to look at it is this: just report the CI and that the analysis was statistically significant. For instance, you might have a 95% CI for a hazard ratio that goes from (0.01, 0.16) - the null is 1. It suffices to say that the p-value is really freakin' small, so you don't need to clutter the page displaying p=0.0000000023 (don't do this... only show p out to its precision, if 3 decimal places show p < 0.001 and never round to 0.000 - that shows you don't know what a p-value means.). | Big Sample size, Small coefficients, significant results. What should I do?
I think it's been asked before. It's useful to realize that, without a prespecified sample size and alpha level, the $p$-value is just a measure of the sample size you ultimately wind up with. Not app |
54,793 | Big Sample size, Small coefficients, significant results. What should I do? | You have encountered the gulf between "statistically significant" and "meaningful". As you point out, with sufficient sample size, you can assign statistical significance to arbitrarily small differences - there is no difference too small that can't be called "significant" with large enough N. You need to use domain knowledge to determine what is a "meaningful" difference. You might find, for example, that a new drug increases a person's lifespan by 10 seconds - even though you can be very confident that that increase is not due to random variation in your data, it's hardly a meaningful increase in lifespan.
Some of this will come from knowing about your problem and what people in the field consider meaningful. You could also try to think of future studies that might replicate your findings, and the typical N that they might use. If future studies will likely have a much lower N, you could calculate the effect size needed to replicate your findings in data of that size, and only report significant, meaningful, and feasibly reproducible results. | Big Sample size, Small coefficients, significant results. What should I do? | You have encountered the gulf between "statistically significant" and "meaningful". As you point out, with sufficient sample size, you can assign statistical significance to arbitrarily small differen | Big Sample size, Small coefficients, significant results. What should I do?
You have encountered the gulf between "statistically significant" and "meaningful". As you point out, with sufficient sample size, you can assign statistical significance to arbitrarily small differences - there is no difference too small that can't be called "significant" with large enough N. You need to use domain knowledge to determine what is a "meaningful" difference. You might find, for example, that a new drug increases a person's lifespan by 10 seconds - even though you can be very confident that that increase is not due to random variation in your data, it's hardly a meaningful increase in lifespan.
Some of this will come from knowing about your problem and what people in the field consider meaningful. You could also try to think of future studies that might replicate your findings, and the typical N that they might use. If future studies will likely have a much lower N, you could calculate the effect size needed to replicate your findings in data of that size, and only report significant, meaningful, and feasibly reproducible results. | Big Sample size, Small coefficients, significant results. What should I do?
You have encountered the gulf between "statistically significant" and "meaningful". As you point out, with sufficient sample size, you can assign statistical significance to arbitrarily small differen |
54,794 | Big Sample size, Small coefficients, significant results. What should I do? | When you have many samples and the observed effect is very small (small for the specified application), you can safely conclude that the independent variables do not have an important effect on the dependent variable. The effect size can be “statistically significant” and unimportant at the same time.
Using a small sample size and ignoring the results from the large sample is inappropriate. You owe that to the people who read your paper and design new experiments based on your observations. | Big Sample size, Small coefficients, significant results. What should I do? | When you have many samples and the observed effect is very small (small for the specified application), you can safely conclude that the independent variables do not have an important effect on the de
When you have many samples and the observed effect is very small (small for the specified application), you can safely conclude that the independent variables do not have an important effect on the dependent variable. The effect size can be “statistically significant” and unimportant at the same time.
Using small sample size and ignoring the results from the large samples is inappropriate. You owe that to the people that read your paper and design some new experiments based on your observations. | Big Sample size, Small coefficients, significant results. What should I do?
When you have many samples and the observed effect is very small (small for the specified application), you can safely conclude that the independent variables do not have an important effect on the de |
54,795 | Big Sample size, Small coefficients, significant results. What should I do? | I think you should decide on an "expected minimal effect size", i.e. the minimal coefficients you care to include in your model. Say, do you care about coefficients less than 0.0001, or 1, or 100? To clarify, the effect size is the degree to which the null hypothesis is false, or how large the coefficient actually is. It's a parameter of the population. On the other hand, the expected minimal effect size is the minimal amount of departure from the null you care to detect. It's a parameter of the test.
Now that you have the sample size $N = 35000$, as well as some expected minimal effect size, a power analysis should reveal the relationship between $\alpha$ and $\beta$ given these parameters. Next, make another decision about how to balance your significance level and power by choosing a pair of $\alpha$ and $\beta$. (Technically, all these parameters must be decided before looking at the data, but at this point, I guess you can just pretend you didn't see them.) Then, carry out your test, compare $p$ with $\alpha$, and draw a conclusion accordingly.
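As an illustration of such a power analysis (my own sketch using the pwr package; the effect size r = 0.02 is an arbitrary placeholder for your expected minimal effect size):
library(pwr)
# With n = 35000 fixed, solve for power at a given alpha and minimal effect size...
pwr.r.test(n = 35000, r = 0.02, sig.level = 0.001)
# ...or leave sig.level = NULL to find the alpha that gives a desired power.
pwr.r.test(n = 35000, r = 0.02, power = 0.9, sig.level = NULL)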
By the way, I believe there are no reasons to exclude any record, unless you are doing cross-validation, for example. More data generally leads to more accurate inference, and additionally, discarding sample points in a selective manner may introduce bias. | Big Sample size, Small coefficients, significant results. What should I do? | I think you should decide on an "expected minimal effect size", i.e. the minimal coefficients you care to include in your model. Say, do you care about coefficients less than 0.0001, or 1, or 100? To | Big Sample size, Small coefficients, significant results. What should I do?
I think you should decide on an "expected minimal effect size", i.e. the minimal coefficients you care to include in your model. Say, do you care about coefficients less than 0.0001, or 1, or 100? To clarify, the effect size is the degree to which the null hypothesis is false, or how large the coefficient actually is. It's a parameter of the population. On the other hand, the expected minimal effect size is the minimal amount of departure from the null you care to detect. It's a parameter of the test.
Now that you have the sample size $N = 35000$, as well as some expected minimal effect size, a power analysis should reveal the relationship between $\alpha$ and $\beta$ given there parameters. Next, make another decision about how to balance your significance level and power by choosing a pair of $\alpha$ and $\beta$. (Technically, all these parameters must be decided before looking at the data, but at this point, I guess you can just pretend you didn't see them.) Then, carry out your test, compare $p$ with $\alpha$, and draw a conclusion accordingly.
By the way, I believe there are no reasons to exclude any record, unless you are doing cross-validation, for example. More data generally leads to more accurate inference, and additionally, discarding sample points in a selective manner may introduce bias. | Big Sample size, Small coefficients, significant results. What should I do?
I think you should decide on an "expected minimal effect size", i.e. the minimal coefficients you care to include in your model. Say, do you care about coefficients less than 0.0001, or 1, or 100? To |
54,796 | Combining together principal components from PCA performed on different subsets of a large dataset | Note: though the old answer (below the line) was accepted, the comment below alerted me to the fact that I had misinterpreted the question. My old answer pertains to comparing PCs on different batches of observations (i.e. different rows). But the question is actually about doing PCA on different batches of variables (i.e. different columns). I will now address this.
In order to reduce dimensionality, a PCA calculates orthogonal vectors from the entire set of variables. If you do not do the PCA on all variables, you are by definition not achieving this basic goal. By doing PCAs on 5000 variables at a time and retaining 500 PCs from each of the 12 batches, you are at risk of capturing plenty of redundant information in your final set of 6000 PCs. If there are a few dominant axes of variation, these would be captured over and over in each of the 12 batches. You could check the extent to which this is true by doing another PCA on your aggregated 6000 PCs.
As for better solutions, I'm not an expert, but here are a couple of thoughts. (i) There are Incremental PCA methods specifically designed for this, and I think they work by loading a few rows into memory at a time. (ii) As that implies, I think you need to use all variables (columns) to do the PCA, but you do not need to use all observations (rows). So a simple option is to do the PCA on a subset of the observations instead and then apply that PCA to the rest of the dataset.
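A minimal sketch of option (ii) (my own code; X is a placeholder for the full data matrix and 5000 is an arbitrary subset size):
idx <- sample(nrow(X), 5000)                      # fit the PCA on a random subset of rows
pca <- prcomp(X[idx, ], center = TRUE)
scores_rest <- predict(pca, newdata = X[-idx, ])  # project the remaining rows onto the same PCs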
You're correct that this is a problem: based on how this has been done, the PCs cannot be compared with each other across batches [of observations].
This is mainly because even small differences in the covariance structure between batches will lead to different orthogonal vectors being identified. In other words, PC1 on batch 1 and PC1 on batch 2 represent different things! If you examine the loadings of some of the PCs across batches, you will see these differences. But even if the covariance structure were identical for some magical reason, a PC might have reversed coefficient signs in a different batch, because the signs are arbitrary.
The simplest thing to do would be to do a PCA on all the data simultaneously. If that is too much of a computational challenge, you can do it on a random subset of the data and then apply that PCA to the remaining data. This has been discussed in a number of questions on this site, e.g.:
How is PCA applied to new data?
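A small illustration of fitting the PCA on a subset of rows and projecting the rest (the matrix below is simulated, not your data, and the sizes are arbitrary):
# Fit the PCA on a random subset of observations, then project the remaining rows
set.seed(1)
X <- matrix(rnorm(20000 * 50), nrow = 20000) # stand-in for the full data matrix
sub <- sample(nrow(X), 2000) # random subset of rows
pca <- prcomp(X[sub, ], center = TRUE, scale. = TRUE)
scores_rest <- predict(pca, newdata = X[-sub, ]) # same rotation applied to the held-out rows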
As an aside, I note that you are applying a PCA to binary data. Though this can be done, there is valuable discussion here about what that implies and possible better alternatives:
Doing principal component analysis or factor analysis on binary data
Can principal component analysis be applied to datasets containing a mix of continuous and categorical variables? | Combining together principal components from PCA performed on different subsets of a large dataset | Note: though the old answer (below the line) was accepted, the comment below alerted me to the fact that I had misinterpreted the question. My old answer pertains to comparing PCs on different batches | Combining together principal components from PCA performed on different subsets of a large dataset
Note: though the old answer (below the line) was accepted, the comment below alerted me to the fact that I had misinterpreted the question. My old answer pertains to comparing PCs on different batches of observations (i.e. different rows). But the question is actually about doing PCs on different batches of variables (i.e. different columns). I will now address this.
In order to reduce dimensionality, a PCA calculates orthogonal vectors from the entire set of variables. If you do not do the PCA on all variables, you are by definition not achieving this basic goal. By doing PCAs on 5000 variables at a time and retaining 500 PCs from each of the 12 batches, you are at risk of capturing plenty of redundant information in your final set of 6000 PCs. If there are a few dominant axes of variation, these would be captured over and over in each of the 12 batches. You could check the extent to which this is true by doing another PCA on your aggregated 6000 PCs.
As for better solutions, I'm not an expert, but here are a couple of thoughts. (i) There are Incremental PCA methods specifically designed for this, and I think they work by loading a few rows into memory at a time. (ii) As that implies, I think you need to use all variables (columns) to do the PCA, but you do not need to use all observations (rows). So a simple option is to do the PCA on a subset of the observations instead and then apply it to the rest of the dataset.
You're correct that this is a problem: based on how this has been done, the PCs cannot be compared with each other across batches [of observations].
This is mainly because even small differences in the covariance structure between batches will lead to different orthogonal vectors being identified. In other words, PC1 on batch 1 and PC1 on batch 2 represent different things! If you examine the loadings of some of the PCs across batches, you will see these differences. But even if the covariance structure were identical for some magical reason, a PC might have reversed coefficient signs in a different batch, because the signs are arbitrary.
The simplest thing to do would be to do a PCA on all the data simultaneously. If that is too much of a computational challenge, you can do it on a random subset of the data and then apply that PCA to the remaining data. This has been discussed in a number of questions on this site, e.g.:
How is PCA applied to new data?
As an aside, I note that you are applying a PCA to binary data. Though this can be done, there is valuable discussion here about what that implies and possible better alternatives:
Doing principal component analysis or factor analysis on binary data
Can principal component analysis be applied to datasets containing a mix of continuous and categorical variables? | Combining together principal components from PCA performed on different subsets of a large dataset
Note: though the old answer (below the line) was accepted, the comment below alerted me to the fact that I had misinterpreted the question. My old answer pertains to comparing PCs on different batches |
54,797 | How to form groups before randomizing the treatment assignment? | A reasonable approach here is to use block randomisation, where you create non-randomised blocks of subjects, grouping together like subjects (e.g., by prior skill variables), and then you create random groups by randomly allocating people from the blocks into the different treatment groups. This gives you randomised treatment groups, but it also reduces collinearity between the treatment group and the prior skill variable, which has later advantages when you come to analyse your model via regression methods.
For simplicity, suppose you have $N = sC$ subjects, where $s \in \mathbb{N}$ (i.e., suppose your number of subjects is an exact multiple of the desired group size), and suppose that each subject has a covariate $z_i$ representing their prior skill. In this case, you would form $s$ blocks of $C$ subjects by ordering the subjects by their score and then taking consecutive blocks over those scores. So you would have one group of $C$ subjects with the lowest scores, another group of $C$ subjects with the next lowest scores, etc. You would then allocate each block of $C$ subjects to the $C$ random treatment groups (via a random permutation). For the simple case where you have an exact multiple of the desired group size, you can implement this in R as follows:
#Generate mock data containing ID and Score for each subject
N <- 400;
DATA <- data.frame(ID = 1:N, Score = ceiling(runif(N)*100));
#Allocate blocks to subjects
S <- 40;
C <- N/S;
BBB <- rep(1:S, each = C);
RRR <- rank(DATA$Score, ties.method = 'first');
DATA$Block <- BBB[RRR];
#Randomise into treatments
set.seed(12345);
TTT <- rep(0, N);
for (s in 1:S) { TTT[((s-1)*C+1):(s*C)] <- order(runif(C)); }
DATA$Treat <- TTT[RRR]; | How to form groups before randomizing the treatment assignment? | A reasonable approach here is to use block randomisation, where you create non-randomised blocks of subjects, grouping together like subjects (e.g., by prior skill variables), and then you create rand | How to form groups before randomizing the treatment assignment?
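A quick sanity check of the code above (this snippet is an illustrative addition, not part of the original answer): each block should contribute exactly one subject to every treatment group, and the group means of Score should be close.
# Each block contributes one subject per treatment group
table(DATA$Block, DATA$Treat)
# Treatment groups should have similar Score distributions
aggregate(Score ~ Treat, data = DATA, FUN = mean)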
A reasonable approach here is to use block randomisation, where you create non-randomised blocks of subjects, grouping together like subjects (e.g., by prior skill variables), and then you create random groups by randomly allocating people from the blocks into the different treatment groups. This gives you randomised treatment groups, but it also reduces collinearity between the treatment group and the prior skill variable, which has later advantages when you come to analyse your model via regression methods.
For simplicity, suppose you have $N = sC$ subjects, where $s \in \mathbb{N}$ (i.e., suppose your number of subjects is an exact multiple of the desired group size), and suppose that each subject has a covariate $z_i$ representing their prior skill. In this case, you would form $s$ blocks of $C$ subjects by ordering the subjects by their score and then taking consecutive blocks over those scores. So you would have one group of $C$ subjects with the lowest scores, another group of $C$ subjects with the next lowest scores, etc. You would then allocate each block of $C$ subjects to the $C$ random treatment groups (via a random permutation). For the simple case where you have an exact multiple of the desired group size, you can implement this in R as follows:
#Generate mock data containing ID and Score for each subject
N <- 400;
DATA <- data.frame(ID = 1:N, Score = ceiling(runif(N)*100));
#Allocate blocks to subjects
S <- 40;
C <- N/S;
BBB <- rep(1:S, each = C);
RRR <- rank(DATA$Score, ties.method = 'first');
DATA$Block <- BBB[RRR];
#Randomise into treatments
set.seed(12345);
TTT <- rep(0, N);
for (s in 1:S) { TTT[((s-1)*C+1):(s*C)] <- order(runif(C)); }
DATA$Treat <- TTT[RRR]; | How to form groups before randomizing the treatment assignment?
A reasonable approach here is to use block randomisation, where you create non-randomised blocks of subjects, grouping together like subjects (e.g., by prior skill variables), and then you create rand |
54,798 | How to form groups before randomizing the treatment assignment? | You want to balance prior skills across classes, because they may influence the outcome and confound the results. This is analogous to clinical trials where covariates that are known to influence the prognosis need to be controlled for. This is done at the level of assignment through stratified randomization (https://www.statisticshowto.datasciencecentral.com/stratified-randomization/).
In your case, that would consist in dividing the sample of baseline measures in $k$ quantiles. Then, each of the $k$ subgroups of students is randomly but evenly assigned to the $C$ classes. A good value of $k$ will depend on $N$ and $C$. | How to form groups before randomizing the treatment assignment? | You want to balance prior skills across classes, because they may influence the outcome and confound the results. This is analogous to clinical trials where covariates that are known to influence the | How to form groups before randomizing the treatment assignment?
You want to balance prior skills across classes, because they may influence the outcome and confound the results. This is analogous to clinical trials where covariates that are known to influence the prognosis need to be controlled for. This is done at the level of assignment through stratified randomization (https://www.statisticshowto.datasciencecentral.com/stratified-randomization/).
In your case, that would consist in dividing the sample of baseline measures in $k$ quantiles. Then, each of the $k$ subgroups of students is randomly but evenly assigned to the $C$ classes. A good value of $k$ will depend on $N$ and $C$. | How to form groups before randomizing the treatment assignment?
You want to balance prior skills across classes, because they may influence the outcome and confound the results. This is analogous to clinical trials where covariates that are known to influence the |
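A rough sketch of the stratified assignment described in the answer above (everything here, names and sizes included, is an illustrative assumption and not part of the original answer):
# Divide students into k quantile groups of baseline score, then deal each
# group out evenly and at random across the C classes
set.seed(42)
N <- 400; C <- 10; k <- 5
score <- rnorm(N) # stand-in for the baseline skill measure
stratum <- cut(score, quantile(score, seq(0, 1, length.out = k + 1)),
               include.lowest = TRUE, labels = FALSE)
class <- integer(N)
for (q in 1:k) {
  idx <- which(stratum == q)
  class[idx] <- sample(rep_len(1:C, length(idx)))
}
table(stratum, class) # each stratum should be spread evenly over the classes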
54,799 | How to form groups before randomizing the treatment assignment? | What is nice is that regardless of how you allocate the students to the classes, as long as you randomly assign the classes to treatment, your effect estimate will be free of confounding. Ideally, though, you want the class composition to be the same on average between the treatment groups (i.e., for the distribution of baseline skill to be the same across treated classrooms as across control classrooms). Here's one way you could do this:
Perform a match on students with respect to baseline skill. That means for each student, find a student with similar baseline skill and make them a pair. You could try to minimize some global measure of pairwise imbalance to ensure no one is paired with someone too different from them. For example, if you were to use a greedy algorithm, the first students to be paired would be paired with students close to them, while the last students to be paired will have to be paired with whoever is left, who might not be so close. An optimal matching algorithm might instead yield a set of pairs for which the distance between the members of each pair is small on average, and large distances are penalized. One member of each pair will eventually go to treatment and the other to control. This type of matching (i.e., before treatment has been assigned) is called non-bipartite matching. There are a few software packages that can do this. One I found is the nbpMatching R package. I believe the designmatch R package can do this as well.
Once you have your matched pairs, we want to assign the pairs into classroom blocks. It doesn't really matter how this is done because the treatment effect will be unbiased regardless, but to reduce the within-classroom treatment effect variability you could create strata of the pairs based on average pairwise baseline skill and then make each stratum (or a randomly sliced block of a stratum) into a classroom-block. This way, each classroom-block will contain pairs roughly homogeneous in baseline skill level. Within each classroom block, randomly assign one member of each pair to treatment and the other to control. The treated students become their own classroom and the control students become their own classroom (i.e., each classroom-block splits into two classrooms, one treated and the other control).
The value of this design is that it makes extensive use of pairing and stratification. Students are cross-classified both into student-pairs and classrooms. Because the students within pairs are similar to each other, conditioning on pair membership in the analysis will greatly improve the precision of your effect estimate. Likewise, because classrooms within classroom-blocks are similar to each other, conditioning on classroom-block membership will improve the precision of the estimate. Allowing the treatment effect to vary across classroom-blocks also allows you to assess the extent to which the average classroom-level baseline skill might change the treatment effect.
Your final analysis could be a multilevel model with student performance as the outcome, a random effect for student-pairs, fixed or random effects for classrooms, fixed effects for classroom-blocks, and a treatment by classroom-block interaction. The effect of treatment overall could be computed as the average of the classroom-block-specific treatment effects, and you can use the classroom-block-specific treatment effect to assess the degree to which the treatment effect varies across classrooms with different levels of average baseline skill. If that seems like too much, you can run a GEE or regression model of the outcome on treatment alone (or with some fixed effects) that accounts for the pairing and nesting within classrooms and classroom-blocks using cluster-robust standard errors. No matter what analysis you perform, as long as you are comparing the treatment group means, you effect estimate will be unbiased. All this design stuff is just a matter of improving precision and adding interpretational benefits. | How to form groups before randomizing the treatment assignment? | What is nice is that regardless of how you allocate the students to the classes, as long as you randomly assign the classes to treatment, your effect estimate will be free of confounding. Ideally, tho | How to form groups before randomizing the treatment assignment?
What is nice is that regardless of how you allocate the students to the classes, as long as you randomly assign the classes to treatment, your effect estimate will be free of confounding. Ideally, though, you want the class composition to be the same on average between the treatment groups (i.e., for the distribution of baseline skill to be the same across treated classrooms as across control classrooms). Here's one way you could do this:
Perform a match on students with respect to baseline skill. That means for each student, find a student with similar baseline skill and make them a pair. You could try to minimize some global measure of pairwise imbalance to ensure no one is paired with someone too different from them. For example, if you were to use a greedy algorithm, the first students to be paired would be paired with students close to them, while the last students to be paired will have to be paired with whoever is left, who might not be so close. An optimal matching algorithm might instead yield a set of pairs for which the distance between the members of each pair is small on average, and large distances are penalized. One member of each pair will eventually go to treatment and the other to control. This type of matching (i.e., before treatment has been assigned) is called non-bipartite matching. There are a few software packages that can do this. One I found is the nbpMatching R package. I believe the designmatch R package can do this as well.
Once you have your matched pairs, we want to assign the pairs into classroom blocks. It doesn't really matter how this is done because the treatment effect will be unbiased regardless, but to reduce the within-classroom treatment effect variability you could create strata of the pairs based on average pairwise baseline skill and then make each stratum (or a randomly sliced block of a stratum) into a classroom-block. This way, each classroom-block will contain pairs roughly homogeneous in baseline skill level. Within each classroom block, randomly assign one member of each pair to treatment and the other to control. The treated students become their own classroom and the control students become their own classroom (i.e., each classroom-block splits into two classrooms, one treated and the other control).
The value of this design is that it makes extensive use of pairing and stratification. Students are cross-classified both into student-pairs and classrooms. Because the students within pairs are similar to each other, conditioning on pair membership in the analysis will greatly improve the precision of your effect estimate. Likewise, because classrooms within classroom-blocks are similar to each other, conditioning on classroom-block membership will improve the precision of the estimate. Allowing the treatment effect to vary across classroom-blocks also allows you to assess the extent to which the average classroom-level baseline skill might change the treatment effect.
Your final analysis could be a multilevel model with student performance as the outcome, a random effect for student-pairs, fixed or random effects for classrooms, fixed effects for classroom-blocks, and a treatment by classroom-block interaction. The effect of treatment overall could be computed as the average of the classroom-block-specific treatment effects, and you can use the classroom-block-specific treatment effect to assess the degree to which the treatment effect varies across classrooms with different levels of average baseline skill. If that seems like too much, you can run a GEE or regression model of the outcome on treatment alone (or with some fixed effects) that accounts for the pairing and nesting within classrooms and classroom-blocks using cluster-robust standard errors. No matter what analysis you perform, as long as you are comparing the treatment group means, your effect estimate will be unbiased. All this design stuff is just a matter of improving precision and adding interpretational benefits. | How to form groups before randomizing the treatment assignment?
What is nice is that regardless of how you allocate the students to the classes, as long as you randomly assign the classes to treatment, your effect estimate will be free of confounding. Ideally, tho |
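For reference, the multilevel model suggested in the answer above might be coded roughly like this with lme4; the data frame and variable names (dat, performance, treat, block, pair, classroom) are assumptions for illustration, not part of the original answer:
library(lme4)
# Random intercepts for student-pair and classroom, fixed classroom-block
# effects, and a treatment-by-classroom-block interaction
fit <- lmer(performance ~ treat * block + (1 | pair) + (1 | classroom), data = dat)
summary(fit)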
54,800 | How to form groups before randomizing the treatment assignment? | You can do better by randomizing treatment at the student level and then forming the classes deterministically. Consider the following procedure.
1) Match the N students based on baseline skill and any other important covariates you want to include, e.g. gender, race, etc. This will form N/2 student pairs. References on how to do this are below.
2) Randomly assign one student in each pair to treatment A and the other to treatment B.
The randomization is now done. Your next steps are to form the classrooms. The following forms classrooms balanced on baseline skill.
3) Create C/2 class pairs labeled (1-A, 1-B), (2-A, 2-B), ..., (C/2-A, C/2-B).
4) Sort the pairs of students by the mean baseline skill score within each pair. Don't sort by individual skill scores or any individual variables. Only sort by the pairs' mean values within each pair.
5) Place the highest ranked pair of students in class pair 1, with the student randomized to A going into 1-A and the student randomized to B going into 1-B. Place the next highest ranked pair into class pair 2. Repeat until all C/2 class pairs have received one pair of students. Then start again at class pair 1 for the next highest ranked pair of students. (A short code sketch of this procedure follows below.)
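A minimal sketch of this procedure, using simple adjacent-rank pairing in place of the optimal non-bipartite matching referenced below; all names and sizes are made up for illustration:
set.seed(7)
N <- 40; C <- 4
dat <- data.frame(id = 1:N, skill = rnorm(N))
# Steps 1)-2): pair students adjacent in skill, randomize A/B within each pair
dat <- dat[order(dat$skill), ]
dat$pair <- rep(seq_len(N / 2), each = 2)
dat$treat <- NA_character_
for (p in seq_len(N / 2)) dat$treat[dat$pair == p] <- sample(c("A", "B"))
# Steps 3)-5): order pairs by mean skill and deal them out to the C/2 class pairs
dat$pair_mean <- ave(dat$skill, dat$pair)
pr <- unique(dat$pair[order(-dat$pair_mean)])
class_pair <- setNames(rep_len(seq_len(C / 2), length(pr)), pr)
dat$class <- paste0(class_pair[as.character(dat$pair)], "-", dat$treat)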
You now have C/2 class pairs that are fairly well balanced between each other on baseline skill. However, that isn't essential to your inference. What is important is that you have treatment A and B very well balanced on the student level and your randomization has occurred at the student level.
If you were to randomize the classes to A and B, you would have a cluster randomized trial and would need to analyze it as such. Here, instead, you have a student randomized trial where the intervention is delivered in clusters. This makes it easier to justify a wider range of analysis methods.
I really like Noah's answer in this thread. Our ideas are very similar. I recommend reading his post too.
References:
1) Optimal multivariate matching before randomization:
https://www.ncbi.nlm.nih.gov/pubmed/15054030
2) Optimal Nonbipartite Matching and Its Statistical Applications:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3501247/
3) nbpMatching:
https://cran.r-project.org/web/packages/nbpMatching/nbpMatching.pdf
4) Tutorials:
http://biostat.mc.vanderbilt.edu/MatchedRandomization | How to form groups before randomizing the treatment assignment? | You can do better by randomizing treatment at the student level and then forming the classes deterministically. Consider the following procedure.
1) Match the N students based on baseline skill and a | How to form groups before randomizing the treatment assignment?
You can do better by randomizing treatment at the student level and then forming the classes deterministically. Consider the following procedure.
1) Match the N students based on baseline skill and any other important covariates you want to include, e.g. gender, race, etc. This will form N/2 student pairs. References on how to do this are below.
2) Randomly assign one student in each pair to treatment A and the other to treatment B.
The randomization is now done. Your next steps are to form the classrooms. The following forms classrooms balanced on baseline skill.
3) Create C/2 class pairs labeled (1-A, 1-B), (2-A, 2-B), ..., (C/2-A, C/2-B).
4) Sort the pairs of students by the mean baseline skill score within each pair. Don't sort by individual skill scores or any individual variables. Only sort by the pairs' mean values within each pair.
5) Place the highest ranked pair of students in class pair 1, with the student randomized to A going into 1-A and the student randomized to B going into 1-B. Place the next highest ranked pair into class pair 2. Repeat until all C/2 class pairs have received one pair of students. Then start again at class pair 1 for the next highest ranked pair of students.
You now have C/2 class pairs that are fairly well balanced between each other on baseline skill. However, that isn't essential to your inference. What is important is that you have treatment A and B very well balanced on the student level and your randomization has occurred at the student level.
If you were to randomize the classes to A and B, you would have a cluster randomized trial and would need to analyze it as such. Here, instead, you have a student randomized trial where the intervention is delivered in clusters. This makes it easier to justify a wider range of analysis methods.
I really like Noah's answer in this thread. Our ideas are very similar. I recommend reading his post too.
References:
1) Optimal multivariate matching before randomization:
https://www.ncbi.nlm.nih.gov/pubmed/15054030
2) Optimal Nonbipartite Matching and Its Statistical Applications:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3501247/
3) nbpMatching:
https://cran.r-project.org/web/packages/nbpMatching/nbpMatching.pdf
4) Tutorials:
http://biostat.mc.vanderbilt.edu/MatchedRandomization | How to form groups before randomizing the treatment assignment?
You can do better by randomizing treatment at the student level and then forming the classes deterministically. Consider the following procedure.
1) Match the N students based on baseline skill and a |