idx | question | answer
---|---|---
52,401 | Confidence bands in case of fitting ARIMA in R? | I've included an example below to show one method of calculating Bartlett's approximations and adding them to a graph of the autocorrelation function.
In the example, I've done things by hand rather than rely on a particular package, hence the code is longer than it perhaps needs to be. I don't claim the code to be efficient, but it does get the correct results.
The results in my example can be compared for accuracy to Figure C1.2 in Case 1 of Pankratz (1983), which uses the same dataset that I've used. Don't worry if you don't have the book, because the content of Case 1 is available to download for free.
Note that with slight modifications, the code below can be adapted to plot the standard errors on the PACF, which Scortchi has correctly pointed out should equal $n^{-1/2}$.
# Import data from the web
inventories <- scan("http://robjhyndman.com/tsdldata/books/pankratz.dat", skip=5, nlines=5, sep="")
# Calculate sample size and mean
n <- length(inventories)
mean.inventories <- sum(inventories)/n
# Express the data in deviations from the mean
z.bar <- rep(mean.inventories,n)
deviations <- inventories - z.bar
# Calculate the sum of squared deviations from the mean
squaredDeviations <- deviations^2
sumOfSquaredDeviations <- sum(squaredDeviations)
# Create empty vector to store autocorrelation coefficients
r <- c()
# Use a for loop to fill the vector with the coefficients (lags 1 to n - 1)
for (k in 1:(n - 1)) {
ends <- n - k
starts <- 1 + k
r[k] <- sum(deviations[1:(ends)]*deviations[(starts):(n)])/sumOfSquaredDeviations
}
# Create empty vector to store Bartlett's standard errors
bart.error <- c()
# Use a for loop to fill the vector with the standard errors
for (k in 1:n) {
ends <- k-1
bart.error[k] <- ((1 + sum((2*r[0:(ends)]^2)))^0.5)*(n^-0.5)
}
# Plot the autocorrelation function
plot(r[1:(n/4)],
type="h",
main="Autocorrelation Function",
xlab="Lag",
ylab="ACF",
ylim=c(-1,1),
las=1)
abline(h=0)
# Add Bartlett's standard errors to the plot
lines( 2 * bart.error[1:(n/4)])
lines(-2 * bart.error[1:(n/4)])
After running the code you should see a plot of the autocorrelation function with Bartlett's two-standard-error bands superimposed.
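As a quick cross-check, here is a minimal sketch assuming the inventories series and n from the code above are still in memory: R's built-in acf() can draw the same Bartlett-type bands (ci.type = "ma"), and pacf() draws the flat ±2/sqrt(n) bands mentioned above.
# Cross-check with base R (assumes 'inventories' and 'n' from above)
acf(inventories, lag.max = floor(n/4), ci.type = "ma")   # Bartlett-style widening bands
pacf(inventories, lag.max = floor(n/4))                  # PACF bands are a flat +/- 2/sqrt(n)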
52,402 | Do skewness and kurtosis uniquely determine type of distribution? | No, they are not enough. These are only the third and fourth standardized moments, and distributions may differ in higher-order moments. Note also that some moments may not exist.
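For instance, here is a minimal sketch in R of a three-point distribution that matches the mean, variance, skewness and kurtosis of the standard normal while obviously being a different distribution:
# A discrete distribution sharing the first four moments of N(0, 1)
x <- c(-sqrt(3), 0, sqrt(3))
p <- c(1/6, 2/3, 1/6)
mu   <- sum(p * x)                      # 0, as for the normal
s2   <- sum(p * (x - mu)^2)             # 1, as for the normal
skew <- sum(p * (x - mu)^3) / s2^1.5    # 0, as for the normal
kurt <- sum(p * (x - mu)^4) / s2^2      # 3, as for the normal
c(mean = mu, var = s2, skew = skew, kurt = kurt)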
52,403 | Do skewness and kurtosis uniquely determine type of distribution? | The general answer is no; they are just moments. To keep it simple: does the mean (first moment) determine the type of distribution?
This works only if you are working within a certain family of distributions. For example, you can define a Gaussian with only the mean and the variance. This is what happened in your example: they are fitting models.
On a finite interval, knowledge of all the moments determines a unique distribution if and only if the Hankel matrices $H_n$ are all positive semi-definite, where $(H_n)_{i,j} = m_{i+j}$ and $m_i$ is the $i$-th moment.
On an infinite interval there are similar conditions, but no such equivalence. For example, the log-normal distribution is not determined by its moments.
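To illustrate the log-normal case, here is a minimal sketch of Heyde's classical counterexample: perturbing the log-normal density by $a\sin(2\pi\log x)$ leaves every integer moment unchanged, which can be checked numerically on the log scale.
# Heyde's counterexample: perturbed densities share all moments with the lognormal
moment_lognormal <- function(k, a = 0) {
  integrand <- function(y) exp(k * y) * dnorm(y) * (1 + a * sin(2 * pi * y))  # y = log(x)
  integrate(integrand, -Inf, Inf, rel.tol = 1e-10)$value
}
cbind(lognormal = sapply(1:4, moment_lognormal),
      perturbed = sapply(1:4, moment_lognormal, a = 0.5),
      exact     = exp((1:4)^2 / 2))   # closed form exp(k^2 / 2)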
52,404 | Covariance of binary and continuous variable | This answer (and all the comments from the second comment onwards on the main question) comes from a question, now-closed as a duplicate of the one above by ophelie, from Thevesh Theva who asked for a proof of
$$\operatorname{cov}(X,Y) = E[Y\mid X=1] - E[Y\mid X=0],
\tag{1}$$ which is a false result. In fact, for $X \sim \text{Bernoulli}(p)$ for which $E[X]=p$,
\begin{align}\require{cancel}
\operatorname{cov}(X,Y) &= E[XY]-E[X]E[Y]\\
&= \big(E[XY\mid X=1]\cdot p + E[XY\mid X=0]\cdot(1-p)\big) - p\cdot E[Y]\\
&= E[1\cdot Y\mid X=1]\cdot p + \cancel{E[0\cdot Y\mid X=0]}\cdot(1-p) - p\cdot E[Y]\\
&= E[Y\mid X=1]\cdot p - p\cdot \big(E[Y\mid X=1]\cdot p + E[Y\mid X=0]\cdot (1-p)\big)\\
&= E[Y\mid X=1]\cdot p(1-p) - E[Y\mid X=0]\cdot p(1-p)\\
&= \big(E[Y\mid X=1] - E[Y\mid X=0]\big)\cdot p(1-p) \tag{2}
\end{align}
where we have an extra factor of $p(1-p)$ (with maximum value $\frac 14$)
compared to Thevesh Theva's claim $(1)$. This result also matches the one asked for above by ophelie, which is
proved more succinctly in this answer by Skullduggery.
The claim $(1)$ is a misreading of a result about the ratio of two covariances:
$$\frac{\operatorname{cov}(X,Y)}{\operatorname{cov}(X,Z)} =
\frac{E[Y\mid X=1] - E[Y\mid X=0]}{E[Z\mid X=1] - E[Z\mid X=0]}$$
in which the $p(1-p)$ common factor in the numerator and denominator has cancelled out, misleading unwary readers into believing that the authors of the paper are claiming that $\operatorname{cov}(X,Y) = E[Y\mid X=1] - E[Y\mid X=0]$.
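A minimal simulation sketch of result $(2)$, using an arbitrary made-up data-generating process (the particular numbers are only for illustration):
# Simulation check of (2): cov(X, Y) = (E[Y|X=1] - E[Y|X=0]) * p * (1 - p)
set.seed(1)
n <- 1e6; p <- 0.3
x <- rbinom(n, 1, p)
y <- 2 + 1.5 * x + rnorm(n)   # so E[Y|X=1] - E[Y|X=0] = 1.5
cov(x, y)                                                       # sample covariance
(mean(y[x == 1]) - mean(y[x == 0])) * mean(x) * (1 - mean(x))   # right-hand side of (2)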
52,405 | Covariance of binary and continuous variable | \begin{eqnarray}
Cov(y,d) &=& E(y \cdot d) - E(y) \cdot E(d) \\
&=& p \cdot E(y|d=1)-[p \cdot E(y|d=1) + (1-p) \cdot E(y|d=0)] \cdot p \\
&=& p \cdot (1-p) \cdot [E(y|d=1) - E(y|d=0)]
\end{eqnarray}
52,406 | Analysis of variance not statistically significant... but is there still a pattern to the data? | You're thinking about your ANOVA incorrectly. It's OK, lots of people are taught ANOVA that way. A significant ANOVA does not mean there are significant differences between any particular levels of the predictor variable: none of the pairwise comparisons may be significant and yet the ANOVA is. It means that the pattern of data has meaning. Simply report your significant ANOVA and describe the pattern of data. That sounds exactly like what you want to do anyway.
As a suggestion for improvement, it would be a much more meaningful analysis if you did a regression and had some idea about the mathematical relationship between your two variables. It looks a bit exponential, but even a simple line would fit pretty well. In fact, a comparison between an ANOVA and a linear regression here would show very little advantage from all of the degrees of freedom of the ANOVA, and it would allow you to make a more direct statement about the relationship between the variables.
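A minimal sketch of that comparison in R, assuming a hypothetical data frame d with a numeric predictor x and response y:
# Compare a 1-df linear trend against the full set of group means (ANOVA-style factor model)
m_lin <- lm(y ~ x, data = d)           # linear regression
m_fac <- lm(y ~ factor(x), data = d)   # one mean per level of x
anova(m_lin, m_fac)                    # lack-of-fit test: do the extra df improve the fit?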
52,407 | Analysis of variance not statistically significant... but is there still a pattern to the data? | You are running into one of the fundamental problems with p-values: they are partly dependent on sample size.
So, when you increase the sample size, smaller effect sizes become significant. This accounts for both changes that you report: 1) more comparisons become significant, because smaller effect sizes now reach significance (this seems like a good thing); 2) the test of heterogeneity becomes significant for the same reason.
It is better, in my view, to concentrate on effect sizes and confidence intervals.
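A minimal sketch of the sample-size effect: holding the effect size fixed, the power of a two-sample t-test (the probability of a "significant" result) climbs toward 1 purely as a function of n.
# Power of a two-sample t-test for a fixed effect size (Cohen's d = 0.3)
sapply(c(20, 100, 500, 2000), function(n)
  power.t.test(n = n, delta = 0.3, sd = 1, sig.level = 0.05)$power)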
52,408 | Formal definition of random assignment | While Michael Chernick gave a good answer, I do not think that the people who are involved in treatment effect estimation think in terms of finite populations and randomization-based inference. Economists (Angrist and Imbens are well-known econometricians) usually don't; if the OP comes from the same tradition, that is the central issue of this question.
Economists have a model perspective instead, where there is a conceptual population from which units are taken, and some sort of implicit permutation invariance, or "labels do not matter", assumption is made for these sampled units. It is this permutation invariance that is being characterized and quantified in the definition given by the OP. In finite populations, though, every unit is assumed to be unique, and denying a certain treatment for it in the randomization mechanism would produce an unestimable treatment effect. Switching from model-based inference to randomization-based inference is very difficult; this may have been done in the cited paper, but not very clearly.
52,409 | Formal definition of random assignment | This definition of random assignment seems to be assigning with equal probability. To assign zero weight to any of the possible assignments could create bias and should be considered a nonrandom assignment by any definition. However, sampling with unequal nonzero weights can be an acceptable procedure (e.g. sampling randomly with probability proportional to size, or stratified random sampling with unequal samples per stratum, are survey sampling examples). They fit into a more general definition of random sampling. If one is estimating a mean, a weighted average can be used to get an unbiased estimate of the population mean. By excluding a possible outcome you change the population, and it is not appropriate to draw an inference to the larger population that you chose not to sample from. Also, because potential samples were excluded, it is impossible to make a weighted adjustment that guarantees an unbiased estimate of the mean of the unrestricted population.
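A minimal sketch of the weighted-average point: as long as every unit has a known, nonzero inclusion probability, inverse-probability (Horvitz-Thompson style) weighting recovers the population mean. The population and probabilities below are made up for illustration.
# Unequal but nonzero inclusion probabilities with inverse-probability weighting
set.seed(7)
N <- 1000
y <- rgamma(N, shape = 2, rate = 0.5)              # a finite population
prob <- 0.05 + 0.9 * (y - min(y)) / diff(range(y)) # inclusion prob roughly proportional to size
s <- rbinom(N, 1, prob) == 1                       # Poisson sampling: every unit has prob > 0
c(weighted = sum(y[s] / prob[s]) / N,              # Horvitz-Thompson estimate of the mean
  truth    = mean(y))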
52,410 | Formal definition of random assignment | One thing that you'll notice in the AIR paper is that they do not condition on covariates $X$. You can generalize the AIR exposition by doing so.
Let $X$ be an indicator for whether a subject is male. Also suppose that you want men to be more likely to receive treatment than women. You can have
$$\Pr(z = c \mid X=1) = \Pr(z = c^\prime \mid X=1)$$ and
$$\Pr(z = c \mid X=0) = \Pr(z = c^\prime \mid X=0),$$ but $$\Pr(z = c \mid X=1) > \Pr(z = c \mid X=0)$$ and still satisfy random assignment in this context. This would justify stratified sampling, for example.
The difference between this generalization and one in which arbitrary vectors are just excluded is that here each person in a particular stratum has the same probability of entering treatment, while your suggestion would target specific people to be less likely to enter treatment. If you could do this in a systematic way based upon observable features of the observations, as in the stratification case, you'll be in the clear, but an unsystematic holding back of some members of the population can bias your results.
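A minimal sketch of such stratified assignment in R, with made-up treatment probabilities of 0.7 for men and 0.3 for women; within each stratum every subject faces the same probability.
# Stratified random assignment with different treatment probabilities per stratum
set.seed(3)
n <- 10
male    <- rbinom(n, 1, 0.5)              # the stratifying covariate X
p_treat <- ifelse(male == 1, 0.7, 0.3)    # hypothetical stratum-specific probabilities
z       <- rbinom(n, 1, p_treat)          # assignment vector
data.frame(male, p_treat, z)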
52,411 | Bonferroni and Greenhouse-Geisser corrections with repeated-measures ANOVA | Regarding Bonferroni (and multiple comparisons issues in general) Jacob Cohen, in his book on regression, said "this is a subject on which reasonable people can differ". There are arguments for not doing such corrections at all (see, e.g., this piece by Andrew Gelman). I find such arguments persuasive.
If you reduce the chance of a type 1 error then (other things being equal) you increase the chance of a type 2 error (that is, you reduce power). In many fields, the "given" values are .05 for type 1 error and .8 for power (.2 for type 2 error). But these may or may not be sensible. A type 2 error may be much worse than a type 1 error.
In my view, the important thing is not statistical significance, and certainly not whether it is below .05, but effect size and precision of effect size.
If you correct for multiple comparisons, it is really a question of what "multiple" is. Paraphrasing Cohen, should it be for one experiment? One article? One set of results? Results about one topic? All results you ever do? Or what?
From a purely practical point of view, you may be required to do some particular thing to satisfy journal editors or other supervisors.
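If you do decide to correct, here is a minimal sketch of what the Bonferroni adjustment actually does to a handful of hypothetical p-values:
# Bonferroni simply multiplies each p-value by the number of tests (capped at 1)
p <- c(0.004, 0.02, 0.03, 0.2)
p.adjust(p, method = "bonferroni")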
52,412 | Bonferroni and Greenhouse-Geisser corrections with repeated-measures ANOVA | A couple of points - too long for a comment:
If you are protecting your pre-planned tests with a Bonferroni correction, then there is no need to run the original ANOVA. The "double protection" only loses power.
Most of the standard "post-hoc" tests for ANOVA do not need the protection of the original F-test. Essentially the only approach that needs it is when the post-hoc tests are not adjusted at all. This is usually not recommended, because this approach only provides weak control of the familywise type I error rate (i.e. the error rate of the pairwise comparisons is controlled only if all the means are equal)
You are doing strange things with the sphericity assumption. Approximately, it means that any pairwise differences have the same variance. If you are willing to assume that the entire design satisfies it, then any subset should as well. So if you are using GG correction for the subset, you should be using it for the entire set (the reverse does not have to be true). If the GG correction changes the results substantially (not just moving the p-value from 0.048 to 0.052), then the sphericity is probably not satisfied.
You keep confusing "not significant" with "there is no difference". While this is tempting to do, this thinking leads to all sort of apparent paradoxes. | Bonferroni and Greenhouse-Geisser corrections with repeated-measures ANOVA | A couple of points - too long for a comment:
If you are protecting your pre-planned tests with Bonferroni correction, than there is no need to run the original ANOVA. The "double protection" only loo | Bonferroni and Greenhouse-Geisser corrections with repeated-measures ANOVA
A couple of points - too long for a comment:
If you are protecting your pre-planned tests with Bonferroni correction, than there is no need to run the original ANOVA. The "double protection" only looses power.
Most of the standard "post-hoc" tests for ANOVA do not need the protection of the original F-test. Essentially the only approach that needs it is when the post-hoc tests are not adjusted at all. This is usually not recommended, because this approach only provides weak control of the familywise type I error rate (i.e. the error rate of the pairwise comparisons is controlled only if all the means are equal)
You are doing strange things with the sphericity assumption. Approximately, it means that any pairwise differences have the same variance. If you are willing to assume that the entire design satisfies it, then any subset should as well. So if you are using GG correction for the subset, you should be using it for the entire set (the reverse does not have to be true). If the GG correction changes the results substantially (not just moving the p-value from 0.048 to 0.052), then the sphericity is probably not satisfied.
You keep confusing "not significant" with "there is no difference". While this is tempting to do, this thinking leads to all sort of apparent paradoxes. | Bonferroni and Greenhouse-Geisser corrections with repeated-measures ANOVA
A couple of points - too long for a comment:
If you are protecting your pre-planned tests with Bonferroni correction, than there is no need to run the original ANOVA. The "double protection" only loo |
52,413 | What can I do if my logistic regression model doesn't predict anything? | What do you mean by doesn't predict? Are you implying the model is doing the same as randomly guessing?
Maybe your cutoff (for predicting a 'positive' result) is not adequate? You may want to try producing some ROC curves based on the data you currently have to choose an appropriate cutoff. You would want to take into consideration the 'cost' of making a false positive as compared to a false negative when choosing this cutoff.
If you are still not doing well, then your predictors are probably not associated with the response.
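A minimal sketch of such a cutoff sweep in base R, assuming a hypothetical fitted model fit and a 0/1 outcome vector y from the same data:
# Sensitivity and specificity across candidate cutoffs (an ROC-style sweep)
p_hat <- predict(fit, type = "response")
cuts  <- seq(0.05, 0.95, by = 0.05)
perf  <- t(sapply(cuts, function(cc) {
  pred <- as.integer(p_hat >= cc)
  c(cutoff      = cc,
    sensitivity = mean(pred[y == 1] == 1),
    specificity = mean(pred[y == 0] == 0))
}))
perf   # weigh the cost of false positives vs. false negatives when picking a row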
Maybe your cutoff (for predicting a 'positive' result) is not adequate? You may way want to tr | What can I do if my logistic regression model doesn't predict anything?
What do you mean by doesn't predict? Are you implying the model is doing the same as randomly guessing?
Maybe your cutoff (for predicting a 'positive' result) is not adequate? You may way want to try producing some ROC curves based on data you currently have to choose an appropriate cutoff. You would want take into consideration the 'cost' of making a false positive as compared to a false negative when choosing this cutoff.
If you are still not doing well then your predictors are probably not associated with the response. | What can I do if my logistic regression model doesn't predict anything?
What do you mean by doesn't predict? Are you implying the model is doing the same as randomly guessing?
Maybe your cutoff (for predicting a 'positive' result) is not adequate? You may way want to tr |
52,414 | What can I do if my logistic regression model doesn't predict anything? | A general strategy when a model has no predictive power is to start over.
But does it really have no predictive power? That is, does it do no better than flipping a coin?
In general, and with only rare exceptions, models will do better on the data they were trained on than on new data.
Beyond that, some more context might help.
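A minimal sketch of that train-versus-new-data gap for a logistic model, using a hypothetical data frame d with a 0/1 outcome y:
# Hold out 30% of the rows and compare in-sample with out-of-sample accuracy
set.seed(11)
idx <- sample(nrow(d), size = floor(0.7 * nrow(d)))
fit <- glm(y ~ ., data = d[idx, ], family = binomial)
acc <- function(newdata) mean((predict(fit, newdata, type = "response") > 0.5) == newdata$y)
c(train = acc(d[idx, ]), test = acc(d[-idx, ]))   # the training number is usually rosier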
But does it really have no predictive power? That is does it do no better than flipping a coin?
In general, and with only ra | What can I do if my logistic regression model doesn't predict anything?
A general strategy when a model has no predictive power is to start over.
But does it really have no predictive power? That is does it do no better than flipping a coin?
In general, and with only rare exceptions, models will do better on the data they were trained on then on new data.
Beyond that, some more context might help. | What can I do if my logistic regression model doesn't predict anything?
A general strategy when a model has no predictive power is to start over.
But does it really have no predictive power? That is does it do no better than flipping a coin?
In general, and with only ra |
52,415 | What can I do if my logistic regression model doesn't predict anything? | What were you plotting for predicted values and actual values? The model predicts either log-odds or some other value, depending on what you ask predict to return. It could be a probability. Actual values are just 0 or 1. One way around this is to bin your actual values over subranges of the predictor and take the means (the probability of a 1), or convert those means to log-odds.
You need to specify in your question what you're asking predict to return and what the "actual" values you're comparing it to are.
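A minimal sketch of that binning idea, assuming a hypothetical fitted model fit and outcome vector y:
# Binned observed proportions against average predicted probabilities (a calibration plot)
p_hat <- predict(fit, type = "response")
bins  <- cut(p_hat, breaks = quantile(p_hat, probs = seq(0, 1, 0.1)), include.lowest = TRUE)
obs   <- tapply(y, bins, mean)       # observed proportion of 1s in each bin
pred  <- tapply(p_hat, bins, mean)   # mean predicted probability in each bin
plot(pred, obs, xlab = "Mean predicted probability", ylab = "Observed proportion")
abline(0, 1)                         # points near this line indicate good calibration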
52,416 | Analyze and visualize participants response towards particular condition | For two, avoid dynamite plots (see Drummond & Vowler, 2011), and utilize dot plots since you only have 15 participants. You can super-impose confidence lines over the dot plots, and you can create a category axis label to label the dots/bars/lines, foregoing the need to differentiate between categories using color, point/line symbols or other hashings.
I will post back an example using your data later, but for now the paper cited above has several examples perfectly applicable to your situation, and one is inserted below.
Since you tagged the question r, this previous question has applicable code snippets to generate similar charts, Alternative graphics to “handle bar” plots.
Citation
Drummond, Gordon B. & Sarah L. Vowler. 2011. Show the data, don't conceal them. The Journal of Physiology 589(8): 1861-1863. PDF available from publisher.
Note this article is being simultaneously published in 2011 in The Journal of Physiology, Experimental Physiology, The British Journal of Pharmacology, Advances in Physiology Education, Microcirculation, and Clinical and Experimental Pharmacology and Physiology.
Below is an example extended to your data. I have posted full examples of generating similar plots in R using ggplot2 and in SPSS on my blog in this post, Avoid Dynamite Plots! Visualizing dot plots with super-imposed confidence intervals in SPSS and R.
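A minimal ggplot2 sketch in the same spirit, assuming a hypothetical long-format data frame d with columns condition and response, rotated so the condition labels sit on the left:
# Dot plot of raw responses with mean +/- 1 SE superimposed, one row per condition
library(ggplot2)
ggplot(d, aes(x = condition, y = response)) +
  geom_point(alpha = 0.5, position = position_jitter(width = 0.1)) +
  stat_summary(fun.data = mean_se, geom = "pointrange", colour = "red") +
  coord_flip() +
  theme_minimal()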
52,417 | Analyze and visualize participants response towards particular condition | @AndyW has a good answer. I think dot plots, or even box plots, are good approaches, although I think bar graphs are OK. One thing I would recommend is that you rotate your figure 90 degrees. Then, more visual would go to the right, and more auditory would go to the left. The advantage of this is that you could drop the legend and list your conditions on the left hand side. They would be easy to read, because they would be aligned with their corresponding graph elements (whatever you end up going with), and because people read horizontally from left to right. You could probably wrap the text, but you could also abbreviate somewhat, perhaps "Vis - & Aud +" (you'd need to come up with something for neutral). Thus, the figure needn't take up any more space than it does now.
52,418 | What is the variance-covariance matrix of the OLS residual vector? | First and foremost, your model is typically referred to as "general" instead of "generalised".
I show you the calculation for $\textrm{Var} ( \hat{\beta} )$ so that you can continue it for $\textrm{Var}(\hat{\epsilon}) = \textrm{Var} ( Y - X\hat{\beta})$.
The OLS estimator of your vector $\beta$ is
$\hat{\beta} = (X'X)^{-1} X'Y$,
provided that $X$ has full rank.
Its variance is obtained as follows.
$\textrm{Var} ( \hat{\beta} ) = [(X'X)^{-1} X'] \times \textrm{Var} ( Y ) \times [(X'X)^{-1} X']' \qquad$ (if needed, see the 'Properties' section here).
Now, if it is assumed that $\textrm{Var} ( Y ) = \sigma^2 I$, where $I$ is the identity matrix, then the previous line gives
$\textrm{Var} ( \hat{\beta} ) = \sigma^{2} (X'X)^{-1}$.
More details are provided in, e.g., Wikipedia.
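A minimal simulation sketch of the last formula, with arbitrary illustrative values for $X$, $\beta$ and $\sigma$:
# Check Var(beta_hat) = sigma^2 (X'X)^{-1} by repeated simulation
set.seed(2)
n <- 50; sigma <- 2
X <- cbind(1, rnorm(n))
beta <- c(1, 3)
betas <- replicate(5000, {
  y <- X %*% beta + rnorm(n, sd = sigma)
  as.vector(solve(t(X) %*% X, t(X) %*% y))   # OLS estimate for this draw
})
var(t(betas))                 # empirical covariance of the estimates
sigma^2 * solve(t(X) %*% X)   # theoretical covariance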
I show you the calculation for $\textrm{Var} ( \hat{\beta} )$ so that you can continue it for $\textrm{V | What is the variance-covariance matrix of the OLS residual vector?
First and foremost, your model is typically referred to as "general" instead of "generalised".
I show you the calculation for $\textrm{Var} ( \hat{\beta} )$ so that you can continue it for $\textrm{Var}(\hat{\epsilon}) = \textrm{Var} ( Y - X\hat{\beta})$.
The OLS estimator of your vector $\beta$ is
$\hat{\beta} = (X'X)^{-1} X'Y$,
provided that $X$ has full rank.
Its variance is obtained as follows.
$\textrm{Var} ( \hat{\beta} ) = [(X'X)^{-1} X'] \times \textrm{Var} ( Y ) \times [(X'X)^{-1} X']' \qquad$ (if needed, see the 'Properties' section here).
Now, if it is assumed that $\textrm{Var} ( Y ) = \sigma^2 I$, where $I$ is the identity matrix, then the previous line gives
$\textrm{Var} ( \hat{\beta} ) = \sigma^{2} (X'X)^{-1}$.
More details are provided in, e.g., wikipedia. | What is the variance-covariance matrix of the OLS residual vector?
First and foremost, your model is typically referred to as "general" instead of "generalised".
I show you the calculation for $\textrm{Var} ( \hat{\beta} )$ so that you can continue it for $\textrm{V |
52,419 | What is the variance-covariance matrix of the OLS residual vector? | See Wikipedia under Studentized residual#How to studentize for the variance of a single residual:
$$\mbox{var}(\widehat{\varepsilon}_i)=\sigma^2(1-h_{ii})$$
where $h_{ii}$ is the ith diagonal entry in the hat matrix $H=X(X^T X)^{-1}X^T$.
And see Hat matrix#Uncorrelated errors for the variance-covariance matrix of the residuals, $\operatorname{Var}(\widehat{\varepsilon}) = \sigma^2(I - H)$.
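A minimal numeric sketch of why that holds: the residual-maker $M = I - H$ is symmetric and idempotent, so $\operatorname{Var}(\widehat{\varepsilon}) = M(\sigma^2 I)M' = \sigma^2 M$; here $X$ is an arbitrary made-up design matrix.
# M = I - H is symmetric and idempotent, hence Var(residuals) = sigma^2 * (I - H)
set.seed(5)
X <- cbind(1, matrix(rnorm(20 * 2), nrow = 20))   # arbitrary 20 x 3 design matrix
H <- X %*% solve(t(X) %*% X) %*% t(X)             # hat matrix
M <- diag(nrow(X)) - H
c(idempotent = max(abs(M %*% M - M)),             # ~ 0
  symmetric  = max(abs(M - t(M))))                # ~ 0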
$$\mbox{var}(\widehat{\varepsilon}_i)=\sigma^2(1-h_{ii})$$
where $h_{ii}$ is the ith diagonal entry in | What is the variance-covariance matrix of the OLS residual vector?
See Wikipedia under Studentized residual#How to studentize for the variance of a single residual:
$$\mbox{var}(\widehat{\varepsilon}_i)=\sigma^2(1-h_{ii})$$
where $h_{ii}$ is the ith diagonal entry in the hat matrix $H=X(X^T X)^{-1}X^T$.
And Hat matrix#Uncorrelated errors for the variance-covariance matrix of the residuals. | What is the variance-covariance matrix of the OLS residual vector?
See Wikipedia under Studentized residual#How to studentize for the variance of a single residual:
$$\mbox{var}(\widehat{\varepsilon}_i)=\sigma^2(1-h_{ii})$$
where $h_{ii}$ is the ith diagonal entry in |
52,420 | Likelihood ratio test | For logistic regression you use the asymptotic (chi-squared) distribution of the log-likelihood-ratio test statistic for variable selection (testing hypotheses or model selection). In the case of linear regression, due to the assumed normality of the error distribution, there is no need to rely on asymptotics, and the likelihood ratio test statistic trivially reduces to a ratio of chi-squared variables, that is, to an exact F test.
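A minimal sketch of both comparisons in R, with a hypothetical data frame d, binary outcome y, continuous outcome z, and predictors x1, x2:
# Nested logistic models: asymptotic likelihood ratio (chi-squared) test
fit0 <- glm(y ~ x1,      data = d, family = binomial)
fit1 <- glm(y ~ x1 + x2, data = d, family = binomial)
anova(fit0, fit1, test = "LRT")
# Nested linear models: exact F test under normal errors
lm0 <- lm(z ~ x1,      data = d)
lm1 <- lm(z ~ x1 + x2, data = d)
anova(lm0, lm1)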
52,421 | Difference between experimental data and observational data? | wow, that's a tough one :-)
That question is far more widely relevant than just in data mining. It comes up in medicine and in the social sciences including psychology all the time.
The distinction is necessary when it comes to drawing conclusions about causality, that is, when you want to know if something (e.g. a medical treatment) causes another thing (e.g. recovery of a patient). Hordes of scientists and philosophers debate whether you can draw conclusions about causality from observational studies or not. You might want to look at the question statistics and causal inference?.
So what is an experiment? Concisely, an experiment is often defined as random assignment of observational units to different conditions, and conditions differ by the treatment of observational units. Treatment is a generic term, which translates most easily in medical applications (e.g. patients are treated differently under different conditions), but it also applies to other areas. There are variations of experiments --- you might want to start by reading the wikipedia entries for Experiment and randomized experiment --- but the one crucial point is random assignment of subjects to conditions.
With that in mind, it is definitely not possible to do an experiment for all kinds of hypotheses you want to test. For example, you sometimes can't do experiments for ethical reasons, e.g. you don't want people to suffer because of a treatment. In other cases, it might be physically impossible to conduct an experiment.
So whereas experimentation (controlled randomized assignment to treatment conditions) is the primary way to draw conclusions about causality --- and for some, it is the only way --- people still want to do something empirical in those cases where experiments are not possible. That's when you want to do an observational study.
To define an observational study, I draw on Paul Rosenbaum's entry in the Encyclopedia of Statistics in Behavioral Science: An observational study is "an empiric comparison of treated and control groups in which the objective is to elucidate cause-and-effect relationships
[. . . in which it] is not feasible to use controlled experimentation, in the sense of being able to impose the procedures or treatments whose effects it is desired to discover, or to assign subjects at random to different procedures." In an observational study, you try to measure as many variables as possible, and you want to test hypotheses about what changes in a set of those variables are associated with changes in other sets of variables, often with the goal of drawing conclusions about causality in these associations (see Under what conditions does correlation imply causation?).
In what ways can observational studies lead to errors? Primarily if you want to draw conclusions about causality. The issue that arises is that there might always be the chance that some variables you did not observe are the "real" causes (often called "unmeasured confounding"), so you might falsely assume that one of your measured variables is causing something, whereas "in truth" it is one of the unmeasured confounders. In experiments, the general assumption is that by random assignment potential confounders will get canceled out.
If you want to know more, start by going through the links provided, and look at publications from people like Paul Rosenbaum or the book linked by iopsych: Experimental and Quasi-Experimental Designs for Generalized Causal Inference (Shadish, Cook, and Campbell, 2002).
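A minimal simulation sketch of the unmeasured-confounding point made above: a confounder u drives both the self-selected "treatment" and the outcome, so the observational comparison shows an effect even though the true treatment effect is zero, while randomized assignment does not.
# Unmeasured confounding in observational data vs. randomized assignment
set.seed(9)
n <- 5000
u <- rnorm(n)                             # unmeasured confounder
trt_obs  <- rbinom(n, 1, plogis(u))       # self-selected "treatment" depends on u
trt_rand <- rbinom(n, 1, 0.5)             # randomized treatment ignores u
y_obs  <- 2 * u + rnorm(n)                # outcomes have NO true treatment effect
y_rand <- 2 * u + rnorm(n)
coef(lm(y_obs  ~ trt_obs))["trt_obs"]     # spurious "effect" appears
coef(lm(y_rand ~ trt_rand))["trt_rand"]   # close to zero, as it should be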
That question is far more widely relevant than just in data mining. It comes up in medicine and in the social sciences including psychology all the time.
The distinction i | Difference between experimental data and observational data?
wow, that's a tough one :-)
That question is far more widely relevant than just in data mining. It comes up in medicine and in the social sciences including psychology all the time.
The distinction is necessary when it comes to drawing conclusions about causality, that is, when you want to know if something (e.g. a medical treatment) causes another thing (e.g. recovery of a patient). Hordes of scientists and philosophers debate whether you can draw conclusions about causality from observational studies or not. You might want to look at the question statistics and causal inference?.
So what is an experiment? Concisely, an experiment is often defined as random assignment of observational units to different conditions, and conditions differ by the treatment of observational units. Treatment is a generic term, which translates most easily in medical applications (e.g. patients are treated differently under different conditions), but it also applies to other areas. There are variations of experiments --- you might want to start by reading the wikipedia entries for Experiment and randomized experiment --- but the one crucial point is random assignment of subjects to conditions.
With that in mind, it is definitely not possible to do an experiment for all kinds of hypotheses you want to test. For example, you sometimes can't do experiments for ethical reasons, e.g. you don't want people to suffer because of a treatment. In other cases, it might be physically impossible to conduct an experiment.
So whereas experimentation (controlled randomized assignment to treatment conditions) is the primary way to draw conclusions about causality --- and for some, it is the only way --- people still want to do something empirical in those cases where experiments are not possible. That's when you want to do an observational study.
To define an observational study, I draw on Paul Rosenbaums entry in the encyclopedia of statistics in behavioral science: An observational study is "an empiric comparison of treated and control groups in which the objective is to elucidate cause-and-effect relationships
[. . . in which it] is not feasible to use controlled experimentation, in the sense of being able to impose the procedures or treatments whose effects it is desired to discover, or to assign subjects at random to different procedures." In an observational study, you try to measure as many variables as possible, and you want to test hypotheses about what changes in a set of those variables are associated with changes in other sets of variables, often with the goal of drawing conclusions about causality in these associations (see Under what conditions does correlation imply causation
In what ways can observational studies lead to errors? Primarily if you want to draw conclusions about causality. The issue that arises is that there might always be the chance that some variables you did not observe are the "real" causes (often called "unmeasured confounding"), so you might falsely assume that one of your measured variables is causing something, whereas "in truth" it is one of the unmeasured confounders. In experiments, the general assumption is that by random assignment potential confounders will get canceled out.
If you want to know more, start by going through the links provided, and look at publications from people like Paul Rosenbaum or the book-link provided by iopsych: Experimental and Quasi-Experimental Designs for Generalized Causal Inference (Shadish, Cook, and Campbell, (2002) | Difference between experimental data and observational data?
wow, that's a tough one :-)
That question is far more widely relevant than just in data mining. It comes up in medicine and in the social sciences including psychology all the time.
The distinction i |
52,422 | Difference between experimental data and observational data? | Very much in a nutshell: only data for which you have all covariates under control, and have either randomization over possible confounders or enough information on them to properly account for them, can be truly called experimental. This could e.g. be the case in plant research where genetically identical and similarly grown plants are feasible: you can then make sure that only your variable of interest differs between groups of interest.
The place where (in statistically correct research) this matters most is in trying to find a causal relation. A classical example is people taking aspirin and the effect it has on heart disease: if you pick 100 people who take aspirin and 100 people who don't, and then somehow measure their heart condition, then even if the aspirin takers are at a lower risk in this study, you cannot conclude that people should all take aspirin: perhaps the aspirin taking and heart 'improvement' are both consequences of 'better living' or similar.
So, basically (since in reality we almost always want to show that A is a consequence of B): if it is available/attainable, prefer experimental data.
Very much in a nutshell: only data for which you have all covariates under control, and have either randomization over possible confounders or enough information on them to properly account for them, can be truly called experimental. This could e.g. be the case in plant research where genetically identical and similarly grown plants are feasible: you can then make sure that only your variable of interest differs between groups of interest.
The place where (in statistically correct research) this matters most, is in trying to find a causal relation. A classical example is people taking aspirin, and the effect it has on heart disease: if you pick 100 people who take aspirin and 100 people who don't, and then somehow measure their heart condition, then even if the aspirin takers are at a lower risk frmo this research, you cannot conclude that people should all take aspirin: perhaps the aspirin taking and heart 'improvement' are both consequences of 'better living' or similar.
So, basically (since in reality we almost always want to show that A is a consequence of B): if it is available/attainable: prefer experimental data. | Difference between experimental data and observational data?
Very much in a nutshell: only data for which you have all covariates under control, and have either randomization over possible confounders or enough information on them to properly account for them, |
52,423 | How to test the predictive power of a model? | ROC, sensitivity, specificity, and cutoffs have gotten in the way, unfortunately. Assuming there is nothing between "good" and "bad" and that the success of the experiment was not based on an underlying continuum that should have instead formed the dependent variable, a probability model such as logistic regression would seem to be called for. You may need to do resampling to get an unbiased appraisal of the model's likely future performance. Note that even though a receiver operating characteristic curve is seldom appropriate, its area (also called c-index or concordance probability from the Wilcoxon-Mann-Whitney test) is a good summary measure of pure predictive discrimination. On the other hand, percent classified correctly is an improper scoring rule that, if optimized, will result in a bogus model.
Predicted probabilities are your friend, and they are also self-contained error rates at the point where someone forces you to make a binary decision, if they do. | How to test the predictive power of a model? | ROC, sensitivity, specificity, and cutoffs have gotten in the way, unfortunately. Assuming there is nothing between "good" and "bad" and that the success of the experiment was not based on an underlyi | How to test the predictive power of a model?
ROC, sensitivity, specificity, and cutoffs have gotten in the way, unfortunately. Assuming there is nothing between "good" and "bad" and that the success of the experiment was not based on an underlying continuum that should have instead formed the dependent variable, a probability model such as logistic regression would seem to be called for. You may need to do resampling to get an unbiased appraisal of the model's likely future performance. Note that even though a receiver operating characteristic curve is seldom appropriate, its area (also called c-index or concordance probability from the Wilcoxon-Mann-Whitney test) is a good summary measure of pure predictive discrimination. On the other hand, percent classified correctly is an improper scoring rule that, if optimized, will result in a bogus model.
Predicted probabilities are your friend, and they are also self-contained error rates at the point where someone forces you to make a binary decision, if they do. | How to test the predictive power of a model?
ROC, sensitivity, specificity, and cutoffs have gotten in the way, unfortunately. Assuming there is nothing between "good" and "bad" and that the success of the experiment was not based on an underlyi |
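To make the c-index concrete, here is a small R sketch with simulated data (the variable names and the logistic model are my own illustration, not taken from the answer above); it computes the concordance probability directly from the predicted probabilities via the Wilcoxon-Mann-Whitney identity.
set.seed(1)
x <- rnorm(200)
y <- rbinom(200, 1, plogis(0.8 * x))          # simulated binary outcome
fit <- glm(y ~ x, family = binomial)          # probability model (logistic regression)
p <- predict(fit, type = "response")          # predicted probabilities
d <- outer(p[y == 1], p[y == 0], "-")         # all (event, non-event) pairs
c_index <- mean(d > 0) + 0.5 * mean(d == 0)   # concordance probability = area under the ROC curve
c_index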
52,424 | How to test the predictive power of a model? | AUC is a good start. You can also calculate what percent of observations were correctly classified, and you can make a confusion matrix.
However, the best single thing you can do is calculate these values using a "test" dataset, whose observations were not used to train the model. This is the only true test of a predictive model. | How to test the predictive power of a model? | AUC is a good start. You can also calculate what percent of observations were correctly classified, and you can make a confusion matrix.
However, the best single thing you can do is calculate these v | How to test the predictive power of a model?
AUC is a good start. You can also calculate what percent of observations were correctly classified, and you can make a confusion matrix.
However, the best single thing you can do is calculate these values using a "test" dataset, whose observations were not used to train the model. This is the only true test of a predictive model. | How to test the predictive power of a model?
AUC is a good start. You can also calculate what percent of observations were correctly classified, and you can make a confusion matrix.
However, the best single thing you can do is calculate these v |
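A minimal base-R sketch of the held-out evaluation described above, using simulated data; the sample sizes and the 0.5 cutoff are arbitrary assumptions for illustration.
set.seed(2)
dat <- data.frame(x = rnorm(500))
dat$y <- rbinom(500, 1, plogis(dat$x))
train <- sample(nrow(dat), 350)                       # rows used to train the model
fit <- glm(y ~ x, family = binomial, data = dat[train, ])
p_test <- predict(fit, newdata = dat[-train, ], type = "response")
pred <- as.integer(p_test > 0.5)                      # arbitrary 0.5 cutoff
table(predicted = pred, observed = dat$y[-train])     # confusion matrix on the test set
mean(pred == dat$y[-train])                           # proportion correctly classified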
52,425 | Help with data analysis of small datasets | One could use a non-parametric version of ANOVA: this is called the Kruskal-Wallis test. It is based on ranking all 17 results and computing the mean ranks within each group. The mean rank of 2.0 among the controls is obviously smaller than any other mean rank (which range from 7 to 12). However, the test p-value is only 0.0937 (based on a chi-squared approximation).
Within each group (including the control), the SDs are approximately one-half the means (the "trend" in the figure).
This suggests (and justifies) basing the analysis on the logarithms of the concentrations, for which the group SDs will be approximately stable. This provides 11 degrees of freedom for estimating variation, so having just two or three measurements per group is not a limitation. This observation (that using logarithms may stabilize the residuals) is important in its own right, because it indicates how best to make estimates, how to carry out future analyses on continuations of this experiment, and supports the perception that the standard deviation of the control measurements really is relatively small. (That otherwise is a weak conclusion because there are only two control measurements.)
Regression (or equivalently, ANOVA) of the logs against the group identifiers has an overall p-value of 0.0521 (using an F-test with 5 and 11 degrees of freedom). This is suggestive but not quite low enough to be taken as "significant" by most journals. However, this is a two-sided test, whereas your hypothesis is one-sided. A crude adjustment is to halve the p-value to reflect this and report the result as "significant" with p approximately equal to 0.026.
Because this crude adjustment is merely an approximation, you might drive the point home with a permutation test. Returning to the idea of a non-parametric analysis, we ask for the chance that the average rank of the control measurement is 2.0 or less under the null hypothesis that all 17 results were randomly associated with the 17 measurements. This is equivalent to the control having either the first and second or the first and third smallest concentrations out of the 17. The chance of the first event is $\binom{2}{2}/\binom{17}{2} = 1/136$ and the chance of the second event is the same, for a total chance of $2/136$, or 1.47%. You could even conservatively characterize the results as "all the control concentrations were among the lowest three." The chance of this is $\binom{3}{2} / \binom{17}{2}$, equal to 2.2%. In any case you have a significant result for $p \lt 0.022$. | Help with data analysis of small datasets | One could use a non-parametric version of ANOVA: this is called the Kruskal-Wallis test. It is based on ranking all 17 results and computing the mean ranks within each group. The mean rank of 2.0 am | Help with data analysis of small datasets
One could use a non-parametric version of ANOVA: this is called the Kruskal-Wallis test. It is based on ranking all 17 results and computing the mean ranks within each group. The mean rank of 2.0 among the controls is obviously smaller than any other mean rank (which range from 7 to 12). However, the test p-value is only 0.0937 (based on a chi-squared approximation).
Within each group (including the control), the SDs are approximately one-half the means (the "trend" in the figure).
This suggests (and justifies) basing the analysis on the logarithms of the concentrations, for which the group SDs will be approximately stable. This provides 11 degrees of freedom for estimating variation, so having just two or three measurements per group is not a limitation. This observation (that using logarithms may stabilize the residuals) is important in its own right, because it indicates how best to make estimates, how to carry out future analyses on continuations of this experiment, and supports the perception that the standard deviation of the control measurements really is relatively small. (That otherwise is a weak conclusion because there are only two control measurements.)
Regression (or equivalently, ANOVA) of the logs against the group identifiers has an overall p-value of 0.0521 (using an F-test with 5 and 11 degrees of freedom). This is suggestive but not quite low enough to be taken as "significant" by most journals. However, this is a two-sided test, whereas your hypothesis is one-sided. A crude adjustment is to halve the p-value to reflect this and report the result as "significant" with p approximately equal to 0.026.
Because this crude adjustment is merely an approximation, you might drive the point home with a permutation test. Returning to the idea of a non-parametric analysis, we ask for the chance that the average rank of the control measurement is 2.0 or less under the null hypothesis that all 17 results were randomly associated with the 17 measurements. This is equivalent to the control having either the first and second or the first and third smallest concentrations out of the 17. The chance of the first event is $\binom{2}{2}/\binom{17}{2} = 1/136$ and the chance of the second event is the same, for a total chance of $2/136$, or 1.47%. You could even conservatively characterize the results as "all the control concentrations were among the lowest three." The chance of this is $\binom{3}{2} / \binom{17}{2}$, equal to 2.2%. In any case you have a significant result for $p \lt 0.022$. | Help with data analysis of small datasets
One could use a non-parametric version of ANOVA: this is called the Kruskal-Wallis test. It is based on ranking all 17 results and computing the mean ranks within each group. The mean rank of 2.0 am |
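The raw measurements are not reproduced here, so the first two calls below are only a sketch (they assume a data frame dat with columns concentration and group); the combinatorial probabilities quoted above, however, can be checked directly.
# kruskal.test(concentration ~ group, data = dat)    # Kruskal-Wallis test
# anova(lm(log(concentration) ~ group, data = dat))  # ANOVA/regression on the logarithms
choose(2, 2) / choose(17, 2)      # 1/136: probability of one specific pair of ranks for the two controls
2 * choose(2, 2) / choose(17, 2)  # 2/136, about 1.47%: controls at ranks {1,2} or {1,3}
choose(3, 2) / choose(17, 2)      # 3/136, about 2.2%: both controls among the lowest three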
52,426 | Help with data analysis of small datasets | There is a time to make formal statistical inferences from sample to population, and a time to simply report on your descriptive results and let your audience make informal inferences--or not, as they see fit. This looks like the latter. With two control values, you are one step away from having no variation on which to base any findings.
"I was surprised however, that median and Wilcoxon-Mann- Whitney tests did not reveal statistical significant results." I recommend that you familiarize yourself with the literature on statistical power. | Help with data analysis of small datasets | There is a time to make formal statistical inferences from sample to population, and a time to simply report on your descriptive results and let your audience make informal inferences--or not, as they | Help with data analysis of small datasets
There is a time to make formal statistical inferences from sample to population, and a time to simply report on your descriptive results and let your audience make informal inferences--or not, as they see fit. This looks like the latter. With two control values, you are one step away from having no variation on which to base any findings.
"I was surprised however, that median and Wilcoxon-Mann- Whitney tests did not reveal statistical significant results." I recommend that you familiarize yourself with the literature on statistical power. | Help with data analysis of small datasets
There is a time to make formal statistical inferences from sample to population, and a time to simply report on your descriptive results and let your audience make informal inferences--or not, as they |
52,427 | Help with data analysis of small datasets | This is an interesting data set. It seems like a good idea to follow @whuber's advice and do the analysis on the log scale. However, there is more than one hypothesis here. For you could have the hypothesis
$$H_{0}:\text{samples 1-5 have the same mean and variance on the log scale,}$$
$$\text{and this is different from the control mean}$$
But you could also have:
$$H_{1}:\text{samples 1-3 have the same mean and variance on the log scale,}$$
$$\text{and this is different from the control mean, and samples 4-5 have}$$
$$\text{the same mean and variance but different from both control group}$$
$$\text{and samples 1-3}$$
$H_{1}$ appears to be the most plausible hypothesis to me as judged by eye. You can also have:
$$H_{2}:\text{samples 1-5 have different means and variances on the log}$$
$$\text{scale, and are different from the control group}$$
Each of these hypotheses, if true, would constitute some sort of "significant" result. In any case, once you have decided that they are different, the interest then shifts to saying "well, exactly how are they different?"
I think you have a less significant result because you are testing $H_{2}$ which has many parameters.
For $H_0$ we have $\text{mean}\pm\text{standard dev}$ of $5.0\pm 0.62$ and $3.8\pm 0.22$, showing a clear difference; the Behrens-Fisher statistic is
$$T=\frac{5.0-3.8}{\sqrt{\frac{0.62^2}{15}+\frac{0.22^2}{2}}}=5.39$$
The two-sample T statistic is about $2.64$, but the assumption of equal variance is not supported by the data, especially as the control group was by far the lowest variance, and is nearly triple the variance of the pooled sample.
More later as I have to stop for now... | Help with data analysis of small datasets | This is an interesting data set. It seems like a good idea to follow @whuber's advice and do the analysis on the log scale. However, there is more than one hypothesis here. For you could have the h | Help with data analysis of small datasets
This is an interesting data set. It seems like a good idea to follow @whuber's advice and do the analysis on the log scale. However, there is more than one hypothesis here. For you could have the hypothesis
$$H_{0}:\text{samples 1-5 have the same mean and variance on the log scale,}$$
$$\text{and this is different from the control mean}$$
But you could also have:
$$H_{1}:\text{samples 1-3 have the same mean and variance on the log scale,}$$
$$\text{and this is different from the control mean, and samples 4-5 have}$$
$$\text{the same mean and variance but different from both control group}$$
$$\text{and samples 1-3}$$
$H_{1}$ appears to be the most plausible hypothesis to me as judged by eye. You can also have:
$$H_{2}:\text{samples 1-5 have different means and variances on the log}$$
$$\text{scale, and are different from the control group}$$
Each of these hypotheses, if true, would constitute some sort of "significant" result. In any case, once you have decided that they are different, the interest then shifts to saying "well, exactly how are they different?"
I think you have a less significant result because you are testing $H_{2}$ which has many parameters.
For $H_0$ we have $\text{mean}\pm\text{standard dev}$ of $5.0\pm 0.62$ and $3.8\pm 0.22$, showing a clear difference; the Behrens-Fisher statistic is
$$T=\frac{5.0-3.8}{\sqrt{\frac{0.62^2}{15}+\frac{0.22^2}{2}}}=5.39$$
The two-sample T statistic is about $2.64$, but the assumption of equal variance is not supported by the data, especially as the control group was by far the lowest variance, and is nearly triple the variance of the pooled sample.
More later as I have to stop for now... | Help with data analysis of small datasets
This is an interesting data set. It seems like a good idea to follow @whuber's advice and do the analysis on the log scale. However, there is more than one hypothesis here. For you could have the h |
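For reference, the arithmetic behind the statistic quoted above takes only a few lines of R; the group summaries are the ones given in the answer (log scale).
m1 <- 5.0; s1 <- 0.62; n1 <- 15            # samples 1-5 pooled
m2 <- 3.8; s2 <- 0.22; n2 <- 2             # control group
(m1 - m2) / sqrt(s1^2 / n1 + s2^2 / n2)    # approximately 5.39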
52,428 | Multidimensional scaling pseudo-code | There are different kinds of MDS (e.g., see this brief review). Here are two pointers:
the smacof R package, developed by Jan de Leeuw and Patrick Mair, has a nice vignette, Multidimensional Scaling Using Majorization: SMACOF in R (or see the Journal of Statistical Software (2009) 31(3)) -- R code is available, of course.
there are some handouts on Multidimensional Scaling, by Forrest Young, where several algorithms are discussed (including INDSCAL (Individual Difference Scaling, or weighted MDS) and ALSCAL, with Fortran source code by the same author) -- these two keywords should help you find other source code (mostly Fortran, C, or Lisp).
You can also look for "Manifold learning" which should give you a lot of techniques for dimension reduction (Isomap, PCA, MDS, etc.); the term was coined by the Machine Learning community, among others, and they probably have a different view on MDS compared to psychometricians. | Multidimensional scaling pseudo-code | There are different kind of MDS (e.g., see this brief review). Here are two pointers:
the smacof R package, developed by Jan de Leeuw and Patrick Mair has a nice vignette, Multidimensional Scaling Us | Multidimensional scaling pseudo-code
There are different kinds of MDS (e.g., see this brief review). Here are two pointers:
the smacof R package, developed by Jan de Leeuw and Patrick Mair, has a nice vignette, Multidimensional Scaling Using Majorization: SMACOF in R (or see the Journal of Statistical Software (2009) 31(3)) -- R code is available, of course.
there are some handouts on Multidimensional Scaling, by Forrest Young, where several algorithms are discussed (including INDSCAL (Individual Difference Scaling, or weighted MDS) and ALSCAL, with Fortran source code by the same author) -- these two keywords should help you find other source code (mostly Fortran, C, or Lisp).
You can also look for "Manifold learning" which should give you a lot of techniques for dimension reduction (Isomap, PCA, MDS, etc.); the term was coined by the Machine Learning community, among others, and they probably have a different view on MDS compared to psychometricians. | Multidimensional scaling pseudo-code
There are different kinds of MDS (e.g., see this brief review). Here are two pointers:
the smacof R package, developed by Jan de Leeuw and Patrick Mair has a nice vignette, Multidimensional Scaling Us |
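If you want something runnable while reading those references, classical MDS is already in base R via cmdscale(); the built-in USArrests data below are only a stand-in, and the smacof package discussed above implements the majorization (SMACOF) approach instead.
d <- dist(scale(USArrests))               # any dissimilarity matrix will do
conf <- cmdscale(d, k = 2)                # two-dimensional configuration
plot(conf, type = "n", xlab = "Dimension 1", ylab = "Dimension 2")
text(conf, labels = rownames(USArrests), cex = 0.6)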
52,429 | Multidimensional scaling pseudo-code | If you have the Statistics Toolbox in MATLAB you can read the source code of mdscale.m. While it's not pseudocode, it will definitely help you understand MDS better and gives you one approach to coding it.
In MATLAB you can type
edit mdscale
and that will open up an editor window that shows you the mdscale.m script that does the work. If you don't have MATLAB, check out Scikits.learn. It has some code for MDS. A lot of times reading Python code is very similar to reading pseudocode! | Multidimensional scaling pseudo-code | If you have the Statistics Toolbox in MATLAB you can read the source code of mdscale.m. While it's not pseudocode, it will definitely help you understand MDS better and gives you one approach to codin | Multidimensional scaling pseudo-code
If you have the Statistics Toolbox in MATLAB you can read the source code of mdscale.m. While it's not pseudocode, it will definitely help you understand MDS better and gives you one approach to coding it.
In MATLAB you can type
edit mdscale
and that will open up an editor window that shows you the mdscale.m script that does the work. If you don't have MATLAB, check out Scikits.learn. It has some code for MDS. A lot of times reading Python code is very similar to reading pseudocode! | Multidimensional scaling pseudo-code
If you have the Statistics Toolbox in MATLAB you can read the source code of mdscale.m. While it's not pseudocode, it will definitely help you understand MDS better and gives you one approach to codin |
52,430 | Should percentages be reported with decimal places? | It depends on the size of the differences between classes. In most applications, saying that 73% prefer option A and 27% prefer option B is perfectly acceptable. But if you're dealing with an election where candidate X has 50.15% of votes and candidate Y has 49.86%, the decimal places are very much necessary.
Of course, you need to take care to make sure that all classes add up to 100%. In my electoral example above, they add up to 100.01%. In that case you might even consider adding a third decimal place. | Should percentages be reported with decimal places? | It depends on the size of the differences between classes. In most applications, saying the 73% prefer option A and 27% prefer option B is perfectly acceptable. But if you're dealing in an election wh | Should percentages be reported with decimal places?
It depends on the size of the differences between classes. In most applications, saying that 73% prefer option A and 27% prefer option B is perfectly acceptable. But if you're dealing with an election where candidate X has 50.15% of votes and candidate Y has 49.86%, the decimal places are very much necessary.
Of course, you need to take care to make sure that all classes add up to 100%. In my electoral example above, they add up to 100.01%. In that case you might even consider adding a third decimal place. | Should percentages be reported with decimal places?
It depends on the size of the differences between classes. In most applications, saying the 73% prefer option A and 27% prefer option B is perfectly acceptable. But if you're dealing in an election wh |
52,431 | Should percentages be reported with decimal places? | Different organisations often have conflicting rules for the precision in reporting of results. Ultimately there is a trade-off between when seeing the extra digits is useful, versus cases where unnecessary and excessive precision "can swamp the reader, overcomplicate the story and obscure the message" — a subject explored by Tim Cole (2015) in a piece that I found gave a useful guide to "sensible" precision in reporting, and a comparison of leading style manuals. His advice on percentages was as follows:
Integers, or one decimal place for values under 10%. Values over 90% may need one decimal place if their complement is informative. Use two or more decimal places only if the range of values is less than 0.1%
Examples: 0.1%, 5.3%, 27%, 89%, 99.6%
By "complement" he is referring to cases where one might be interested in the "other lot", e.g. if I tell you 98% of patients in a trial got better, you may well be interested in the 2% who did not, and in that case another decimal place to distinguish whether that "2%" really means "2.4% or "1.6%" would actually be useful.
References
Cole, T. J. (2015). Too many digits: the presentation of numerical data. Archives of disease in childhood, 100(7), 608-609. http://dx.doi.org/10.1136/archdischild-2014-307149 | Should percentages be reported with decimal places? | Different organisations often have conflicting rules for the precision in reporting of results. Ultimately there is a trade-off between when seeing the extra digits is useful, versus cases where unnec | Should percentages be reported with decimal places?
Different organisations often have conflicting rules for the precision in reporting of results. Ultimately there is a trade-off between when seeing the extra digits is useful, versus cases where unnecessary and excessive precision "can swamp the reader, overcomplicate the story and obscure the message" — a subject explored by Tim Cole (2015) in a piece that I found gave a useful guide to "sensible" precision in reporting, and a comparison of leading style manuals. His advice on percentages was as follows:
Integers, or one decimal place for values under 10%. Values over 90% may need one decimal place if their complement is informative. Use two or more decimal places only if the range of values is less than 0.1%
Examples: 0.1%, 5.3%, 27%, 89%, 99.6%
By "complement" he is referring to cases where one might be interested in the "other lot", e.g. if I tell you 98% of patients in a trial got better, you may well be interested in the 2% who did not, and in that case another decimal place to distinguish whether that "2%" really means "2.4% or "1.6%" would actually be useful.
References
Cole, T. J. (2015). Too many digits: the presentation of numerical data. Archives of disease in childhood, 100(7), 608-609. http://dx.doi.org/10.1136/archdischild-2014-307149 | Should percentages be reported with decimal places?
Different organisations often have conflicting rules for the precision in reporting of results. Ultimately there is a trade-off between when seeing the extra digits is useful, versus cases where unnec |
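A rough helper that applies the rule of thumb above is sketched below; the function name and the two-branch logic are my own simplification, and it ignores Cole's extra case for ranges under 0.1%.
format_pct <- function(p) {
  digits <- ifelse(p < 10 | p > 90, 1, 0)   # one decimal below 10% or above 90%, integers otherwise
  mapply(function(value, d) sprintf("%.*f%%", d, value), p, digits)
}
format_pct(c(0.13, 5.34, 27.2, 89.4, 99.62))
# "0.1%" "5.3%" "27%" "89%" "99.6%"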
52,432 | Should percentages be reported with decimal places? | This is a significant figures issue, and is dependent upon the precision of the numbers underlying the percentages. The technically correct number of significant figures is not dependent upon downstream use or the differences between percentage values.
If you're trying to express a percentage describing 5 items out of 7, it would be absurd to claim that it's 71.4285714285% - you simply don't have the precision to back up all those decimal places. When doing division, your answer should have only as many significant figures as the fewest number of sig figs in your starting numbers. Here, you only have 1 significant figure, so the percentage should really just be 70%, not even 71%. If you had another example where you want to express 71428 items out of 100000, then you are justified in using more significant figures, all the way out to 71.428%.
Even if you have great precision, it's often preferable to truncate for human readability. Depending on your domain, adding those two extra decimal places may or may not make a difference. You should never over-report significant figures, but you may be justified in under-reporting them if your statistical precision is greater than what's needed for your application. | Should percentages be reported with decimal places? | This is a significant figures issue, and is dependent upon the precision of the numbers underlying the percentages. The technically correct number of significant figures is not dependent upon downstre | Should percentages be reported with decimal places?
This is a significant figures issue, and is dependent upon the precision of the numbers underlying the percentages. The technically correct number of significant figures is not dependent upon downstream use or the differences between percentage values.
If you're trying to express a percentage describing 5 items out of 7, it would be absurd to claim that it's 71.4285714285% - you simply don't have the precision to back up all those decimal places. When doing division, your answer should have only as many significant figures as the fewest number of sig figs in your starting numbers. Here, you only have 1 significant figure, so the percentage should really just be 70%, not even 71%. If you had another example where you want to express 71428 items out of 100000, then you are justified in using more significant figures, all the way out to 71.428%.
Even if you have great precision, it's often preferable to truncate for human readability. Depending on your domain, adding those two extra decimal places may or may not make a difference. You should never over-report significant figures, but you may be justified in under-reporting them if your statistical precision is greater than what's needed for your application. | Should percentages be reported with decimal places?
This is a significant figures issue, and is dependent upon the precision of the numbers underlying the percentages. The technically correct number of significant figures is not dependent upon downstre |
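The worked example above translates directly into R's signif():
100 * 5 / 7                       # 71.42857... more digits than the inputs justify
signif(100 * 5 / 7, 1)            # 70: one significant figure, as argued above
signif(100 * 71428 / 100000, 5)   # 71.428: five significant figures are defensible here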
52,433 | Should percentages be reported with decimal places? | The goal is to make it easy for the reader to understand the important differences. Too many digits obscures the meaningful difference between values in a table. Too few leaves out important information. Here's a great discussion: https://newmr.org/blog/how-many-significant-digits-should-you-display-in-your-presentation/ and here's a much more detailed analysis: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4483789/ | Should percentages be reported with decimal places? | The goal is to make it easy for the reader to understand the important differences. Too many digits obscures the meaningful difference between values in a table. Too few leaves out important informati | Should percentages be reported with decimal places?
The goal is to make it easy for the reader to understand the important differences. Too many digits obscures the meaningful difference between values in a table. Too few leaves out important information. Here's a great discussion: https://newmr.org/blog/how-many-significant-digits-should-you-display-in-your-presentation/ and here's a much more detailed analysis: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4483789/ | Should percentages be reported with decimal places?
The goal is to make it easy for the reader to understand the important differences. Too many digits obscures the meaningful difference between values in a table. Too few leaves out important informati |
52,434 | Subsets not significantly different but superset is | It seems to be a question of test power. If you only look at a subset you have a lot less participants and therefore a lot less power to find an effect of similar size.
With a reduced sample size you can only find a much bigger effect. So it is NOT recommended to only look at the subsets in this case. Unless there is an interaction (i.e., do the results point into the same direction for both men and women?).
Furthermore, there is no need to use a Wilcoxon test only because your data is not normally distributed (unless it deviates heavily). Probably you can still use the t.test (for example, one of the users here, whuber, recently advocated the t.test in a similar case, because the normality assumption does not need to hold for the data but for the sampling distribution. Quoting him: "The reason is that the sampling distributions of the means are approximately normal, even though the distributions of the data are not").
However, if you still don't want to use the t.test, there are more powerful 'assumption-free' alternatives, especially permutation tests. See the answer to my question here (whuber's quote is also from there): Which permutation test implementation in R to use instead of t-tests (paired and non-paired)?
In my case the results were even a little bit better (i.e., smaller p) than when using the t.test. So I would recommend this permutation test based on the coin package. I could provide you with the necessary r-commands if you provide some sample data in your question.
Update: The effect of outliers on the t-test
If you look at the help of t.test in R ?t.test, you will find the following example:
t.test(1:10,y=c(7:20)) # P = .00001855
t.test(1:10,y=c(7:20, 200)) # P = .1245 -- NOT significant anymore
Although in the second case you have a much more extreme difference in the means, the outlier leads to the counterintuitive finding that the difference is no longer significant. Hence, a method to deal with outliers (e.g. Winsorizing, here) is recommended for parametric tests such as the t-test if the data permit. | Subsets not significantly different but superset is | It seems to be a question of test power. If you only look at a subset you have a lot less participants and therefore a lot less power to find an effect of similar size.
With a reduced sample size you | Subsets not significantly different but superset is
It seems to be a question of test power. If you only look at a subset you have a lot less participants and therefore a lot less power to find an effect of similar size.
With a reduced sample size you can only find a much bigger effect. So it is NOT recommended to only look at the subsets in this case. Unless there is an interaction (i.e., do the results point into the same direction for both men and women?).
Furthermore, there is no need to use a Wilcoxon test only because your data is not normally distributed (unless it deviates heavily). Probably you can still use the t.test (for example, one of the users here, whuber, recently advocated the t.test in a similar case, because the normality assumption does not need to hold for the data but for the sampling distribution. Quoting him: "The reason is that the sampling distributions of the means are approximately normal, even though the distributions of the data are not").
However, if you still don't want to use the t.test, there are more powerful 'assumption-free' alternatives, especially permutation tests. See the answer to my question here (whuber's quote is also from there): Which permutation test implementation in R to use instead of t-tests (paired and non-paired)?
In my case the results were even a little bit better (i.e., smaller p) than when using the t.test. So I would recommend this permutation test based on the coin package. I could provide you with the necessary r-commands if you provide some sample data in your question.
Update: The effect of outliers on the t-test
If you look at the help of t.test in R ?t.test, you will find the following example:
t.test(1:10,y=c(7:20)) # P = .00001855
t.test(1:10,y=c(7:20, 200)) # P = .1245 -- NOT significant anymore
Although in the second case you have a much more extreme difference in the means, the outlier leads to the counterintuitive finding that the difference is no longer significant. Hence, a method to deal with outliers (e.g. Winsorizing, here) is recommended for parametric tests such as the t-test if the data permit. | Subsets not significantly different but superset is
It seems to be a question of test power. If you only look at a subset you have a lot less participants and therefore a lot less power to find an effect of similar size.
With a reduced sample size you |
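Since no sample data were posted, here is only a generic base-R sketch of a two-sample permutation test on the difference in means (the coin package mentioned above wraps the same idea more conveniently); the data, group labels and number of resamples are placeholders.
set.seed(3)
y <- c(rnorm(30, 0), rnorm(30, 0.6))            # made-up measurements
g <- rep(c("A", "B"), each = 30)                # made-up group labels
obs <- diff(tapply(y, g, mean))                 # observed difference in means
perm <- replicate(9999, diff(tapply(y, sample(g), mean)))   # differences under reshuffled labels
mean(c(abs(perm) >= abs(obs), TRUE))            # two-sided permutation p-value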
52,435 | Subsets not significantly different but superset is | This is not necessarily an issue of statistical power; it could also be an example of confounding.
Example:
One category of $O$ is more common in males but the other is more common in females
The distribution of $A$ differs between males and females
Within each sex separately, the distribution of $A$ is exactly the same for both $O$ categories
Then there will still be an overall difference in the distribution of $A$ between the $O$ categories. | Subsets not significantly different but superset is | This is not necessarily an issue of statistical power; it could also be an example of confounding.
Example:
One category of $O$ is more common in males but the other is more common in females
T | Subsets not significantly different but superset is
This is not necessarily an issue of statistical power; it could also be an example of confounding.
Example:
One category of $O$ is more common in males but the other is more common in females
The distribution of $A$ differs between males and females
Within each sex separately, the distribution of $A$ is exactly the same for both $O$ categories
Then there will still be an overall difference in the distribution of $A$ between the $O$ categories. | Subsets not significantly different but superset is
This is not necessarily an issue of statistical power; it could also be an example of confounding.
Example:
One category of $O$ is more common in males but the other is more common in females
T |
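The bullet points above can be turned into a small numerical example; the counts below are invented so that they satisfy exactly those three conditions.
male   <- matrix(c(64, 16, 16, 4), nrow = 2,
                 dimnames = list(A = c("A1", "A2"), O = c("O1", "O2")))
female <- matrix(c(4, 16, 16, 64), nrow = 2,
                 dimnames = list(A = c("A1", "A2"), O = c("O1", "O2")))
prop.table(male, 2)            # A is 80/20 in both O columns: no association within males
prop.table(female, 2)          # A is 20/80 in both O columns: no association within females
prop.table(male + female, 2)   # pooled: 68% vs 32%, an apparent difference between the O categories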
52,436 | What to do about ties in voting results? | To give some context, I don't view this as a "statistical" question so much as a "group preference" question. Economists and policy wonks do a lot of thinking about questions of how to convert individual preferences into a "will of the people." You will find lots of interesting reading if you search the web for "political economy" and "voting system" together.
Reasonable people will disagree on which candidate "wins" for the example you gave. There is no objectively correct voting system. Baked into the idea of a voting system are assumptions about fairness and representation. Each voting system has advantageous and disadvantageous properties. (Digging into the pros/cons of various voting systems is super interesting but would take many pages. I recommend starting with Wikipedia's entry on "Voting Systems". Also be sure to read about Arrow's Impossibility Theorem.)
Don't expect to find a universally accepted "best" voting system. Rather, pick a voting system that seems most reasonable for your domain (i.e. the upsides of the system outweigh the downsides). Make sure that your constituency buys-in to the voting system if you want the results to be taken seriously.
Resolving this described situation cannot be done with statistics alone. You will need to choose a voting system -- that choice will drive the ballot choice and how the ballots are converted into results (e.g. one or more winners, some sort of ranking, or some sort of scoring).
This is a big question. I think you'll enjoy digging into these topics.
Finally, for what it is worth, I have strong reservations about the described ballot (0 for no hire, +1 for maybe, +2 for hire) as a way for a board to pick a CEO. | What to do about ties in voting results? | To give some context, I don't view this as a "statistical" question as much of a "group preference" question. Economists and policy wonks do a lot of thinking about questions of how to convert individ | What to do about ties in voting results?
To give some context, I don't view this as a "statistical" question so much as a "group preference" question. Economists and policy wonks do a lot of thinking about questions of how to convert individual preferences into a "will of the people." You will find lots of interesting reading if you search the web for "political economy" and "voting system" together.
Reasonable people will disagree on which candidate "wins" for the example you gave. There is no objectively correct voting system. Baked into the idea of a voting system are assumptions about fairness and representation. Each voting system has advantageous and disadvantageous properties. (Digging into the pros/cons of various voting systems is super interesting but would take many pages. I recommend starting with Wikipedia's entry on "Voting Systems". Also be sure to read about Arrow's Impossibility Theorem.)
Don't expect to find a universally accepted "best" voting system. Rather, pick a voting system that seems most reasonable for your domain (i.e. the upsides of the system outweigh the downsides). Make sure that your constituency buys-in to the voting system if you want the results to be taken seriously.
Resolving this described situation cannot be done with statistics alone. You will need to choose a voting system -- that choice will drive the ballot choice and how the ballots are converted into results (e.g. one or more winners, some sort of ranking, or some sort of scoring).
This is a big question. I think you'll enjoy digging into these topics.
Finally, for what it is worth, I have strong reservations about the described ballot (0 for no hire, +1 for maybe, +2 for hire) as a way for a board to pick a CEO. | What to do about ties in voting results?
To give some context, I don't view this as a "statistical" question as much of a "group preference" question. Economists and policy wonks do a lot of thinking about questions of how to convert individ |
52,437 | What to do about ties in voting results? | You're asking an intriguing question. I agree with the comments that are showing some apprehension at the "one-man-one-vote" system. I also agree that knowing the basic statistics (like standard deviation and mean) will not give you an insight into the will of the voters.
I would like to play off of David James's answer, keying in on stakeholders. Instead of a vote, perhaps you could give stakeholders a virtual account, which they must "spend" on the candidates.
If they had $100 each, then perhaps one stakeholder would show a strong preference for candidate A by spending all $100 on him or her. Another stakeholder might like candidate B slightly more than candidate A and spend $60/$40. A third stakeholder might find all three candidates equally (un)appealing and spend $33/$33/$34.
A variation would be to give different stakeholders accounts of different sizes. For example, perhaps the exiting CEO gets $200 and a worker's representative gets $150.
You could even ask for an open vote, where each stakeholder explains his reasoning.
Highest earner wins the position. Or maybe the top two get the most careful look and a runoff.
This betting technique is an adaptation of what is done in Blind Man's Bluff: The Untold Story of American Submarine Espionage (1998, S. Sontag and C. Drew). A B-52 bomber collided with an air tanker, and they lost an H-bomb.
Craven asked a group of submarine and salvage experts to place Las-Vegas-style bets on the probability of each of the different scenarios that might describe the bomb's loss.... Each scenario left the weapon in a different location.... He was relying on Bayes' theorem of subjective probability. (pp. 58-59)
Whatever you choose, please make sure that the rules are clear before you start voting on candidates. A perception that the rules changed will not help the transition. | What to do about ties in voting results? | You're asking an intriguing question. I agree with the comments that are showing some apprehension at the "one-man-one-vote" system. I also agree that knowing the basic statistics (like standard devia | What to do about ties in voting results?
You're asking an intriguing question. I agree with the comments that are showing some apprehension at the "one-man-one-vote" system. I also agree that knowing the basic statistics (like standard deviation and mean) will not give you an insight into the will of the voters.
I would like to play off of David James's answer, keying in on stakeholders. Instead of a vote, perhaps you could give stakeholders a virtual account, which they must "spend" on the candidates.
If they had $100 each, then perhaps one stakeholder would show a strong preference for candidate A by spending all $100 on him or her. Another stakeholder might like candidate B slightly more than candidate A and spend $60/$40. A third stakeholder might find all three candidates equally (un)appealing and spend $33/$33/$34.
A variation would be to give different stakeholders accounts of different sizes. For example, perhaps the exiting CEO gets $200 and a worker's representative gets $150.
You could even ask for an open vote, where each stakeholder explains his reasoning.
Highest earner wins the position. Or maybe the top two get the most careful look and a runoff.
This betting technique is an adaptation of what is done in Blind Man's Bluff: The Untold Story of American Submarine Espionage (1998, S. Sontag and C. Drew). A B-52 bomber collided with an air tanker, and they lost an H-bomb.
Craven asked a group of submarine and salvage experts to place Las-Vegas-style bets on the probability of each of the different scenarios that might describe the bomb's loss.... Each scenario left the weapon in a different location.... He was relying on Bayes' theorem of subjective probability. (pp. 58-59)
Whatever you choose, please make sure that the rules are clear before you start voting on candidates. A perception that the rules changed will not help the transition. | What to do about ties in voting results?
You're asking an intriguing question. I agree with the comments that are showing some apprehension at the "one-man-one-vote" system. I also agree that knowing the basic statistics (like standard devia |
52,438 | What to do about ties in voting results? | A little OT, but one of my favourite nuggets of science is Arrow's theorem, so in case you're not familiar here's the wikipedia page:
http://en.wikipedia.org/wiki/Arrow's_impossibility_theorem
And all from a PhD thesis, too. Quite inspiring really. Mine was rubbish. | What to do about ties in voting results? | A little OT, but one of my favourite nuggets of science is Arrow's theorem, so in case you're not familiar here's the wikipedia page:
http://en.wikipedia.org/wiki/Arrow's_impossibility_theorem
And all | What to do about ties in voting results?
A little OT, but one of my favourite nuggets of science is Arrow's theorem, so in case you're not familiar here's the wikipedia page:
http://en.wikipedia.org/wiki/Arrow's_impossibility_theorem
And all from a PhD thesis, too. Quite inspiring really. Mine was rubbish. | What to do about ties in voting results?
A little OT, but one of my favourite nuggets of science is Arrow's theorem, so in case you're not familiar here's the wikipedia page:
http://en.wikipedia.org/wiki/Arrow's_impossibility_theorem
And all |
52,439 | Are these equivalent representations of the same hierarchical Bayesian model? | Updated Response: You still don't have a full specification for model #2. However, I can sort of guess what you mean -- correct me if I'm wrong. The trouble is that the statements $Y = \beta_1 X$ & $Y = \beta_0 + \beta_1 X$ are not probabilistic.
[ Aside: In a mathematical sense, you're defining a set of linear equations. If you have only one datapoint then you can solve $Y = \beta_1 X$ for the only value of $\beta_1$ which satisfies this equation. If you have more than one value of $Y$ and $X_1$ then this represents an overconstrained system -- without a solution. ]
My hunch is that you mean the following: $$Y \sim N( \beta_1 X_1, \sigma^2)$$ for some value of $\sigma^2$ which is assumed to be known -- (or perhaps you also want to estimate it). This places your random effects model within the context of a standard linear mixed effects model.
If we then assume that $Y$ is distributed as I have stated, then:
Answer: No, they are not equal. In the first model, $\beta_0$ is essentially an intercept (Imagine multiplying $\beta_0$ by $X_0$ where $X_0$ is always equal to 1). In the second model, $\beta_0$ represents a random offset from $\beta_1$. Note that this model (#2) wouldn't make sense to someone who doesn't do Bayes... It would be identical to running a linear regression where you have two predictors that are perfectly multi-collinear, but since you've made distributional assumptions within a Bayesian model, you can estimate it. That said, I'm not sure you should.
Note: You can run the first model without the added distributional assumption (which I included above more for model #2). However, I've never seen this sort of specification. I think it would be identical to stating that $(Y - \beta_1 X_1) \sim Gamma(6,3)$. In other words, an error term which is distributed as a gamma random variable. My hunch is that you instead want a random intercept model (with normally distributed errors). If that be the case, use model #1, don't use model #2. Yes $\beta_0$ and $\beta_1$ will exhibit autocorrelation until you standardize the levels of your categorical variable -- make sure that the summation across all individuals is equal to zero (which you can do by subtracting the mean of $X_1$ from every single observation of $X_1$). | Are these equivalent representations of the same hierarchical Bayesian model? | Updated Response: You still don't have a full specification for model #2. However, I can sort of guess what you mean -- correct me if I'm wrong. The trouble is that the statements $Y = \beta_1 X$ & | Are these equivalent representations of the same hierarchical Bayesian model?
Updated Response: You still don't have a full specification for model #2. However, I can sort of guess what you mean -- correct me if I'm wrong. The trouble is that the statements $Y = \beta_1 X$ & $Y = \beta_0 + \beta_1 X$ are not probabilistic.
[ Aside: In a mathematical sense, you're defining a set of linear equations. If you have only one datapoint then you can solve $Y = \beta_1 X$ for the only value of $\beta_1$ which satisfies this equation. If you have more than one value of $Y$ and $X_1$ then this represents an overconstrained system -- without a solution. ]
My hunch is that you mean the following: $$Y \sim N( \beta_1 X_1, \sigma^2)$$ for some value of $\sigma^2$ which is assumed to be known -- (or perhaps you also want to estimate it). This places your random effects model within the context of a standard linear mixed effects model.
If we then assume that $Y$ is distributed as I have stated, then:
Answer: No, they are not equal. In the first model, $\beta_0$ is essentially an intercept (Imagine multiplying $\beta_0$ by $X_0$ where $X_0$ is always equal to 1). In the second model, $\beta_0$ represents a random offset from $\beta_1$. Note that this model (#2) wouldn't make sense to someone who doesn't do Bayes... It would be identical to running a linear regression where you have two predictors that are perfectly multi-collinear, but since you've made distributional assumptions within a Bayesian model, you can estimate it. That said, I'm not sure you should.
Note: You can run the first model without the added distributional assumption (which I included above more for model #2). However, I've never seen this sort of specification. I think it would be identical to stating that $(Y - \beta_1 X_1) \sim Gamma(6,3)$. In other words, an error term which is distributed as a gamma random variable. My hunch is that you instead want a random intercept model (with normally distributed errors). If that be the case, use model #1, don't use model #2. Yes $\beta_0$ and $\beta_1$ will exhibit autocorrelation until you standardize the levels of your categorical variable -- make sure that the summation across all individuals is equal to zero (which you can do by subtracting the mean of $X_1$ from every single observation of $X_1$). | Are these equivalent representations of the same hierarchical Bayesian model?
Updated Response: You still don't have a full specification for model #2. However, I can sort of guess what you mean -- correct me if I'm wrong. The trouble is that the statements $Y = \beta_1 X$ & |
52,440 | Are these equivalent representations of the same hierarchical Bayesian model? | The only similarity in the two models is the general type of models they belong to, otherwise they are not similar in general as pointed out by M. Tibbits.
Both these models belong to the class of hierarchical models with varying slope (cf. Gelman and Hill 2006 for a detailed treatment).
The answers to "why not" are many, and one of them is pointed out by M. Tibbits. A few more are:
In model 2 the slope is on average 6/3, whereas it is 0 for model 1. (This description could be made much better if we had data, but it is "almost" accurate given the limited description.)
You would expect the data in model 1 to be distributed randomly along an approximately horizontal line, whereas in model 2 you would expect the data to be randomly distributed along a line of slope approximately beta_0 and zero intercept.
These answers can best be verified if you simulate data based on your hierarchical setting and look at the results of the analysis.
Thanks,
S. | Are these equivalent representations of the same hierarchical Bayesian model? | The only similarity in the two models is the general type of models they belong to, otherwise they are not similar in general as pointed out by M. Tibbits.
Both these models belong the class of hierar | Are these equivalent representations of the same hierarchical Bayesian model?
The only similarity in the two models is the general type of models they belong to, otherwise they are not similar in general as pointed out by M. Tibbits.
Both these models belong to the class of hierarchical models with varying slope (cf. Gelman and Hill 2006 for a detailed treatment).
The answers to "why not" are many, and one of them is pointed out by M. Tibbits. A few more are:
In model 2 the slope is on average 6/3, whereas it is 0 for model 1. (This description could be made much better if we had data, but it is "almost" accurate given the limited description.)
You would expect the data in model 1 to be distributed randomly along an approximately horizontal line, whereas in model 2 you would expect the data to be randomly distributed along a line of slope approximately beta_0 and zero intercept.
These answers can best be verified if you simulate data based on your hierarchical setting and look at the results of the analysis.
Thanks,
S. | Are these equivalent representations of the same hierarchical Bayesian model?
The only similarity in the two models is the general type of models they belong to, otherwise they are not similar in general as pointed out by M. Tibbits.
Both these models belong the class of hierar |
52,441 | Are these equivalent representations of the same hierarchical Bayesian model? | You already have good answers and have accepted one, but I'm not sure anyone's put it plainly enough for even me to intuit. At the core, your two models are:
[1] $Y=\beta_0 + \beta_1X_1 + \epsilon$
[2] $Y=(\beta_0 + \beta_1)X_1 + \epsilon$ = $\beta_0X_1 + \beta_1X_1 + \epsilon$
Yes, the $\beta$'s (and $\epsilon$) have distributions in your example, but in my mind that only smears the value out and doesn't change the form of the formulae. In that light, they're obviously different: your second model essentially fixes the intercept $\beta_0=0$ and then makes the slope $\beta_1$ a bit more complicated. | Are these equivalent representations of the same hierarchical Bayesian model? | You already have good answers and have accepted one, but I'm not sure anyone's put it plainly enough for even me to intuit. At the core, your two models are:
[1] $Y=\beta_0 + \beta_1X_1 + \epsilon$
[2 | Are these equivalent representations of the same hierarchical Bayesian model?
You already have good answers and have accepted one, but I'm not sure anyone's put it plainly enough for even me to intuit. At the core, your two models are:
[1] $Y=\beta_0 + \beta_1X_1 + \epsilon$
[2] $Y=(\beta_0 + \beta_1)X_1 + \epsilon$ = $\beta_0X_1 + \beta_1X_1 + \epsilon$
Yes, the $\beta$'s (and $\epsilon$) have distributions in your example, but in my mind that only smears the value out and doesn't change the form of the formulae. In that light, they're obviously different: your second model essentially fixes the intercept $\beta_0=0$ and then makes the slope $\beta_1$ a bit more complicated. | Are these equivalent representations of the same hierarchical Bayesian model?
You already have good answers and have accepted one, but I'm not sure anyone's put it plainly enough for even me to intuit. At the core, your two models are:
[1] $Y=\beta_0 + \beta_1X_1 + \epsilon$
[2 |
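A quick prior-predictive simulation in R makes the contrast visible; the Gamma(6, 3) priors on the coefficients are taken from the discussion above, while the residual standard deviation of 1 and the 0/1 predictor are assumptions added purely for illustration.
set.seed(4)
n  <- 10000
x1 <- rbinom(n, 1, 0.5)              # a 0/1 categorical predictor
b0 <- rgamma(n, 6, 3)                # mean 2
b1 <- rgamma(n, 6, 3)                # mean 2
y1 <- b0 + b1 * x1 + rnorm(n)        # model [1]: intercept plus slope
y2 <- (b0 + b1) * x1 + rnorm(n)      # model [2]: slope only
tapply(y1, x1, mean)                 # about 2 when x1 = 0 and 4 when x1 = 1
tapply(y2, x1, mean)                 # about 0 when x1 = 0 and 4 when x1 = 1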
52,442 | Are these equivalent representations of the same hierarchical Bayesian model? | If $x_1=0$, then $Y$~$(\beta_0,\sigma^2)$ in model 1 and $Y$~$(0,\sigma^2)$ in model 2.
If $x_1=1$, then $Y$~$(\beta_0+\beta_1,\sigma^2)$ in model 1 and $Y$~$(\beta_1,\sigma^2)$ in model 2.
Look for example at the first line: is $\beta_0$ a random variable or is it a zero constant? | Are these equivalent representations of the same hierarchical Bayesian model? | If $x_1=0$, then $Y$~$(\beta_0,\sigma^2)$ in model 1 and $Y$~$(0,\sigma^2)$ in model 2.
If $x_1=1$, then $Y$~$(\beta_0+\beta_1,\sigma^2)$ in model 1 and $Y$~$(\beta_1,\sigma^2)$ in model 2.
Look for e | Are these equivalent representations of the same hierarchical Bayesian model?
If $x_1=0$, then $Y$~$(\beta_0,\sigma^2)$ in model 1 and $Y$~$(0,\sigma^2)$ in model 2.
If $x_1=1$, then $Y$~$(\beta_0+\beta_1,\sigma^2)$ in model 1 and $Y$~$(\beta_1,\sigma^2)$ in model 2.
Look for example at the first line: is $\beta_0$ a random variable or is it a zero constant? | Are these equivalent representations of the same hierarchical Bayesian model?
If $x_1=0$, then $Y$~$(\beta_0,\sigma^2)$ in model 1 and $Y$~$(0,\sigma^2)$ in model 2.
If $x_1=1$, then $Y$~$(\beta_0+\beta_1,\sigma^2)$ in model 1 and $Y$~$(\beta_1,\sigma^2)$ in model 2.
Look for e |
52,443 | Is there an analytical expression for the distribution of the max of a normal k sample? | Properly normalized, it's closely approximated by a Gumbel distribution as shown by Extreme value theory. Alternative names are provided in the links. | Is there an analytical expression for the distribution of the max of a normal k sample? | Properly normalized, it's closely approximated by a Gumbel distribution as shown by Extreme value theory. Alternative names are provided in the links. | Is there an analytical expression for the distribution of the max of a normal k sample?
Properly normalized, it's closely approximated by a Gumbel distribution as shown by Extreme value theory. Alternative names are provided in the links. | Is there an analytical expression for the distribution of the max of a normal k sample?
Properly normalized, it's closely approximated by a Gumbel distribution as shown by Extreme value theory. Alternative names are provided in the links. |
52,444 | Is there an analytical expression for the distribution of the max of a normal k sample? | You will find exact expressions for the full pdf of the $n^{th}$ order statistics (as a function of $n$, the sample size) in the following paper:
Percentage Points and Modes of Order Statistics from the Normal Distribution
Shanti S. Gupta
Source: Ann. Math. Statist. Volume 32, Number 3 (1961), 888-893.
Also includes exact expressions for the medians and means of $n^{th}$ order statistics as a function of $n$ (I could type a few here, but the paper is un-gated). Some of these expressions are surprisingly simple.
H/T to John D Cook for the pointer. | Is there an analytical expression for the distribution of the max of a normal k sample? | You will find exact expressions for the full pdf of the $n^{th}$ order statistics (as a function of $n$, the sample size) in the following paper:
Percentage Points and Modes of Order Statistics from | Is there an analytical expression for the distribution of the max of a normal k sample?
You will find exact expressions for the full pdf of the $n^{th}$ order statistics (as a function of $n$, the sample size) in the following paper:
Percentage Points and Modes of Order Statistics from the Normal Distribution
Shanti S. Gupta
Source: Ann. Math. Statist. Volume 32, Number 3 (1961), 888-893.
Also includes exact expressions for the medians and means of $n^{th}$ order statistics as a function of $n$ (I could type a few here, but the paper is un-gated). Some of these expressions are surprisingly simple.
H/T to John D Cook for the pointer. | Is there an analytical expression for the distribution of the max of a normal k sample?
You will find exact expressions for the full pdf of the $n^{th}$ order statistics (as a function of $n$, the sample size) in the following paper:
Percentage Points and Modes of Order Statistics from |
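For the maximum (the $n$-th order statistic) specifically, the exact density and CDF follow directly from $F_{\max}(x) = \Phi(x)^n$. A short R check against simulation, with the sample size chosen arbitrarily for illustration:
set.seed(1)
k <- 10                                                  # number of iid standard normals per maximum
f_max <- function(x) k * dnorm(x) * pnorm(x)^(k - 1)     # exact density of the maximum
hist(replicate(1e4, max(rnorm(k))), breaks = 50, freq = FALSE,
     main = "Maximum of 10 standard normals", xlab = "max")
curve(f_max(x), add = TRUE, col = "red", lwd = 2)
integrate(function(x) x * f_max(x), -Inf, Inf)           # mean of the maximum by numerical integration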
52,445 | Classification after factor analysis | One solution to your first question is to use cross-validation. You compute classification accuracy for models with different numbers of components and then pick the one with the highest classification accuracy. You can check the references below:
PLS Dimension Reduction for Classification with Microarray Data
Rasch-based high-dimensionality data reduction and class prediction with applications to microarray gene expression data
In my experience, factor rotation does not improve classification accuracy. Please report your results. | Classification after factor analysis | One solution to your 1. question is to use cross-validation. You compute classification accuracy for models with different number of components and then pick one with the highest classification accura | Classification after factor analysis
One solution to your first question is to use cross-validation. You compute classification accuracy for models with different numbers of components and then pick the one with the highest classification accuracy. You can check the references below:
PLS Dimension Reduction for Classification with Microarray Data
Rasch-based high-dimensionality data reduction and class prediction with applications to microarray gene expression data
In my experience, factor rotation does not improve classification accuracy. Please report your results. | Classification after factor analysis
One solution to your 1. question is to use cross-validation. You compute classification accuracy for models with different number of components and then pick one with the highest classification accura |
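A minimal base-R sketch of that cross-validation idea, using simulated (hypothetical) data and PCA scores fed to a logistic regression; in a real analysis the PCA should ideally be re-estimated inside each fold to avoid information leakage.
set.seed(42)
n <- 100; p <- 20
X <- matrix(rnorm(n * p), n, p)                           # hypothetical predictors
y <- rbinom(n, 1, plogis(X[, 1] + 0.5 * X[, 2]))          # hypothetical binary class labels
pca   <- prcomp(X, scale. = TRUE)
folds <- sample(rep(1:5, length.out = n))                 # 5-fold cross-validation
cv_accuracy <- sapply(1:10, function(k) {                 # try 1 to 10 components
  mean(sapply(1:5, function(f) {
    train <- folds != f
    dat   <- data.frame(y = y, pca$x[, 1:k, drop = FALSE])
    fit   <- glm(y ~ ., family = binomial, data = dat[train, ])
    pred  <- predict(fit, newdata = dat[!train, ], type = "response") > 0.5
    mean(pred == y[!train])
  }))
})
which.max(cv_accuracy)                                    # number of components with the best CV accuracy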
52,446 | Classification after factor analysis | Caution: I'm assuming that when you said "classification", you are rather referring to cluster analysis (as understood in French), that is an unsupervised method for allocating individuals in homogeneous groups without any prior information/label. It's not obvious to me how class membership might come into play in your question.
I'll take a different perspective from the other answers and suggest that you try a data reduction (through PCA) of your $p$ variables, followed by a mix of Ward's hierarchical and k-means clustering (this is called mixed clustering in the French literature; the basic idea is that HC is combined with a weighted k-means to consolidate the partition) on the first two or three factorial axes. This was proposed by Ludovic Lebart and colleagues and is actually implemented in the FactoClass package.
The advantages are as follows:
If any part of your survey is not clearly unidimensional, you will be able to gauge item contribution to the second axis, and this may help to flag those items for further inspection;
Clustering is done on the PCA scores (or you can work with a multiple correspondence analysis, though in the case of binary items it yields essentially the same results as a scaled PCA), and thanks to the mixed clustering the resulting partition is more stable and allows you to spot potential extreme respondents; you can also introduce supplementary variables (like gender, SES or age), which is useful for inspecting between-group homogeneity.
In this case, no rotation is supposed to be applied to the principal axes. Considering a subspace with q < p allows you to remove random fluctuations, which often make up the variance carried by the p - q remaining axes. This can be viewed as some kind of "smoothing" of the data. Instead of PCA, as I said, you can use Multiple Correspondence Analysis (MCA), which is basically a non-linear PCA where numerical scores are assigned to respondents and to the modalities of dummy-coded variables.
I have had some success using this method in characterizing clinical subgroups assessed on a wide-ranging testing battery for neuropsychological impairment, and this generally yields results that are more or less comparable (wrt. interpretation) to model-based clustering (aka latent trait analysis, in the psychometric literature). The FactoClass package relies on ade4 for the factorial methods, and allows you to visualize clusters in the factorial space (figure omitted here).
Now, the problem with the so-called tandem approach is that there is no guarantee that the low-dimensional representation produced by PCA or MCA will be an optimal representation for identifying cluster structures. This is nicely discussed in Hwang et al. (2006), but I'm not aware of any implementation of the algorithm they proposed. Basically, the idea is to combine MCA and k-means in a single step, which amounts to minimizing two criteria simultaneously (the standard homogeneity criterion and the residual SS).
References
Lebart, L, Morineau, A, and Piron, M (2000). Statistique exploratoire multidimensionnelle (3rd ed.). Dunod.
Hwang, H, Dillon, WR, and Takane, Y (2006). An extension of multiple correspondence analysis for identifying heterogeneous subgroups of respondents. Psychometrika, 71, 161-171. | Classification after factor analysis | Caution: I'm assuming that when you said "classification", you are rather referring to cluster analysis (as understood in French), that is an unsupervised method for allocating individuals in homogene | Classification after factor analysis
Caution: I'm assuming that when you said "classification", you are rather referring to cluster analysis (as understood in French), that is an unsupervised method for allocating individuals in homogeneous groups without any prior information/label. It's not obvious to me how class membership might come into play in your question.
I'll take a different perspective from the other answers and suggest that you try a data reduction (through PCA) of your $p$ variables, followed by a mix of Ward's hierarchical and k-means clustering (this is called mixed clustering in the French literature; the basic idea is that HC is combined with a weighted k-means to consolidate the partition) on the first two or three factorial axes. This was proposed by Ludovic Lebart and colleagues and is actually implemented in the FactoClass package.
The advantages are as follows:
If any part of your survey is not clearly unidimensional, you will be able to gauge item contribution to the second axis, and this may help to flag those items for further inspection;
Clustering is done on the PCA scores (or you can work with a multiple correspondence analysis, though in the case of binary items it yields essentially the same results as a scaled PCA), and thanks to the mixed clustering the resulting partition is more stable and allows you to spot potential extreme respondents; you can also introduce supplementary variables (like gender, SES or age), which is useful for inspecting between-group homogeneity.
In this case, no rotation is supposed to be applied to the principal axes. Considering a subspace with q < p allows you to remove random fluctuations, which often make up the variance carried by the p - q remaining axes. This can be viewed as some kind of "smoothing" of the data. Instead of PCA, as I said, you can use Multiple Correspondence Analysis (MCA), which is basically a non-linear PCA where numerical scores are assigned to respondents and to the modalities of dummy-coded variables.
I have had some success using this method in characterizing clinical subgroups assessed on a wide-ranging testing battery for neuropsychological impairment, and this generally yields results that are more or less comparable (wrt. interpretation) to model-based clustering (aka latent trait analysis, in the psychometric literature). The FactoClass package relies on ade4 for the factorial methods, and allows you to visualize clusters in the factorial space (figure omitted here).
Now, the problem with the so-called tandem approach is that there is no guarantee that the low-dimensional representation produced by PCA or MCA will be an optimal representation for identifying cluster structures. This is nicely discussed in Hwang et al. (2006), but I'm not aware of any implementation of the algorithm they proposed. Basically, the idea is to combine MCA and k-means in a single step, which amounts to minimizing two criteria simultaneously (the standard homogeneity criterion and the residual SS).
References
Lebart, L, Morineau, A, and Piron, M (2000). Statistique exploratoire multidimensionnelle (3rd ed.). Dunod.
Hwang, H, Dillon, WR, and Takane, Y (2006). An extension of multiple correspondence analysis for identifying heterogeneous subgroups of respondents. Psychometrika, 71, 161-171. | Classification after factor analysis
Caution: I'm assuming that when you said "classification", you are rather referring to cluster analysis (as understood in French), that is an unsupervised method for allocating individuals in homogene |
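Since FactoClass may not be familiar, here is a minimal base-R sketch of the general tandem idea described above (PCA, Ward's hierarchical clustering on the first axes, then a k-means consolidation). It is not the FactoClass implementation, and the data are simulated placeholders standing in for survey scores.
set.seed(1)
X  <- scale(matrix(rnorm(200 * 6), 200, 6))        # placeholder data
pc <- prcomp(X)$x[, 1:2]                           # keep the first two principal axes
hc <- hclust(dist(pc), method = "ward.D2")         # Ward's hierarchical clustering
groups  <- cutree(hc, k = 3)
centers <- apply(pc, 2, function(col) tapply(col, groups, mean))   # centroids from the tree cut
km <- kmeans(pc, centers = centers)                # k-means consolidation of the partition
table(hierarchical = groups, consolidated = km$cluster)
plot(pc, col = km$cluster, pch = 19, xlab = "PC1", ylab = "PC2")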
52,447 | Classification after factor analysis | One approach that side-steps cross-validation to determine the optimal number of factors is to use the nonparametric Bayesian approaches for factor analysis. These approaches let the number of factors to be unbounded and eventually decided by the data. See this paper that uses such an approach for classification based on factor analysis. | Classification after factor analysis | One approach that side-steps cross-validation to determine the optimal number of factors is to use the nonparametric Bayesian approaches for factor analysis. These approaches let the number of factors | Classification after factor analysis
One approach that side-steps cross-validation to determine the optimal number of factors is to use the nonparametric Bayesian approaches for factor analysis. These approaches let the number of factors to be unbounded and eventually decided by the data. See this paper that uses such an approach for classification based on factor analysis. | Classification after factor analysis
One approach that side-steps cross-validation to determine the optimal number of factors is to use the nonparametric Bayesian approaches for factor analysis. These approaches let the number of factors |
52,448 | What is numerical overflow? | It means that the algorithm generated a variable that is greater than the maximum allowed for that type of variable. That is due to the fact that computers use a finite number of bits to represent numbers, so it is not possible to represent ANY number, but only a limited subset of them.
The actual value depends on the type of variable and the architecture of the system.
Why that happens during an MLE I'm not sure; my best call would be that you should change the starting parameters. | What is numerical overflow? | It means that the algorithm generated a variable that is greater than the maximum allowed for that type of variable. That is due to the fact that computers use a finite number of bits to represent num | What is numerical overflow?
It means that the algorithm generated a variable that is greater than the maximum allowed for that type of variable. That is due to the fact that computers use a finite number of bits to represent numbers, so it is not possible to represent ANY number, but only a limited subset of them.
The actual value depends on the type of variable and the architecture of the system.
Why that happens during an MLE I'm not sure; my best call would be that you should change the starting parameters. | What is numerical overflow?
It means that the algorithm generated a variable that is greater than the maximum allowed for that type of variable. That is due to the fact that computers use a finite number of bits to represent num |
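In R specifically, you can see both limits directly; this is a small illustrative snippet, not tied to any particular MLE routine.
.Machine$integer.max          # largest representable (signed 32-bit) integer: 2147483647
.Machine$integer.max + 1L     # integer overflow: R returns NA with a warning
.Machine$double.xmax          # largest finite double (~1.8e308)
exp(1000)                     # double overflow: the result is Inf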
52,449 | What is numerical overflow? | You can probably avoid your overflow problems by working with the log of the likelihood function rather than the likelihood function itself. Both have the same maximum. | What is numerical overflow? | You can probably avoid your overflow problems by working with the log of the likelihood function rather than the likelihood function itself. Both have the same maximum. | What is numerical overflow?
You can probably avoid your overflow problems by working with the log of the likelihood function rather than the likelihood function itself. Both have the same maximum. | What is numerical overflow?
You can probably avoid your overflow problems by working with the log of the likelihood function rather than the likelihood function itself. Both have the same maximum. |
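A tiny R illustration of why the log scale helps: the likelihood itself under- or overflows long before the log-likelihood does. The data and model here are simulated placeholders; any likelihood would show the same effect.
set.seed(1)
x <- rnorm(1e4)
prod(dnorm(x))                     # the raw likelihood underflows to 0
sum(dnorm(x, log = TRUE))          # the log-likelihood stays finite and usable
# maximizing the log-likelihood over the mean gives the same answer as maximizing the likelihood
optimize(function(m) -sum(dnorm(x, mean = m, log = TRUE)), interval = c(-5, 5))$minimum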
52,450 | What is numerical overflow? | As stated by nico, numerical overflow is when computation finds a number that is too great for the limited number of bits allocated by software to store the number. For example, if your software uses 32 bits to store signed integers, then computing an integer that is greater than 2,147,483,647 (or smaller than -2,147,483,648) will cause overflow.
One common reason for numerical overflow is trying to divide by a very small (close to zero) number. If the absolute values of your numbers are not too large, look at your data and try to figure out where you might be dividing by a very small number. | What is numerical overflow? | As stated by nico, numerical overflow is when computation finds a number that is too great for the limited number of bits allocated by software to store the number. For example, if your software uses | What is numerical overflow?
As stated by nico, numerical overflow is when computation finds a number that is too great for the limited number of bits allocated by software to store the number. For example, if your software uses 32 bits to store signed integers, then computing an integer that is greater than 2,147,483,647 (or smaller than -2,147,483,648) will cause overflow.
One common reason for numerical overflow is trying to divide by a very small (close to zero) number. If the absolute values of your numbers are not too large, look at your data and try to figure out where you might be dividing by a very small number. | What is numerical overflow?
As stated by nico, numerical overflow is when computation finds a number that is too great for the limited number of bits allocated by software to store the number. For example, if your software uses |
52,451 | Basic question regarding variance and stdev of a sample | The second question seems to ask for a prediction interval for one future observation. Such an interval is readily calculated under the assumptions that (a) the future observation is from the same distribution and (b) is independent of the previous sample. When the underlying distribution is Normal, we just have to erect an interval around the difference of two Gaussian random variables. Note that the interval will be wider than suggested by a naive application of a t-test or z-test, because it has to accommodate the variance of the future value, too. This rules out all the answers I have seen posted so far, so I guess I had better quote one explicitly. Hahn & Meeker's formula for the endpoints of this prediction interval is
$$m \pm t \times \sqrt{1 + \frac{1}{n}} \times s$$
where $m$ is the sample mean, $t$ is an appropriate two-sided critical value of Student's $t$ (for $n-1$ df), $s$ is the sample standard deviation, and $n$ is the sample size. Note in particular the factor of $\sqrt{1+1/n}$ instead of $\sqrt{1/n}$. That's a big difference!
This interval is used like any other interval: the requested test simply examines whether the new value lies within the prediction interval. If so, the new value is consistent with the sample; if not, we reject the hypothesis that it was independently drawn from the same distribution as the sample. Generalizations from one future value to $k$ future values or to the mean (or max or min) of $k$ future values, etc., exist.
There is an extensive literature on prediction intervals, especially in a regression context. Any decent regression textbook will have formulas. You could begin with the Wikipedia entry ;-). Hahn & Meeker's Statistical Intervals is still in print and is an accessible read.
The first question has an answer that is so routine nobody seems yet to have given it here (although some of the links provide details). For completeness, then, I will close by remarking that when the population has approximately a Normal distribution, the sample standard deviation is distributed as the square root of a scaled chi-square variate of $n-1$ df whose expectation is the population variance. That means (roughly) we expect the sample sd to be close to the population sd and the ratio of the two will usually be $1 + O(1/\sqrt{n-1})$. Unlike parallel statements for the sample mean (which invoke the CLT), this statement relies fairly strongly on the assumption of a Normal population. | Basic question regarding variance and stdev of a sample | The second question seems to ask for a prediction interval for one future observation. Such an interval is readily calculated under the assumptions that (a) the future observation is from the same di | Basic question regarding variance and stdev of a sample
The second question seems to ask for a prediction interval for one future observation. Such an interval is readily calculated under the assumptions that (a) the future observation is from the same distribution and (b) is independent of the previous sample. When the underlying distribution is Normal, we just have to erect an interval around the difference of two Gaussian random variables. Note that the interval will be wider than suggested by a naive application of a t-test or z-test, because it has to accommodate the variance of the future value, too. This rules out all the answers I have seen posted so far, so I guess I had better quote one explicitly. Hahn & Meeker's formula for the endpoints of this prediction interval is
$$m \pm t \times \sqrt{1 + \frac{1}{n}} \times s$$
where $m$ is the sample mean, $t$ is an appropriate two-sided critical value of Student's $t$ (for $n-1$ df), $s$ is the sample standard deviation, and $n$ is the sample size. Note in particular the factor of $\sqrt{1+1/n}$ instead of $\sqrt{1/n}$. That's a big difference!
This interval is used like any other interval: the requested test simply examines whether the new value lies within the prediction interval. If so, the new value is consistent with the sample; if not, we reject the hypothesis that it was independently drawn from the same distribution as the sample. Generalizations from one future value to $k$ future values or to the mean (or max or min) of $k$ future values, etc., exist.
There is an extensive literature on prediction intervals, especially in a regression context. Any decent regression textbook will have formulas. You could begin with the Wikipedia entry ;-). Hahn & Meeker's Statistical Intervals is still in print and is an accessible read.
The first question has an answer that is so routine nobody seems yet to have given it here (although some of the links provide details). For completeness, then, I will close by remarking that when the population has approximately a Normal distribution, the sample standard deviation is distributed as the square root of a scaled chi-square variate of $n-1$ df whose expectation is the population variance. That means (roughly) we expect the sample sd to be close to the population sd and the ratio of the two will usually be $1 + O(1/\sqrt{n-1})$. Unlike parallel statements for the sample mean (which invoke the CLT), this statement relies fairly strongly on the assumption of a Normal population. | Basic question regarding variance and stdev of a sample
The second question seems to ask for a prediction interval for one future observation. Such an interval is readily calculated under the assumptions that (a) the future observation is from the same di |
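A small R helper implementing the Hahn & Meeker interval quoted above; the sample here is simulated purely for illustration.
pred_interval <- function(x, conf = 0.95) {
  n <- length(x); m <- mean(x); s <- sd(x)
  tcrit <- qt(1 - (1 - conf) / 2, df = n - 1)
  m + c(-1, 1) * tcrit * sqrt(1 + 1 / n) * s
}
set.seed(1)
x <- rnorm(30, mean = 10, sd = 2)   # hypothetical sample
pred_interval(x)                    # a new value outside this range is inconsistent with the sample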
52,452 | Basic question regarding variance and stdev of a sample | I'm finding it rather tricky to see what you are asking:
If you want to know whether the Var(S) is different from the population variance, then see this previous answer.
If you want to determine whether the mean(S) and the mean(X) are the same, then look at Independent two-sample t-tests.
If you want to test whether mean(S) is equal to the population mean, then see @Srikant answer above, i.e. a one-sample t-test. | Basic question regarding variance and stdev of a sample | I'm finding it rather tricky to see what you are asking:
If you want to know whether the Var(S) is different from the population variance, then see this previous answer.
If you want to determine whet | Basic question regarding variance and stdev of a sample
I'm finding it rather tricky to see what you are asking:
If you want to know whether the Var(S) is different from the population variance, then see this previous answer.
If you want to determine whether the mean(S) and the mean(X) are the same, then look at Independent two-sample t-tests.
If you want to test whether mean(S) is equal to the population mean, then see @Srikant answer above, i.e. a one-sample t-test. | Basic question regarding variance and stdev of a sample
I'm finding it rather tricky to see what you are asking:
If you want to know whether the Var(S) is different from the population variance, then see this previous answer.
If you want to determine whet |
52,453 | Basic question regarding variance and stdev of a sample | My first answer was full of errors. Here is a corrected version:
The correct way to test is as follows:
z = (mean(S) - mu) / (stdev(S) / sqrt(n) )
See: Student's t-test
Note the following:
The sample size is accounted for when you divide the standard deviation by the square root of the sample size.
You should also note that the z-test is for testing whether the true mean of the population is some particular value. It does not make sense to substitute x instead of mu in the above statistic. | Basic question regarding variance and stdev of a sample | My first answer was full of errors. Here is a corrected version:
The correct way to test is as follows:
z = (mean(S) - mu) / (stdev(S) / sqrt(n) )
See: Student's t-test
Note the following:
The sample | Basic question regarding variance and stdev of a sample
My first answer was full of errors. Here is a corrected version:
The correct way to test is as follows:
z = (mean(S) - mu) / (stdev(S) / sqrt(n) )
See: Student's t-test
Note the following:
The sample size is accounted for when you divide the standard deviation by the square root of the sample size.
You should also note that the z-test is for testing whether the true mean of the population is some particular value. It does not make sense to substitute x instead of mu in the above statistic. | Basic question regarding variance and stdev of a sample
My first answer was full of errors. Here is a corrected version:
The correct way to test is as follows:
z = (mean(S) - mu) / (stdev(S) / sqrt(n) )
See: Student's t-test
Note the following:
The sample |
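For completeness, the one-sample test statistic described in that answer can be computed in a few lines of R (the sample and null value are hypothetical; the built-in t.test gives the same statistic, using the t rather than the Normal reference distribution).
set.seed(1)
S   <- rnorm(25, mean = 5, sd = 2)          # hypothetical sample
mu0 <- 4.5                                  # hypothetical null value for the population mean
t_stat <- (mean(S) - mu0) / (sd(S) / sqrt(length(S)))
2 * pt(-abs(t_stat), df = length(S) - 1)    # two-sided p-value
t.test(S, mu = mu0)                         # same test via the built-in function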
52,454 | Basic question regarding variance and stdev of a sample | I think you need to nail down the question you are asking, before you can compute an answer. I think this question is way too vague to answer: "test whether it is an vis-a-vis the general population".
The only question I think you can answer is this one: If the new value came from the same population as the others, what is the chance that it will be so far (or further) from the sample mean? That is the question that your equation will begin to answer, although it is not quite right. Here is a corrected equation that includes n.
t = (x - mean(S))/(stdev(S)/sqrt(n))
Compute the corresponding P value (with n-1 degrees of freedom) and you've answered the question. | Basic question regarding variance and stdev of a sample | I think you need to nail down the question you are asking, before you can compute an answer. I think this question is way too vague to answer: "test whether it is an vis-a-vis the general population". | Basic question regarding variance and stdev of a sample
I think you need to nail down the question you are asking, before you can compute an answer. I think this question is way too vague to answer: "test whether it is an vis-a-vis the general population".
The only question I think you can answer is this one: If the new value came from the same population as the others, what is the chance that it will be so far (or further) from the sample mean? That is the question that your equation will begin to answer, although it is not quite right. Here is a corrected equation that includes n.
t = (x - mean(S))/(stdev(S)/sqrt(n))
Compute the corresponding P value (with n-1 degrees of freedom) and you've answered the question. | Basic question regarding variance and stdev of a sample
I think you need to nail down the question you are asking, before you can compute an answer. I think this question is way too vague to answer: "test whether it is an vis-a-vis the general population". |
52,455 | Basic question regarding variance and stdev of a sample | 1) The standard deviation of the sample (stdev(S)) is an unbiased estimate of the standard deviation of the population.
2) Given we have estimated both the population mean and variance we need to take this into account when we evaluate whether a new observation x is a member of this population.
We don't use Z = (x - mean(S))/stdev(S), but rather:
t = (x - mean(S))/(stdev(S)*sqrt(1 + 1/n)), where n is the sample size of the first sample. We then compare t with a t-distribution with n-1 degrees of freedom to give a p-value. See here:
http://en.wikipedia.org/wiki/Prediction_interval#Unknown_mean.2C_unknown_variance
This accounts for the sample size both in the divisor (sqrt(1 + 1/n)) and in the degrees of freedom of the t-distribution. | Basic question regarding variance and stdev of a sample | 1) The standard deviation of the sample (stdev(S)) is an unbiased estimate of the standard deviation of the population.
2) Given we have estimated both the population mean and variance we need to take | Basic question regarding variance and stdev of a sample
1) The standard deviation of the sample (stdev(S)) is an unbiased estimate of the standard deviation of the population.
2) Given we have estimated both the population mean and variance we need to take this into account when we evaluate whether a new observation x is a member of this population.
We don't use Z = (x - mean(S))/stdev(S), but rather:
t = (x - mean(S))/(stdev(S)*sqrt(1 + 1/n)), where n is the sample size of the first sample. We the compare t with a t-distribution with n-1 degrees of freedom to give a p-value. See here:
http://en.wikipedia.org/wiki/Prediction_interval#Unknown_mean.2C_unknown_variance
This accounts for the sample size both in the divisor (sqrt(1 + 1/n)) and in the degrees of freedom of the t-distribution. | Basic question regarding variance and stdev of a sample
1) The standard deviation of the sample (stdev(S)) is an unbiased estimate of the standard deviation of the population.
2) Given we have estimated both the population mean and variance we need to take |
52,456 | Basic question regarding variance and stdev of a sample | "how is stdev(S) related to the standard deviation of the entire population?"
I don't know if the "Confidence Interval" concept might be what you are looking for?
Stdev(S) is an Estimate of the standard deviation of the entire population. To see how good an estimate, confidence intervals could be computed, and these would be dependent on the sample size.
See for e.g., Simulation and the Monte Carlo Method, Rubinstein & Kroese. | Basic question regarding variance and stdev of a sample | "how is stdev(S) related to the standard deviation of the entire population?"
I don't know if the "Confidence Interval" concept might be what you are looking for?
Stdev(S) is an Estimate of the stand | Basic question regarding variance and stdev of a sample
"how is stdev(S) related to the standard deviation of the entire population?"
I don't know if the "Confidence Interval" concept might be what you are looking for?
Stdev(S) is an Estimate of the standard deviation of the entire population. To see how good an estimate, confidence intervals could be computed, and these would be dependent on the sample size.
See for e.g., Simulation and the Monte Carlo Method, Rubinstein & Kroese. | Basic question regarding variance and stdev of a sample
"how is stdev(S) related to the standard deviation of the entire population?"
I don't know if the "Confidence Interval" concept might be what you are looking for?
Stdev(S) is an Estimate of the stand |
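One concrete way to quantify "how good an estimate" stdev(S) is, under a Normality assumption, is the usual chi-square confidence interval for the population standard deviation. A short R sketch with simulated data:
sd_ci <- function(x, conf = 0.95) {
  n <- length(x); a <- 1 - conf
  sqrt((n - 1) * var(x) / qchisq(c(1 - a / 2, a / 2), df = n - 1))
}
set.seed(1)
sd_ci(rnorm(40, sd = 3))    # interval for sigma; it shrinks as the sample size grows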
52,457 | Is density estimation the same as parameter estimation? | I understand this argument and can buy it as being technically true. However, the goal of language is to communicate ideas, and statistics has decided that “density estimation”, for better or for worse, refers to doing density estimation with minimal assumptions about the density as to keep from being restricted to a particular parametric family.
Perhaps this means that the use of English words is not perfect. However, you are likely to elicit confusion (or at least strange looks) in statistics circles if you deviate from the established terminology. | Is density estimation the same as parameter estimation? | I understand this argument and can buy it as being technically true. However, the goal of language is to communicate ideas, and statistics has decided that “density estimation”, for better or for wors | Is density estimation the same as parameter estimation?
I understand this argument and can buy it as being technically true. However, the goal of language is to communicate ideas, and statistics has decided that “density estimation”, for better or for worse, refers to doing density estimation with minimal assumptions about the density as to keep from being restricted to a particular parametric family.
Perhaps this means that the use of English words is not perfect. However, you are likely to elicit confusion (or at least strange looks) in statistics circles if you deviate from the established terminology. | Is density estimation the same as parameter estimation?
I understand this argument and can buy it as being technically true. However, the goal of language is to communicate ideas, and statistics has decided that “density estimation”, for better or for wors |
52,458 | Is density estimation the same as parameter estimation? | No, it's not the same. Density estimation is about estimating the distribution of the data. This can be achieved with a parametric model, for example, fitting a Gaussian mixture to the data. In such a case, to find the distribution means to estimate its parameters since the distribution is defined by its parameters. But there are also non-parametric approaches to density estimation, like Kernel density estimation, using histograms, etc where we are not estimating any parameters but rather finding the density itself. | Is density estimation the same as parameter estimation? | No, it's not the same. Density estimation is about estimating the distribution of the data. This can be achieved with a parametric model, for example, fitting a Gaussian mixture to the data. In such a | Is density estimation the same as parameter estimation?
No, it's not the same. Density estimation is about estimating the distribution of the data. This can be achieved with a parametric model, for example, fitting a Gaussian mixture to the data. In such a case, to find the distribution means to estimate its parameters since the distribution is defined by its parameters. But there are also non-parametric approaches to density estimation, like Kernel density estimation, using histograms, etc where we are not estimating any parameters but rather finding the density itself. | Is density estimation the same as parameter estimation?
No, it's not the same. Density estimation is about estimating the distribution of the data. This can be achieved with a parametric model, for example, fitting a Gaussian mixture to the data. In such a |
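A quick R illustration of the distinction: a parametric estimate (fit a single Normal by estimating its two parameters) versus a nonparametric kernel density estimate on the same simulated, deliberately non-Gaussian data.
set.seed(1)
x <- c(rnorm(150, 0, 1), rnorm(50, 4, 0.5))      # hypothetical bimodal data
m <- mean(x); s <- sd(x)
hist(x, freq = FALSE, breaks = 30, main = "Parametric fit vs. kernel density estimate")
grid <- seq(min(x) - 1, max(x) + 1, length.out = 300)
lines(grid, dnorm(grid, m, s), col = "red")      # parametric route: estimate two parameters
lines(density(x), col = "blue")                  # nonparametric route: estimate the density itself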
52,459 | Help with a proof regarding empirical CDF | Define $$Y_i(x)=\mathbb I_{\{X_i\leq x\}}$$ $\forall i\in\{1, 2,\ldots, n\}.$
Notice $$Y_i(x) \overset{\text{iid}}{\sim}\mathcal{Ber}(\theta)\tag 1\label 1$$ where $\theta := F(x) . $
Now express (how?) $$n \hat F_n(x) =\sum_{i=1}^n Y_i(x) ;\tag 2$$
Use $\eqref 1$ above to yield $\operatorname{Var}(F_n(x)). $ | Help with a proof regarding empirical CDF | Define $$Y_i(x)=\mathbb I_{\{X_i\leq x\}}$$ $\forall i\in\{1, 2,\ldots, n\}.$
Notice $$Y_i(x) \overset{\text{iid}}{\sim}\mathcal{Ber}(\theta)\tag 1\label 1$$ where $\theta := F(x) . $
Now express (how | Help with a proof regarding empirical CDF
Define $$Y_i(x)=\mathbb I_{\{X_i\leq x\}}$$ $\forall i\in\{1, 2,\ldots, n\}.$
Notice $$Y_i(x) \overset{\text{iid}}{\sim}\mathcal{Ber}(\theta)\tag 1\label 1$$ where $\theta := F(x) . $
Now express (how?) $$n \hat F_n(x) =\sum_{i=1}^n Y_i(x) ;\tag 2$$
Use $\eqref 1$ above to yield $\operatorname{Var}(F_n(x)). $ | Help with a proof regarding empirical CDF
Define $$Y_i(x)=\mathbb I_{\{X_i\leq x\}}$$ $\forall i\in\{1, 2,\ldots, n\}.$
Notice $$Y_i(x) \overset{\text{iid}}{\sim}\mathcal{Ber}(\theta)\tag 1\label 1$$ where $\theta := F(x) . $
Now express (how |
52,460 | Help with a proof regarding empirical CDF | Note that you can write $\mathop{\hat{F}_n}\left(x\right)$ as $\mathop{\hat{F}_n}\left(x\right) = \mathop{R_n}\left(x\right)/n$, where $\mathop{R_n}\left(x\right) \sim \mathop{\text{Binomial}}\left(n, \mathop{F}\left(x\right)\right)$.
Proof.
$\mathop{R_n}\left(x\right) \mathrel{:=}\sum_{i=1}^n \mathop{\mathbf{1}_{\left(-\infty,\, x\right]}}\left(X_i\right)$ counts the number of successes, meaning the number of $X_i$s in $\left(-\infty, x\right]$, in $n$ independent Bernoulli trials with success probability $\mathop{\mathbb{P}}\left(X_i \in \left(-\infty, x\right] \right) = \mathop{F}\left(x\right)$ each. Hence, $\mathop{R_n}\left(x\right) \sim \mathop{\text{Binomial}}\left(n, \mathop{F}\left(x\right)\right)$. By definition, $\mathop{\hat{F}_n}\left(x\right) = n^{-1}\sum_{i=1}^n \mathop{\mathbf{1}_{\left(-\infty,\, x\right]}}\left(X_i\right)$ and the statement follows. | Help with a proof regarding empirical CDF | Note that you can write $\mathop{\hat{F}_n}\left(x\right)$ as $\mathop{\hat{F}_n}\left(x\right) = \mathop{R_n}\left(x\right)/n$, where $\mathop{R_n}\left(x\right) \sim \mathop{\text{Binomial}}\left(n, | Help with a proof regarding empirical CDF
Note that you can write $\mathop{\hat{F}_n}\left(x\right)$ as $\mathop{\hat{F}_n}\left(x\right) = \mathop{R_n}\left(x\right)/n$, where $\mathop{R_n}\left(x\right) \sim \mathop{\text{Binomial}}\left(n, \mathop{F}\left(x\right)\right)$.
Proof.
$\mathop{R_n}\left(x\right) \mathrel{:=}\sum_{i=1}^n \mathop{\mathbf{1}_{\left(-\infty,\, x\right]}}\left(X_i\right)$ counts the number of successes, meaning the number of $X_i$s in $\left(-\infty, x\right]$, in $n$ independent Bernoulli trials with success probability $\mathop{\mathbb{P}}\left(X_i \in \left(-\infty, x\right] \right) = \mathop{F}\left(x\right)$ each. Hence, $\mathop{R_n}\left(x\right) \sim \mathop{\text{Binomial}}\left(n, \mathop{F}\left(x\right)\right)$. By definition, $\mathop{\hat{F}_n}\left(x\right) = n^{-1}\sum_{i=1}^n \mathop{\mathbf{1}_{\left(-\infty,\, x\right]}}\left(X_i\right)$ and the statement follows. | Help with a proof regarding empirical CDF
Note that you can write $\mathop{\hat{F}_n}\left(x\right)$ as $\mathop{\hat{F}_n}\left(x\right) = \mathop{R_n}\left(x\right)/n$, where $\mathop{R_n}\left(x\right) \sim \mathop{\text{Binomial}}\left(n, |
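The resulting variance, $\operatorname{Var}(\hat F_n(x)) = F(x)(1-F(x))/n$, is easy to confirm by simulation in R (standard Normal data, with the evaluation point chosen arbitrarily):
set.seed(1)
n  <- 50
x0 <- 0.5
Fhat <- replicate(2e4, mean(rnorm(n) <= x0))   # empirical CDF evaluated at x0, over many samples
var(Fhat)                                      # simulated variance of F_n(x0)
pnorm(x0) * (1 - pnorm(x0)) / n                # theoretical value F(x0)(1 - F(x0))/n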
52,461 | Asymptotics of MLE without closed form solutions | I'm not sure what you mean by "asymptotics for $\hat\theta_n$", but if you are asking about the limiting distribution of the MLE, then the short answer is that a properly standardized version of $\hat\theta_n$ converges to the standard normal distribution.
More precisely, in a multidimensional parameter case with $\theta\in \Theta\subseteq\mathbb{R}^p,$ assuming the model is regular (i.e. the support of the distribution does not depend on $\theta$ and the log-likelihood can be computed, etc.) it can be shown that
$$
\mathcal{I}_n(\theta_0)^{1/2}(\hat\theta_n - \theta_0) \overset{d}{\to} N_p(0_p,I_p),\tag{*}\label{a}
$$
where $0_p$ denotes the $p\times 1$ zero vector and $I_p$ is the $p\times p$ identity matrix. Assuming independence across the $n$ samples,
$$
\mathcal{I}_n(\theta_0) = -nE_{\theta_0}\left(\frac{\partial\log L(\theta;Y_1)}{\partial\theta\partial\theta^\top}\right),
$$
is the expected Fisher information matrix for all observations and $L(\theta;Y_1)$ is the likelihood function for a single observation.
In practice, $\eqref{a}$ is useless since the true parameter value $\theta_0$ is unknown. However, the MLE is consistent, i.e.
$$
\hat\theta_n\overset{P}\to \theta_0
$$
so under appropriate technical conditions, also $I(\hat\theta_n)\overset{P}\to I(\theta_0)$. Thus we have that $\eqref a$ is asymptotically equivalent to
$$
\mathcal{I}_n(\hat \theta_n)^{1/2}(\hat\theta_n - \theta_0) \overset{d}{\to} N_p(0_p,I_p).\tag{**}\label b
$$
$\mathcal{I}_n(\theta)$ is not always easy to compute, because the expectation involved may be intractable, but we may still be able to calculate the Hessian matrix of the log-likelihood. That is, we can calculate the observed information
$$
\mathcal{J}_n(\theta) = -\frac{\partial\log L(\theta)}{\partial\theta\partial\theta^\top},
$$
where $L(\theta)$ denotes the full likelihood.
Now, we could bypass this computational problem if in $\eqref{b}$ we could replace $\mathcal{I}_n(\hat\theta_n)$ by $\mathcal{J}_n(\hat\theta_n).$
It turns out that, under appropriate conditions, we can invoke the Law of Large Numbers to have
$$
n^{-1}\mathcal{J}_n(\theta)\overset{P}\to E_{\theta_0}\left(\frac{\partial\log L(\theta;Y_1)}{\partial\theta\partial\theta^\top}\right).
$$
Thus such a replacement is legitimate and it leads to
$$
\mathcal{J}_n(\hat \theta_n)^{1/2}(\hat\theta_n - \theta_0) \overset{d}{\to} N_p(0_p,I_p),\tag{***}
$$
which is asymptotically equivalent to $\eqref b.$ This is typically re-written as
$$
\hat\theta_n\, \dot\sim\, N_p(\theta_0, I_n(\hat\theta_n)^{-1}),
$$
where "$\dot\sim$" means "distributed, for a large sample size, as". In practice, we deal with problems of fixed sample sizes so we pretend it to be $\sim$ although this may not necessarily be the case.
If you are only interested in a single component of $\hat\theta_n = (\hat\theta_{n,1},\ldots,\hat\theta_{n,p})$, say $\hat\theta_{n,i}$, then by the properties of the multivariate normal distribution we have
$$
\hat\theta_{n,i}\,\dot\sim N(\theta_{0,i}, J_{n}(\hat\theta_n)^{ii}),
$$
where $J_{n}(\hat\theta_n)^{ii}$ is the cell $(i,i)$ of $J_{n}(\hat\theta_n)^{-1}$.
Using this result, we can get an approximate confidence interval of level $1-\alpha$ for $\theta_{0,i}$ as
$$
\hat\theta_{n,i} \pm z_{1-\alpha/2}\hat{\text{se}},
$$
where $\hat{\text{se}} = \sqrt{J_n(\hat\theta_n)^{ii}}$ is the estimated standard error of $\hat\theta_{n,i}.$ These are known as Wald-type confidence intervals. | Asymptotics of MLE without closed form solutions | I'm not sure what you mean by "asymptotics for $\hat\theta_n$", but if you are asking about the limiting distribution of the MLE, then the short answer is that a properly standardized version of $\hat | Asymptotics of MLE without closed form solutions
I'm not sure what you mean by "asymptotics for $\hat\theta_n$", but if you are asking about the limiting distribution of the MLE, then the short answer is that a properly standardized version of $\hat\theta_n$ converges to the standard normal distribution.
More precisely, in a multidimensional parameter case with $\theta\in \Theta\subseteq\mathbb{R}^p,$ assuming the model is regular (i.e. the support of the distribution does not depend on $\theta$ and the log-likelihood can be computed, etc.) it can be shown that
$$
\mathcal{I}_n(\theta_0)^{1/2}(\hat\theta_n - \theta_0) \overset{d}{\to} N_p(0_p,I_p),\tag{*}\label{a}
$$
where $0_p$ denotes the $p\times 1$ zero vector and $I_p$ is the $p\times p$ identity matrix. Assuming independence across the $n$ samples,
$$
\mathcal{I}_n(\theta_0) = -nE_{\theta_0}\left(\frac{\partial\log L(\theta;Y_1)}{\partial\theta\partial\theta^\top}\right),
$$
is the expected Fisher information matrix for all observations and $L(\theta;Y_1)$ is the likelihood function for a single observation.
In practice, $\eqref{a}$ is useless since the true parameter value $\theta_0$ is unknown. However, the MLE is consistent, i.e.
$$
\hat\theta_n\overset{P}\to \theta_0
$$
so under appropriate technical conditions, also $I(\hat\theta_n)\overset{P}\to I(\theta_0)$. Thus we have that $\eqref a$ is asymptotically equivalent to
$$
\mathcal{I}_n(\hat \theta_n)^{1/2}(\hat\theta_n - \theta_0) \overset{d}{\to} N_p(0_p,I_p).\tag{**}\label b
$$
$\mathcal{I}_n(\theta)$ is not always easy to compute, because the expectation involved may be intractable, but we may still be able to calculate the Hessian matrix of the log-likelihood. That is, we can calculate the observed information
$$
\mathcal{J}_n(\theta) = -\frac{\partial\log L(\theta)}{\partial\theta\partial\theta^\top},
$$
where $L(\theta)$ denotes the full likelihood.
Now, we could bypass this computational problem if in $\eqref{b}$ we could replace $\mathcal{I}_n(\hat\theta_n)$ by $\mathcal{J}_n(\hat\theta_n).$
It turns out that, under appropriate conditions, we can invoke the Law of Large Numbers to have
$$
n^{-1}\mathcal{J}_n(\theta)\overset{P}\to E_{\theta_0}\left(\frac{\partial\log L(\theta;Y_1)}{\partial\theta\partial\theta^\top}\right).
$$
Thus such a replacement is legitimate and it leads to
$$
\mathcal{J}_n(\hat \theta_n)^{1/2}(\hat\theta_n - \theta_0) \overset{d}{\to} N_p(0_p,I_p),\tag{***}
$$
which is asymptotically equivalent to $\eqref b.$ This is typically re-written as
$$
\hat\theta_n\, \dot\sim\, N_p(\theta_0, I_n(\hat\theta_n)^{-1}),
$$
where "$\dot\sim$" means "distributed, for a large sample size, as". In practice, we deal with problems of fixed sample sizes so we pretend it to be $\sim$ although this may not necessarily be the case.
If you are only interested in a single component of $\hat\theta_n = (\hat\theta_{n,1},\ldots,\hat\theta_{n,p})$, say $\hat\theta_{n,i}$, then by the properties of the multivariate normal distribution we have
$$
\hat\theta_{n,i}\,\dot\sim N(\theta_{0,i}, J_{n}(\hat\theta_n)^{ii}),
$$
where $J_{n}(\hat\theta_n)^{ii}$ is the cell $(i,i)$ of $J_{n}(\hat\theta_n)^{-1}$.
Using this result, we can get an approximate confidence interval of level $1-\alpha$ for $\theta_{0,i}$ as
$$
\hat\theta_{n,i} \pm z_{1-\alpha/2}\hat{\text{se}},
$$
where $\hat{\text{se}} = \sqrt{J_n(\hat\theta_n)^{ii}}$ is the estimated standard error of $\hat\theta_{n,i}.$ These are known as Wald-type confidence intervals. | Asymptotics of MLE without closed form solutions
I'm not sure what you mean by "asymptotics for $\hat\theta_n$", but if you are asking about the limiting distribution of the MLE, then the short answer is that a properly standardized version of $\hat |
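In practice this is exactly what numerical maximization gives you even without a closed-form MLE: ask the optimizer for the Hessian of the negative log-likelihood and invert it. A hedged R sketch, with a Gamma model and simulated data chosen purely for illustration:
set.seed(1)
y <- rgamma(200, shape = 3, rate = 2)                    # simulated data; true values (3, 2)
negloglik <- function(theta) -sum(dgamma(y, shape = theta[1], rate = theta[2], log = TRUE))
fit <- optim(c(1, 1), negloglik, method = "L-BFGS-B",
             lower = c(1e-6, 1e-6), hessian = TRUE)      # Hessian of -loglik = observed information
theta_hat <- fit$par
se <- sqrt(diag(solve(fit$hessian)))                     # standard errors from the inverse information
cbind(estimate = theta_hat,
      lower = theta_hat - 1.96 * se,
      upper = theta_hat + 1.96 * se)                     # Wald-type 95% intervals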
52,462 | Asymptotics of MLE without closed form solutions | You can use the fact the MLE is asymptotically unbiased, efficient (i.e. its variance converges to the inverse of the Fisher information), and Gaussian.
In summary, $\hat\theta \rightarrow \mathcal{N}(\theta,\mathcal{I}^{-1}(\theta))$ as the sample size, $n$, goes to infinity.
You can then approximate $\mathcal{I}(\theta)$ by $\mathcal{I}(\hat\theta)$ (the information evaluated at the MLE) to construct a confidence interval for $\theta$, etc. | Asymptotics of MLE without closed form solutions | You can use the fact the MLE is asymptotically unbiased, efficient (i.e. its variance converges to the inverse of the Fisher information), and Gaussian.
In summary, $\hat\theta \rightarrow \mathcal{N} | Asymptotics of MLE without closed form solutions
You can use the fact the MLE is asymptotically unbiased, efficient (i.e. its variance converges to the inverse of the Fisher information), and Gaussian.
In summary, $\hat\theta \rightarrow \mathcal{N}(\theta,\mathcal{I}^{-1}(\theta))$ as the sample size, $n$, goes to infinity.
You can then approximate $\mathcal{I}(\theta)$ by $\mathcal{I}(\hat\theta)$ (the information evaluated at the MLE) to construct a confidence interval for $\theta$, etc. | Asymptotics of MLE without closed form solutions
You can use the fact the MLE is asymptotically unbiased, efficient (i.e. its variance converges to the inverse of the Fisher information), and Gaussian.
In summary, $\hat\theta \rightarrow \mathcal{N} |
52,463 | Help needed regarding sample size for a poll | You can calculate so-called simultaneous confidence intervals for multinomial proportions, and see if they are too wide for your purposes.
In R, it can be done like this (data contains the numbers of the respondents from the 6 categories you mentioned):
if(!require(DescTools)){
install.packages("DescTools")
library(DescTools)
}
data <- c(14, 52, 90, 39, 17, 288)
MultinomCI(data)
Output:
A matrix: 6 × 3 of type dbl
est lwr.ci upr.ci
0.028 0.000 0.0719399
0.104 0.062 0.1479399
0.180 0.138 0.2239399
0.078 0.036 0.1219399
0.034 0.000 0.0779399
0.576 0.534 0.6199399 | Help needed regarding sample size for a poll | You can calculate so called simultaneous confidence intervals for multinomial proportions, and see if they are too wide for your purposes.
In R, it can be done like this (data contains the numbers of | Help needed regarding sample size for a poll
You can calculate so called simultaneous confidence intervals for multinomial proportions, and see if they are too wide for your purposes.
In R, it can be done like this (data contains the numbers of the respondents from the 6 categories you mentioned):
if(!require(DescTools)){
install.packages("DescTools")
library(DescTools)
}
data <- c(14, 52, 90, 39, 17, 288)
MultinomCI(data)
Output:
A matrix: 6 × 3 of type dbl
est lwr.ci upr.ci
0.028 0.000 0.0719399
0.104 0.062 0.1479399
0.180 0.138 0.2239399
0.078 0.036 0.1219399
0.034 0.000 0.0779399
0.576 0.534 0.6199399 | Help needed regarding sample size for a poll
You can calculate so called simultaneous confidence intervals for multinomial proportions, and see if they are too wide for your purposes.
In R, it can be done like this (data contains the numbers of |
52,464 | Help needed regarding sample size for a poll | The simplest quick and dirty answer is to quote everything as $\pm 1/\sqrt{N}$, which in this case is 4.5%. This is the same approach used in news media about public opinion polling, in which the standard "plus or minus three percent" means they asked about 1000 people (since $1/\sqrt{1000}\approx 0.0316$). The justification is treating the observed responses like a Poisson process, in which the N obtained is the best estimate of both mean and variance, which would make $1/\sqrt{N}$ the half-width of a one-sigma confidence interval for the rate. It's a popular approach, because it's really easy, but it's nowhere near rigorously true.
What you're really trying to do is estimate a confidence region for a multinomial proportion. However, you can probably get away with ignoring the constraint that all the proportions need to sum to one, and instead separately consider independent confidence intervals for six different binomial proportions. The Wikipedia article is a decent introduction to the topic; for detailed reference, I recommend Brown, Cai, and Dasgupta (2001), "Interval Estimation for a Binomial Proportion", Statistical Science 16 (2): 101–133, and Newcombe (1998), "Two-sided confidence intervals for the single proportion: comparison of seven methods", Statistics in Medicine 17 (8): 857–872. Of the many approaches described, my favorite is the Wilson score interval, because among the best performers, it's the easiest to explain to non-specialists (it only needs square roots, not the incomplete beta functions of the Jeffreys prior), and it's also recommended by NIST.
To use it, first pick a $z$ score to quantify your confidence in the usual Gaussian way, for example $z$=1.96 for 95% confidence. Then, for each of your six categories $\{X_i\}$, let $p=X_i/N$, and then compute the confidence interval bounds as $$\frac{Np + z^2/2 \pm z \sqrt{Np(1-p)+z^2/4}}{N+z^2}$$where the minus sign gives the lower boundary and the plus sign gives the upper. If you stare at this for a while, you may recognize that you've essentially added $z^2$ coin flips ($\,p$ = $1\!-\!p$ = 1/2) to your data, as pointed out in Agresti and Coull (1998), "Approximate is better than exact for interval estimation of binomial proportions", The American Statistician, 52 (2): 119-126. Note this means your confidence intervals are not symmetric about $p$, which remains the best point estimate of the probability of that response, but that is the small price you pay for the benefit of choosing an interval that never extends past the boundaries (0 and 1). | Help needed regarding sample size for a poll | The simplest quick and dirty answer is to quote everything as $\pm 1/\sqrt{N}$, which in this case is 4.5%. This is the same approach used in news media about public opinion polling, in which the sta | Help needed regarding sample size for a poll
The simplest quick and dirty answer is to quote everything as $\pm 1/\sqrt{N}$, which in this case is 4.5%. This is the same approach used in news media about public opinion polling, in which the standard "plus or minus three percent" means they asked about 1000 people (since $1/\sqrt{1000}\approx 0.0316$). The justification is treating the observed responses like a Poisson process, in which the N obtained is the best estimate of both mean and variance, which would make $1/\sqrt{N}$ the half-width of a one-sigma confidence interval for the rate. It's a popular approach, because it's really easy, but it's nowhere near rigorously true.
What you're really trying to do is estimate a confidence region for a multinomial proportion. However, you can probably get away with ignoring the constraint that all the proportions need to sum to one, and instead separately consider independent confidence intervals for six different binomial proportions. The Wikipedia article is a decent introduction to the topic; for detailed reference, I recommend Brown, Cai, and Dasgupta (2001), "Interval Estimation for a Binomial Proportion", Statistical Science 16 (2): 101–133, and Newcombe (1998), "Two-sided confidence intervals for the single proportion: comparison of seven methods", Statistics in Medicine 17 (8): 857–872. Of the many approaches described, my favorite is the Wilson score interval, because among the best performers, it's the easiest to explain to non-specialists (it only needs square roots, not the incomplete beta functions of the Jeffreys prior), and it's also recommended by NIST.
To use it, first pick a $z$ score to quantify your confidence in the usual Gaussian way, for example $z$=1.96 for 95% confidence. Then, for each of your six categories $\{X_i\}$, let $p=X_i/N$, and then compute the confidence interval bounds as $$\frac{Np + z^2/2 \pm z \sqrt{Np(1-p)+z^2/4}}{N+z^2}$$where the minus sign gives the lower boundary and the plus sign gives the upper. If you stare at this for a while, you may recognize that you've essentially added $z^2$ coin flips ($\,p$ = $1\!-\!p$ = 1/2) to your data, as pointed out in Agresti and Coull (1998), "Approximate is better than exact for interval estimation of binomial proportions", The American Statistician, 52 (2): 119-126. Note this means your confidence intervals are not symmetric about $p$, which remains the best point estimate of the probability of that response, but that is the small price you pay for the benefit of choosing an interval that never extends past the boundaries (0 and 1). | Help needed regarding sample size for a poll
The simplest quick and dirty answer is to quote everything as $\pm 1/\sqrt{N}$, which in this case is 4.5%. This is the same approach used in news media about public opinion polling, in which the sta |
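The formula above translates directly into a few lines of R; here it is applied to the six category counts from this thread, treating each category separately as suggested.
wilson <- function(x, n, z = 1.96) {
  p <- x / n
  centre <- (n * p + z^2 / 2) / (n + z^2)
  half   <- z * sqrt(n * p * (1 - p) + z^2 / 4) / (n + z^2)
  c(estimate = p, lower = centre - half, upper = centre + half)
}
counts <- c(14, 52, 90, 39, 17, 288)   # the six response categories
N <- sum(counts)                       # 500 respondents
round(t(sapply(counts, wilson, n = N)), 3)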
52,465 | Help needed regarding sample size for a poll | As the population of interest is of size $\approx 1000$ and your sample half of that population, I do not think that a binomial approximation is warranted here. Instead I would argue for modelling your data using a Hypergeometric (e.g. reason A vs. all others, and cycle through the reasons) or Multivariate hypergeometric distribution (reason A, reason B, etc. all at once).
For the first approach see this SE question including the reference mentioned.
However the problem with response bias mentioned by Henry still stands - any inference will assume that your sample stems from a random draw from that population which is unlikely to be the case. For example if one of the reasons is "the service is sending a lot of annoying emails" then anyone that left due to that reason is unlikely to respond to your survey. | Help needed regarding sample size for a poll | As the population of interest is of size $\approx 1000$ and your sample half of that population, I do not think that a binomial approximation is warranted here. Instead I would argue for modelling you | Help needed regarding sample size for a poll
As the population of interest is of size $\approx 1000$ and your sample half of that population, I do not think that a binomial approximation is warranted here. Instead I would argue for modelling your data using a Hypergeometric (e.g. reason A vs. all others, and cycle through the reasons) or Multivariate hypergeometric distribution (reason A, reason B, etc. all at once).
For the first approach see this SE question including the reference mentioned.
However the problem with response bias mentioned by Henry still stands - any inference will assume that your sample stems from a random draw from that population which is unlikely to be the case. For example if one of the reasons is "the service is sending a lot of annoying emails" then anyone that left due to that reason is unlikely to respond to your survey. | Help needed regarding sample size for a poll
As the population of interest is of size $\approx 1000$ and your sample half of that population, I do not think that a binomial approximation is warranted here. Instead I would argue for modelling you |
52,466 | Maximum likelihood estimator of $p$ for the binomial (truncated) distribution | A quick and--usually--easy way to verify an estimator is to apply it to simulated data. I will describe this approach in a way that generalizes to any estimator in any situation.
Begin by coding your estimator. Here is an R implementation. Its input is a sample in an array x. It outputs the estimated parameter $\hat p(x).$
estimator <- function(x) 2 * (1 - 1 / mean(x))
You will need to generate random datasets. This function creates independent random Binomial$(m,p)$ variables truncated to exceed the value of $k:$
rbinom.trunc <- function(n, m, p, k = 0) qbinom(runif(n, pbinom(k, m, p), 1), m, p)
The simplest check is whether, on average, (1) the estimator's value is close to the parameter value for a range of parameter values and (2) that it gets closer as the sample size increases. That needs a double loop, implemented below using the outer function in R, which takes care of running the following simulate function for a specified sample size n and parameter value p:
simulate <- Vectorize(function(n, p, n.sim) {
mean(apply(matrix(rbinom.trunc(n * n.sim, 2, p), n), 2, estimator))
}, c("n", "p"))
The third argument n.sim is the number of samples to generate. These are placed into a matrix, one sample per column, and the estimator is applied to each column to produce its estimate $\hat p.$ simulate returns the mean of all these estimates. (For a more detailed study, portray the entire set of estimates graphically with a histogram, probability plot, frequency plot, or whatever.) Here is an example of its use, where the values of $n,$ $p,$ and $\hat p$ are collected into a data frame for visualization:
n <- rev(c(2, 5, 10, 20, 50))
p <- c(0.05, 0.2, 0.5, 0.7, 0.9)
n.sim <- 5e2
X <- data.frame(n = factor(rep(n, length(p))),
p = rep(p, each = length(n)),
Estimate = c(outer(n, p, simulate, n.sim = n.sim)))
When I ran it, the output indicates the estimator is biased low for tiny values of $n,$ but once $n \ge 5$ or so, it is accurate. This doesn't mean it's a good estimator (we wouldn't expect the MLE to be a great one for small samples), but clearly it works.
This is a gussied-up version of the ggplot2 visualization of X created by ggplot(X, aes(p, Estimate, color = n)) + geom_point(). You can also make a usable, not-quite-so-pretty plot with the base plot command, as in with(X, plot(p, Estimate)).
Apart from the lines that specify the range of sample sizes n, the range of parameter values p, and the simulation size n.sim, this solution requires five lines of code for the five basic steps to estimate--generate--simulate--organize--summarize the results. Carrying out this kind of check often is so quick and easy (the computation time is negligible) that it's always worth doing when you care about your answer. | Maximum likelihood estimator of $p$ for the binomial (truncated) distribution | A quick and--usually--easy way to verify an estimator is to apply it to simulated data. I will describe this approach in a way that generalizes to any estimator in any situation.
Begin by coding your | Maximum likelihood estimator of $p$ for the binomial (truncated) distribution
A quick and--usually--easy way to verify an estimator is to apply it to simulated data. I will describe this approach in a way that generalizes to any estimator in any situation.
Begin by coding your estimator. Here is an R implementation. Its input is a sample in an array x. It outputs the estimated parameter $\hat p(x).$
estimator <- function(x) 2 * (1 - 1 / mean(x))
You will need to generate random datasets. This function creates independent random Binomial$(m,p)$ variables truncated to exceed the value of $k:$
rbinom.trunc <- function(n, m, p, k = 0) qbinom(runif(n, pbinom(k, m, p), 1), m, p)
The simplest check is whether, on average, (1) the estimator's value is close to the parameter value for a range of parameter values and (2) that it gets closer as the sample size increases. That needs a double loop, implemented below using the outer function in R, which takes care of running the following simulate function for a specified sample size n and parameter value p:
simulate <- Vectorize(function(n, p, n.sim) {
mean(apply(matrix(rbinom.trunc(n * n.sim, 2, p), n), 2, estimator))
}, c("n", "p"))
The third argument n.sim is the number of samples to generate. These are placed into a matrix, one sample per column, and the estimator is applied to each column to produce its estimate $\hat p.$ simulate returns the mean of all these estimates. (For a more detailed study, portray the entire set of estimates graphically with a histogram, probability plot, frequency plot, or whatever.) Here is an example of its use, where the values of $n,$ $p,$ and $\hat p$ are collected into a data frame for visualization:
n <- rev(c(2, 5, 10, 20, 50))
p <- c(0.05, 0.2, 0.5, 0.7, 0.9)
n.sim <- 5e2
X <- data.frame(n = factor(rep(n, length(p))),
p = rep(p, each = length(n)),
Estimate = c(outer(n, p, simulate, n.sim = n.sim)))
When I ran it, the output indicates the estimator is biased low for tiny values of $n,$ but once $n \ge 5$ or so, it is accurate. This doesn't mean it's a good estimator (we wouldn't expect the MLE to be a great one for small samples), but clearly it works.
This is a gussied-up version of the ggplot2 visualization of X created by ggplot(X, aes(p, Estimate, color = n)) + geom_point(). You can also make a usable, not-quite-so-pretty plot with the base plot command, as in with(X, plot(p, Estimate)).
Apart from the lines that specify the range of sample sizes n, the range of parameter values p, and the simulation size n.sim, this solution requires five lines of code for the five basic steps to estimate--generate--simulate--organize--summarize the results. Carrying out this kind of check often is so quick and easy (the computation time is negligible) that it's always worth doing when you care about your answer. | Maximum likelihood estimator of $p$ for the binomial (truncated) distribution
A quick and--usually--easy way to verify an estimator is to apply it to simulated data. I will describe this approach in a way that generalizes to any estimator in any situation.
Begin by coding your |
52,467 | Maximum likelihood estimator of $p$ for the binomial (truncated) distribution | As a sanity check, you can think of this as an iid sample from a shifted Bernoulli distribution with parameter $q=p^2/(1-(1-p)^2)$. This gives you the MLE of $q$. You can then in turn use functional equivalence of MLEs to obtain the MLE of $p$. | Maximum likelihood estimator of $p$ for the binomial (truncated) distribution | As a sanity check, you can think of this as an iid sample from a shifted Bernoulli distribution with parameter $q=p^2/(1-(1-p)^2)$. This gives you the MLE of $q$. You can then in turn use functional | Maximum likelihood estimator of $p$ for the binomial (truncated) distribution
As a sanity check, you can think of this as an iid sample from a shifted Bernoulli distribution with parameter $q=p^2/(1-(1-p)^2)$. This gives you the MLE of $q$. You can then in turn use functional equivalence of MLEs to obtain the MLE of $p$. | Maximum likelihood estimator of $p$ for the binomial (truncated) distribution
As a sanity check, you can think of this as an iid sample from a shifted Bernoulli distribution with parameter $q=p^2/(1-(1-p)^2)$. This gives you the MLE of $q$. You can then in turn use functional |
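A quick numerical check of this equivalence in R (the data below are made up): from the shifted-Bernoulli view, $q = p^2/(1-(1-p)^2) = p/(2-p)$, so the MLE of $p$ is $\hat p = 2\hat q/(1+\hat q)$, which matches the closed-form estimator $2(1 - 1/\bar x)$ verified by simulation in the previous answer.
x <- c(1, 2, 2, 1, 1, 2, 1, 2, 2, 1)       # made-up sample from a Binomial(2, p) truncated to exceed 0
q_hat <- mean(x == 2)                      # MLE of q = P(X = 2 | X >= 1)
p_hat <- 2 * q_hat / (1 + q_hat)           # invert q = p / (2 - p)
all.equal(p_hat, 2 * (1 - 1 / mean(x)))    # TRUE: the two routes agree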
52,468 | Taylor expansion in Hoeffding's Lemma proof | This is the mean-value form of Taylor's theorem:
$$f(x)=f(0)+xf'(0)+\frac{x^2}{2}f''(c)$$
where $c$ is between $0$ and $x$
Take $x=h$ and $c=h\theta$ | Taylor expansion in Hoeffding's Lemma proof | This is the mean-value form of Taylor's theorem:
$$f(x)=f(0)+xf'(0)+\frac{x^2}{2}f''(c)$$
where $c$ is between $0$ and $x$
Take $x=h$ and $c=h\theta$ | Taylor expansion in Hoeffding's Lemma proof
This is the mean-value form of Taylor's theorem:
$$f(x)=f(0)+xf'(0)+\frac{x^2}{2}f''(c)$$
where $c$ is between $0$ and $x$
Take $x=h$ and $c=h\theta$ | Taylor expansion in Hoeffding's Lemma proof
This is the mean-value form of Taylor's theorem:
$$f(x)=f(0)+xf'(0)+\frac{x^2}{2}f''(c)$$
where $c$ is between $0$ and $x$
Take $x=h$ and $c=h\theta$ |
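For context (an addition, not part of the original answer): in the standard proof of Hoeffding's lemma this expansion is applied to $L(h)=\ln \mathbb E\left[e^{hX}\right]$ for a random variable $X$ with $\mathbb E[X]=0$ and $a\le X\le b$. Since $L(0)=0$, $L'(0)=\mathbb E[X]=0$ and $L''(c)\le (b-a)^2/4$ for every $c$, the expansion gives
$$L(h)\le \frac{h^2(b-a)^2}{8},\qquad\text{that is,}\qquad \mathbb E\left[e^{hX}\right]\le e^{h^2(b-a)^2/8}.$$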
52,469 | Unexpected distribution of ab-cd where a,b,c,d are independent and N(0,1) distributed | Writing
$$ab - cd = \left(\left[\left(\frac{a+b}{\sqrt 2}\right)^2 + \left(\frac{c-d}{\sqrt 2}\right)^2 \right] - \left[\left(\frac{a-b}{\sqrt 2}\right)^2 + \left(\frac{c+d}{\sqrt 2}\right)^2 \right]\right)/2$$
and noting that $(a+b, c+d, a-b, c-d)/\sqrt{2}$ has a standard Normal distribution, it is immediate (from the definitions of chi-square distributions arising from sums of squares of independent standard Normal variables) that the distribution of $ab -cd$ is that of one-half the difference of two independent chi-squared(2) variables.
Since chi-squared(2) variables have Exponential$(1/2)$ distributions, $ab-cd$ is distributed as the difference of two Exponential$(1)$ distributions. Since the characteristic function of an Exponential$(1)$ distribution is $\phi(t) = 1/(1 - it),$ the cf. of the difference is
$$\phi(t)\phi(-t) = \frac{1}{1+t^2}.$$
The Fourier Transform of that is proportional to $f(\omega)=\exp(-|\omega|),$ showing that the absolute value $|ab-cd|$ must have a density proportional to $f(x)$ for $x\ge 0:$ that's the Exponential$(1)$ distribution.
The thread at Probability function for difference between two i.i.d. Exponential r.v.s gives a direct demonstration of this latter result via integration of the convolution.
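A quick simulation check of this conclusion in R (a sketch; the sample size is arbitrary):
set.seed(1)
n <- 1e5
a <- rnorm(n); b <- rnorm(n); c <- rnorm(n); d <- rnorm(n)
z <- a * b - c * d
mean(abs(z))                        # close to 1, the Exponential(1) mean
ks.test(abs(z), "pexp", rate = 1)   # consistent with an Exponential(1) distribution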
Reference
(Courtesy Glen B: see comments to the question.)
The symmetry of standard Normal distributions around zero and the independence of $d$ from $(a,b,c)$ imply $(a,b,c,-d)$ has the same distribution as $(a,b,c,d),$ whence the distribution of $|ab-cd|$ is the same as that of $|ab-c(-d)|=|ab+cd|.$
A brief analysis of the question in this form appears in Stuart & Ord, Kendall's Advanced Theory of Statistics Volume I (3rd Ed. 1987) in the first half of exercise 11.21:
$x_r,y_r,$ $r=1,2,\ldots,$ are independent standardized normal variates. Show that $z= x_1y_1 + x_2y_2$ is distributed exactly as $w_1-w_2,$ where the $w_j$ are independent with f.f. [probability density] $e^{-w_j}$ [NB: only for $w_j\ge 0$] and hence or otherwise that $z$ is distributed in the Laplace form $g(z)=\frac{1}{2}\exp\left(-|z|\right),$ and that $|z|$ is again exponentially distributed. | Unexpected distribution of ab-cd where a,b,c,d are independent and N(0,1) distributed | Writing
$$ab - cd = \left(\left[\left(\frac{a+b}{\sqrt 2}\right)^2 + \left(\frac{c+d}{\sqrt 2}\right)^2 \right] - \left[\left(\frac{a-b}{\sqrt 2}\right)^2 + \left(\frac{c-d}{\sqrt 2}\right)^2 \right] | Unexpected distribution of ab-cd where a,b,c,d are independent and N(0,1) distributed
Writing
$$ab - cd = \left(\left[\left(\frac{a+b}{\sqrt 2}\right)^2 + \left(\frac{c-d}{\sqrt 2}\right)^2 \right] - \left[\left(\frac{a-b}{\sqrt 2}\right)^2 + \left(\frac{c+d}{\sqrt 2}\right)^2 \right]\right)/2$$
and noting that $(a+b, c+d, a-b, c-d)/\sqrt{2}$ has a standard Normal distribution, it is immediate (from the definitions of chi-square distributions arising from sums of squares of independent standard Normal variables) that the distribution of $ab -cd$ is that of one-half the difference of two independent chi-squared(2) variables.
Since chi-squared(2) variables have Exponential$(1/2)$ distributions, $ab-cd$ is distributed as the difference of two Exponential$(1)$ distributions. Since the characteristic function of an Exponential$(1)$ distribution is $\phi(t) = 1/(1 - it),$ the cf. of the difference is
$$\phi(t)\phi(-t) = \frac{1}{1+t^2}.$$
The Fourier Transform of that is proportional to $f(\omega)=\exp(-|\omega|),$ showing that the absolute value $|ab-cd|$ must have a density proportional to $f(x)$ for $x\ge 0:$ that's the Exponential$(1)$ distribution.
The thread at Probability function for difference between two i.i.d. Exponential r.v.s gives a direct demonstration of this latter result via integration of the convolution.
Reference
(Courtesy Glen B: see comments to the question.)
The symmetry of standard Normal distributions around zero and the independence of $d$ from $(a,b,c)$ imply $(a,b,c,-d)$ has the same distribution as $(a,b,c,d),$ whence the distribution of $|ab-cd|$ is the same as that of $|ab-c(-d)|=|ab+cd|.$
A brief analysis of the question in this form appears in Stuart & Ord, Kendall's Advanced Theory of Statistics Volume I (3rd Ed. 1987) in the first half of exercise 11.21:
$x_r,y_r,$ $r=1,2,\ldots,$ are independent standardized normal variates. Show that $z= x_1y_1 + x_2y_2$ is distributed exactly as $w_1-w_2,$ where the $w_j$ are independent with f.f. [probability density] $e^{-w_j}$ [NB: only for $w_j\ge 0$] and hence or otherwise that $z$ is distributed in the Laplace form $g(z)=\frac{1}{2}\exp\left(-|z|\right),$ and that $|z|$ is again exponentially distributed. | Unexpected distribution of ab-cd where a,b,c,d are independent and N(0,1) distributed
Writing
$$ab - cd = \left(\left[\left(\frac{a+b}{\sqrt 2}\right)^2 + \left(\frac{c+d}{\sqrt 2}\right)^2 \right] - \left[\left(\frac{a-b}{\sqrt 2}\right)^2 + \left(\frac{c-d}{\sqrt 2}\right)^2 \right] |
52,470 | Unexpected distribution of ab-cd where a,b,c,d are independent and N(0,1) distributed | There is an extensive theory on the properties of random matrices, including the distribution of their determinants.
From the answer here for example you can see that if you form the matrix $W=AA^T$, where the elements of $A$ are your $\mathcal N (0,1)$ random variables, then
$$ \det W \sim \chi^2_2 \chi^2_1 $$
namely the determinant of $W$ has the distribution of a product of two independent $\chi^2$ random variables with 2 and 1 degrees of freedom.
The distribution of the product can be derived using the equivalent distribution of a product of Gamma random variables. In the general case this is expressed in terms of a modified Bessel function, but here it reduces to a simpler expression, and using the transformation $|\det A|=\sqrt{\det W}$ it can be worked out quite easily that indeed $|\det A|$ has an exponential distribution with mean 1.
There might be a more direct and maybe simpler way of deriving this property that I'm missing at the moment - maybe someone else can find it.
From the answer here for example you can see that if you form the matrix $W=AA^T$, | Unexpected distribution of ab-cd where a,b,c,d are independent and N(0,1) distributed
There is an extensive theory on the properties of random matrices, including the distribution of their determinants.
From the answer here for example you can see that if you form the matrix $W=AA^T$, where the elements of $A$ are your $\mathcal N (0,1)$ random variables, then
$$ \det W \sim \chi^2_2 \chi^2_1 $$
namely the determinant of $W$ has the distribution of a product of two independent $\chi^2$ random variables with 2 and 1 degrees of freedom.
The distribution of the product can be derived using the equivalent distribution of a product of Gamma random variables. In the general case this is expressed in terms of a modified Bessel function, but here it reduces to a simpler expression, and using the transformation $|\det A|=\sqrt{\det W}$ it can be worked out quite easily that indeed $|\det A|$ has an exponential distribution with mean 1.
There might be a more direct and maybe simpler way of deriving this property that I'm missing at the moment - maybe someone else can find it. | Unexpected distribution of ab-cd where a,b,c,d are independent and N(0,1) distributed
There is an extensive theory on the properties of random matrices, including the distribution of their determinants.
From the answer here for example you can see that if you form the matrix $W=AA^T$, |
52,471 | Motivating use of Bayesian splines in excess mortality estimation | The death rate can't be negative (the pandemic was bad but it wasn't zombie apocalypse bad), so a natural way to enforce that is to fit an additive/linear model on the log scale (hence why the model has offset $\log p$ and not simply $p$), and then map back to the interval $[0, \infty)$ via the inverse of the log, the exponential function.
This is the common formulation for GLMs, where the $\log$ would be the link function and $\exp$ its inverse.
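For concreteness, a rough analogue in R using mgcv (this is not the authors' implementation; the data frame and column names are hypothetical), with the penalized spline discussed next entering as the s() term:
library(mgcv)
# mortality: hypothetical weekly data with columns deaths, week and population
fit <- gam(deaths ~ s(week, bs = "ps") + offset(log(population)),
           family = quasipoisson(link = "log"),
           data = mortality, method = "REML")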
The authors don't really explain what they mean by a Bayesian spline; typically in this kind of framework we choose a spline basis of a given size and then use a penalized fit to shrink the coefficients of the spline to minimise a penalized fit criterion with penalty on the wiggliness of the estimated spline. In a Bayesian context this penalty can be thought of as a prior on the wiggliness of the spline, which can also be thought of as gaussian priors on the coefficients (IIRC). | Motivating use of Bayesian splines in excess mortality estimation | The death rate can't be negative (the pandemic was bad but it wasn't zombie apocalypse bad), so a natural way to enforce that is to fit an additive/linear model on the log scale (hence why the model h | Motivating use of Bayesian splines in excess mortality estimation
The death rate can't be negative (the pandemic was bad but it wasn't zombie apocalypse bad), so a natural way to enforce that is to fit an additive/linear model on the log scale (hence why the model has offset $\log p$ and not simply $p$), and then map back to the interval $[0, \infty)$ via the inverse of the log, the exponential function.
This is the common formulation for GLMs, where the $\log$ would be the link function and $\exp$ its inverse.
The authors don't really explain what they mean by a Bayesian spline; typically in this kind of framework we choose a spline basis of a given size and then use a penalized fit to shrink the coefficients of the spline to minimise a penalized fit criterion with penalty on the wiggliness of the estimated spline. In a Bayesian context this penalty can be thought of as a prior on the wiggliness of the spline, which can also be thought of as gaussian priors on the coefficients (IIRC). | Motivating use of Bayesian splines in excess mortality estimation
The death rate can't be negative (the pandemic was bad but it wasn't zombie apocalypse bad), so a natural way to enforce that is to fit an additive/linear model on the log scale (hence why the model h |
52,472 | Are there any way of removing impact of a certain data from a trained model (about "right to forget") | The keyword you're looking for is machine unlearning; if you search for that on Google scholar you'll find a large number of relevant studies. This is an active area of research for exactly the reason you described. For CNNs, it seems to me that there is not really a great solution yet (but I might be wrong).
For example, one solution that people (Bourtoule et al. 2021) have proposed is to split the training data into separate shards (=smaller subdatasets) and then train separate models on each of these shards. For prediction/inference, the output of these separate weak learners can then be combined in various ways (see Boosting). Why is this helpful for unlearning? Well, the influence of a single training point is thereby limited to a single submodel, and if that datapoint must be removed, then "only" this submodel has to be retrained.
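A toy sketch of that sharding idea in R (glm() stands in for the real learners; dat is assumed to be an existing data frame with a binary outcome column y):
k <- 5
shard <- sample(rep(1:k, length.out = nrow(dat)))           # assign rows to shards
fits <- lapply(1:k, function(s)
  glm(y ~ ., data = dat[shard == s, ], family = binomial))  # one learner per shard
predict_ensemble <- function(newdata)                       # combine the shard predictions
  rowMeans(sapply(fits, predict, newdata = newdata, type = "response"))
unlearn <- function(i) {                                    # forget row i: retrain only its shard
  s <- shard[i]
  keep <- shard == s & seq_len(nrow(dat)) != i
  fits[[s]] <<- glm(y ~ ., data = dat[keep, ], family = binomial)
}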
There are various other methods proposed, but as I said, it seems to me to be an essentially open research question. A comprehensive reference list can be found here.
Two remarks that may or may not be of interest:
There is a connection to differential privacy, since the latter requires model outputs to be indistinguishable to a certain degree when individual datapoints in the training dataset are substituted. Does this completely eliminate the need for machine unlearning techniques? No. (Imagine what happens when 50% of the training dataset demand that their data be unlearned.)
How hard machine unlearning is depends largely on the considered model class. E.g., for linear Gaussian models and Gaussian Processes, simple recursive updating rules exist that can be exploited very cheaply. (Think recursive least squares, just in reverse.) In general, I think if a model class allows for a simple, closed-form recursive update procedure to include a single new datapoint, then it will also be possible to do the same thing in reverse. This obviously excludes all models that are batch-trained using numerical optimization procedures. | Are there any way of removing impact of a certain data from a trained model (about "right to forget" | The keyword you're looking for is machine unlearning; if you search for that on Google scholar you'll find a large number of relevant studies. This is an active active area of research for exactly the | Are there any way of removing impact of a certain data from a trained model (about "right to forget")
The keyword you're looking for is machine unlearning; if you search for that on Google scholar you'll find a large number of relevant studies. This is an active area of research for exactly the reason you described. For CNNs, it seems to me that there is not really a great solution yet (but I might be wrong).
For example, one solution that people (Bourtoule et al. 2021) have proposed is to split the training data into separate shards (=smaller subdatasets) and then train separate models on each of these shards. For prediction/inference, the output of these separate weak learners can then be combined in various ways (see Boosting). Why is this helpful for unlearning? Well, the influence of a single training point is thereby limited to a single submodel, and if that datapoint must be removed, then "only" this submodel has to be retrained.
There are various other methods proposed, but as I said, it seems to me to be an essentially open research question. A comprehensive reference list can be found here.
Two remarks that may or may not be of interest:
There is a connection to differential privacy, since the latter requires model outputs to be indistinguishable to a certain degree when individual datapoints in the training dataset are substituted. Does this completely eliminate the need for machine unlearning techniques? No. (Imagine what happens when 50% of the training dataset demand that their data be unlearned.)
How hard machine unlearning is depends largely on the considered model class. E.g., for linear Gaussian models and Gaussian Processes, simple recursive updating rules exist that can be exploited very cheaply. (Think recursive least squares, just in reverse.) In general, I think if a model class allows for a simple, closed-form recursive update procedure to include a single new datapoint, then it will also be possible to do the same thing in reverse. This obviously excludes all models that are batch-trained using numerical optimization procedures. | Are there any way of removing impact of a certain data from a trained model (about "right to forget"
The keyword you're looking for is machine unlearning; if you search for that on Google scholar you'll find a large number of relevant studies. This is an active active area of research for exactly the |
52,473 | Are there any way of removing impact of a certain data from a trained model (about "right to forget") | It is possible, but amounts to the same effort as retraining the model.
The weights $\theta$ at iteration $t$ (a mini-batch within an epoch) are defined as:
$$\theta_t=\theta_{t-1}-\nabla_{\theta_{t-1}}\mathcal L_{t-1}$$
By recursion, it becomes obvious that:
$$\theta_t=\theta_0-\sum_{i=0}^{t-1}\nabla_{\theta_i}\mathcal L_i$$
Where $\mathcal L_i$ is the mini-batch loss function at a given iteration (again, mini-batch within epochs).
So, even when the data point you want to remove is not included in the current mini-batch, its effect is still present in the current state of the weights, because of the previous iterations in which it was used to derive a gradient.
To fully remove the effects of a data-point, you'd have to backtrack weights all the way back to the first time it was used to derive gradients.
Then, you'd calculate its individual contribution to the gradient and remove that.
But, the next iteration would be using a new set of weights, which means that the new loss function calculation would need to be redone.
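A small R illustration of this point on a toy least-squares problem with a single weight (everything here is made up): naively adding back one point's accumulated gradient steps does not recover the weight you would get by retraining without that point, because its indirect influence on the trajectory remains.
set.seed(42)
n <- 20; x <- rnorm(n); y <- 2 * x + rnorm(n)
lr <- 0.05; steps <- 200
theta <- 0; contrib1 <- 0
for (t in 1:steps) {
  g_all <- -2 * mean(x * (y - theta * x))         # full-batch gradient
  g_pt1 <- -2 * x[1] * (y[1] - theta * x[1]) / n  # point 1's share of that gradient
  contrib1 <- contrib1 + lr * g_pt1
  theta <- theta - lr * g_all
}
theta_naive <- theta + contrib1                   # undo point 1's direct contributions
theta_retrained <- 0
for (t in 1:steps)
  theta_retrained <- theta_retrained - lr * (-2 * mean(x[-1] * (y[-1] - theta_retrained * x[-1])))
c(naive = theta_naive, retrained = theta_retrained)  # these generally differ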
In other words, it's the same effort as retraining the whole model in most applications (I'm sure some very specific training schemes and architectures might allow for simpler solutions). | Are there any way of removing impact of a certain data from a trained model (about "right to forget" | It is possible, but amounts to the same effort as retraining the model.
The weights $\theta$ at iteration $t$ (a mini-batch within an epoch) are defined as:
$$\theta_t=\theta_{t-1}-\nabla_{\theta_{t- | Are there any way of removing impact of a certain data from a trained model (about "right to forget")
It is possible, but amounts to the same effort as retraining the model.
The weights $\theta$ at iteration $t$ (a mini-batch within an epoch) are defined as:
$$\theta_t=\theta_{t-1}-\nabla_{\theta_{t-1}}\mathcal L_{t-1}$$
By recursion, it becomes obvious that:
$$\theta_t=\theta_0-\sum_{i=0}^{t-1}\nabla_{\theta_i}\mathcal L_i$$
Where $\mathcal L_i$ is the mini-batch loss function at a given iteration (again, mini-batch within epochs).
So, even when the data point you want to remove is not included in the current mini-batch, its effect is still present in the current state of the weights, because of the previous iterations in which it was used to derive a gradient.
To fully remove the effects of a data-point, you'd have to backtrack weights all the way back to the first time it was used to derive gradients.
Then, you'd calculate its individual contribution to the gradient and remove that.
But, the next iteration would be using a new set of weights, which means that the new loss function calculation would need to be redone.
In other words, it's the same effort as retraining the whole model in most applications (I'm sure some very specific training schemes and architectures might allow for simpler solutions). | Are there any way of removing impact of a certain data from a trained model (about "right to forget"
It is possible, but amounts to the same effort as retraining the model.
The weights $\theta$ at iteration $t$ (a mini-batch within an epoch) are defined as:
$$\theta_t=\theta_{t-1}-\nabla_{\theta_{t- |
52,474 | Can dropping an insignificant factor from a model make the model worse? | In this case you are relying on the wrong test to decide that Zone is not significant. Note that the coefficients of the Zone effect are large (>30) with huge standard errors. This happens when the likelihood keeps monotonically increasing as the estimate goes to infinity. In such cases the Wald test that gives you the z and p-values is useless. What is happening, I think, is that the Crocodile zone has 0 events, so the relative risk of the other zones compared to it is infinite.
If you were to do a likelihood ratio test for Zone as a covariate, you would see that it is significant (in fact, you pretty much did it by dropping the effect and looking at the likelihood again, you just did not compute the p-value), so you would not want to drop it. | Can dropping an insignificant factor from a model make the model worse? | In this case you are relying on the wrong test to decide that Zone is not significant. Note that the coefficients of the Zone effect are large (>30) with huge standard errors. This happens when the li | Can dropping an insignificant factor from a model make the model worse?
In this case you are relying on the wrong test to decide that Zone is not significant. Note that the coefficients of the Zone effect are large (>30) with huge standard errors. This happens when the likelihood keeps monotonically increasing as the estimate goes to infinity. In such cases the Wald test that gives you the z and p-values is useless. What is happening, I think, is that the Crocodile zone has 0 events, so the relative risk of the other zones compared to it is infinite.
If you were to do a likelihood ratio test for Zone as a covariate, you would see that it is significant (in fact, you pretty much did it by dropping the effect and looking at the likelihood again, you just did not compute the p-value), so you would not want to drop it. | Can dropping an insignificant factor from a model make the model worse?
In this case you are relying on the wrong test to decide that Zone is not significant. Note that the coefficients of the Zone effect are large (>30) with huge standard errors. This happens when the li |
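Returning to the likelihood-ratio test suggested in this answer, a minimal R sketch (the model family, data, and variable names are assumptions, since the original fit is not shown):
fit_full    <- glm(events ~ Zone + season + offset(log(exposure)), family = poisson, data = dat)
fit_reduced <- update(fit_full, . ~ . - Zone)
anova(fit_reduced, fit_full, test = "Chisq")  # likelihood ratio test for Zone
# drop1(fit_full, test = "Chisq") gives the same comparison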
52,475 | Can dropping an insignificant factor from a model make the model worse? | AIC is a function of the number of parameters within your model k and its likelihood L. Formally, AIC = 2k - 2 ln(L). Since a smaller AIC is better, the term 2k serves as a penalty based on the number of parameters. Thus, AIC represents a trade-off between complexity (k) and fit (L). Imagine two models with similar likelihoods, but one model features 1000 parameters and another 2 (extreme example). The simpler model with 2 parameters is typically preferred (parsimony).
In this case, you have...
Model 1: k = 5+1, 2 ln(L) = -739.89 => AIC1 = 12 + 739.89
Model 2: k = 2+1, 2 ln(L) = -769.16 => AIC2 = 6 + 769.16
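Plugging the numbers in (R used only as a calculator):
aic1 <- 2 * 6 + 739.89    # k = 6, -2 ln(L) = 739.89
aic2 <- 2 * 3 + 769.16    # k = 3, -2 ln(L) = 769.16
c(AIC1 = aic1, AIC2 = aic2)   # 751.89 vs 775.16: the larger model has the lower (better) AIC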
So even though you have reduced the model by 3 parameters, the likelihood or fit of your initial model was enough to offset the parameter penalty, resulting in a better AIC for the larger model. | Can dropping an insignificant factor from a model make the model worse? | AIC is a function of the number of parameters within your model k and its likelihood L. Formally, AIC = 2k - 2 ln(L). Since a smaller AIC is better, the term 2k serves as a penalty based on the number | Can dropping an insignificant factor from a model make the model worse?
AIC is a function of the number of parameters within your model k and its likelihood L. Formally, AIC = 2k - 2 ln(L). Since a smaller AIC is better, the term 2k serves as a penalty based on the number of parameters. Thus, AIC represents a trade-off between complexity (k) and fit (L). Imagine two models with similar likelihoods, but one model features 1000 parameters and another 2 (extreme example). The simpler model with 2 parameters is typically preferred (parsimony).
In this case, you have...
Model 1: k = 5+1, 2 ln(L) = -739.89 => AIC1 = 12 + 739.89
Model 2: k = 2+1, 2 ln(L) = -769.16 => AIC2 = 6 + 769.16
So even though you have reduced the model by 3 parameters, the likelihood or fit of your initial model was enough to offset the parameter penalty, resulting in a better AIC for the larger model. | Can dropping an insignificant factor from a model make the model worse?
AIC is a function of the number of parameters within your model k and its likelihood L. Formally, AIC = 2k - 2 ln(L). Since a smaller AIC is better, the term 2k serves as a penalty based on the number |
52,476 | Bootstrap optimism corrected - results interpretation | OK, so I've identified a few problems with your approach in the comments. The key thing to remember here is that "the model" is really a process and not a single thing. Anything you do in the process of creating the model is technically part of "the model" and so it needs to be validated. For example, you mention using "best hyperparameters" but we technically don't know what those are. All we can really say is which hyperparameters lead to the smallest loss using these data, and -- now here comes the important part -- the "best hyperparameters" might change were we to use different data to fit a model. That is the concept of sampling variability in a nutshell, so you need to evaluate how sensitive your model is to that variability.
In what follows, I'm going to basically show you:
a) How to properly validate your model in cases for which train/test splits are not ideal, and
b) How to construct optimism corrected calibration curves.
I will do this all in sklearn. We'll need a model which requires some hyperparameters to be selected via cross validation. To this end, we'll use sklearn.linear_model.LogisticRegression with an l2 penalty. The approaches we will develop will generalize to other models with more parameters to select, but this is the simplest non-trivial example I could think of. For data, we will use the breast cancer data that comes with sklearn. I will include all relevant code at the end, choosing only to expose crucial code for understanding concepts. Let's begin.
"The Model" vs. A Model
Earlier, I tried to make a distinction between "The Model" and A Model. A model is whatever is going to do the learning (e.g. Logistic regression, random forests, neural nets, whatever). That part isn't as important as "The Model" because we need to validate "The Model" as opposed to A Model.
A handy trick for understanding what is part of "The Model" is to ask yourself the following question:
Were I to get different training data, what parts of the prediction pipeline are subject to change?
If you're standardizing your inputs, the standardization constants (i.e. the means and variances of the predictors) are going to change, so that is part of "The Model". If you're using a feature selection method, the features you select might change, so that is part of "The Model". Everything in "The Model" needs to be validated.
Luckily, much of that stuff can be put into sklearn.pipeline.Pipeline. So, if I'm using logistic regression with an l2 penalty, my code might look like
pipeline_components = [
('scaling',StandardScaler()),
('logistic_regression', LogisticRegression(penalty='l2', max_iter = 1_000_000))
]
a_model = Pipeline(pipeline_components)
Now, if I had chosen the strength of the l2 penalty (C in LogisticRegression) then I would have "The Model". However, that parameter needs to be estimated via cross validation. To that end, we will use GridSearchCV.
param_grid = {'logistic_regression__C': 1/np.logspace(-2, 2, base = np.exp(1))}
the_model = GridSearchCV(a_model, param_grid=param_grid, cv = inner_cv, scoring = brier_score, verbose=0)  # inner_cv and brier_score are defined in the full code (gist) linked at the end
Now we have "The Model". Remember, GridSearchCV has a .fit method, and once it is fit we can call .predict. Hence, GridSearchCV is really an estimator.
At this point, we can pass the_model to something like sklearn.model_selection.cross_validate in order to do the optimism corrected bootstrap. Alternatively, Frank Harrell has mentioned that 100 repeats of 10 fold CV are about as good as bootstrapping, and that requires a bunch less code, so I will opt for that.
One thing to keep in mind: There are two levels of cross validation here. There is the inner fold (meant to choose the optimal hyperparameters) and the outer fold (the 100 repeats of 10 fold, or the optimism corrected bootstrap). Keep track of the inner fold, because we'll need that later.
Calibration, Such an Aggravation
Now we have "The Model". We are capable of estimating the performance of "The Model" for a given metric via this nested cross validation structure (be it bootstrapped or otherwise). Now onto calibration via the optimism corrected bootstrap. Much of this is really similar to my blog post. A calibration curve is in essence an estimate, so we just do the optimsim corrected bootstrap for the calibration curve. Let me demonstrate.
First, we need to fit "The model" and obtain probability estimates. This will give us the "apparent calibration" (or apparent performance as I call it in my blog post). I'm going to pre-specify some probability values to evaluate the calibration curve at. We'll use a lowess smoother to estimate the calibration curve
prange = np.linspace(0, 1, 25)
# Fit our model on all the data
best_model = the_model.fit(X, y).best_estimator_
# Estimate the risks from the best model
predicted_p = best_model.predict_proba(X)[:, 1]
# Compute the apparent calibration
apparent_cal = lowess(y, predicted_p, it=0, xvals=prange)
We might get something that looks like
Now, all we need to do is bootstrap this entire process
nsim = 500
optimism = np.zeros((nsim, prange.size))
for i in tqdm(range(nsim)):
# Bootstrap the original dataset
Xb, yb = resample(X, y)
# Fit the model, including the hyperparameter selection, on the bootstrapped data
fit = the_model.fit(Xb, yb).best_estimator_
# Get the risk estimates from the model fit on the bootstrapped predictions
predicted_risk_bs = fit.predict_proba(Xb)[:, 1]
# Fit a calibration curve to the predicted risk on bootstrapped data
smooth_p_bs = lowess(yb, predicted_risk_bs, it=0, xvals=prange)
# Apply the bootstrap model on the original data
predicted_risk_orig = fit.predict_proba(X)[:, 1]
# Fit a calibration curve on the original data using predictions from bootstrapped model
smooth_p_bs_orig = lowess(y, predicted_risk_orig, it=0, xvals=prange)
optimism[i] = smooth_p_bs - smooth_p_bs_orig
bias_corrected_cal = apparent_cal - optimism.mean(0)
Although I write a little class for the sake of ease in my blog post, the steps presented here are nearly identical. The only difference is that the estimate is not a single number, it's a function (namely the calibration curve). The result would look like
Note there is not much difference between the two curves. I anticipate this is due to the data and the fact we've explicitly traded off bias for variance by using a penalty.
The Code (I know you skipped here, don't lie).
See this gist | Bootstrap optimism corrected - results interpretation | OK, so I've identified a few problems with your approach in the comments. The key thing to remember here is that "the model" is really a process and not a single thing. Anything you do in the proces | Bootstrap optimism corrected - results interpretation
OK, so I've identified a few problems with your approach in the comments. The key thing to remember here is that "the model" is really a process and not a single thing. Anything you do in the process of creating the model is technically part of "the model" and so it needs to be validated. For example, you mention using "best hyperparameters" but we technically don't know what those are. All we can really say is which hyperparameters lead to the smallest loss using these data, and -- now here comes the important part -- the "best hyperparameters" might change were we to use different data to fit a model. That is the concept of sampling variability in a nutshell, so you need to evaluate how sensitive your model is to that variability.
In what follows, I'm going to basically show you:
a) How to properly validate your model in cases for which train/test splits are not ideal, and
b) How to construct optimism corrected calibration curves.
I will do this all in sklearn. We'll need a model which requires some hyperparameters to be selected via cross validation. To this end, we'll use sklearn.linear_model.LogisticRegression with an l2 penalty. The approaches we will develop will generalize to other models with more parameters to select, but this is the simplest non-trivial example I could think of. For data, we will use the breast cancer data that comes with sklearn. I will include all relevant code at the end, choosing only to expose crucial code for understanding concepts. Let's begin.
"The Model" vs. A Model
Earlier, I tried to make a distinction between "The Model" and A Model. A model is whatever is going to do the learning (e.g. Logistic regression, random forests, neural nets, whatever). That part isn't as important as "The Model" because we need to validate "The Model" as opposed to A Model.
A handy trick for understanding what is part of "The Model" is to ask yourself the following question:
Were I to get different training data, what parts of the prediction pipeline are subject to change?
If you're standardizing your inputs, the standardization constants (i.e. the means and variances of the predictors) are going to change, so that is part of "The Model". If you're using a feature selection method, the features you select might change, so that is part of "The Model". Everything in "The Model" needs to be validated.
Luckily, much of that stuff can be put into sklearn.pipeline.Pipeline. So, if I'm using logistic regression with an l2 penalty, my code might look like
pipeline_components = [
('scaling',StandardScaler()),
('logistic_regression', LogisticRegression(penalty='l2', max_iter = 1_000_000))
]
a_model = Pipeline(pipeline_components)
Now, if I had chosen the strength of the l2 penalty (C in LogisticRegression) then I would have "The Model". However, that parameter needs to be estimated via cross validation. To that end, we will use GridSearchCV.
param_grid = {'logistic_regression__C': 1/np.logspace(-2, 2, base = np.exp(1))}
the_model = GridSearchCV(a_model, param_grid=param_grid, cv = inner_cv, scoring = brier_score, verbose=0)  # inner_cv and brier_score are defined in the full code (gist) linked at the end
Now we have "The Model". Remember, GridSearchCV has a .fit method, and once it is fit we can call .predict. Hence, GridSearchCV is really an estimator.
At this point, we can pass the_model to something like sklearn.model_selection.cross_validate in order to do the optimism corrected bootstrap. Alternatively, Frank Harrell has mentioned that 100 repeats of 10 fold CV are about as good as bootstrapping, and that requires a bunch less code, so I will opt for that.
One thing to keep in mind: There are two levels of cross validation here. There is the inner fold (meant to choose the optimal hyperparameters) and the outer fold (the 100 repeats of 10 fold, or the optimism corrected bootstrap). Keep track of the inner fold, because we'll need that later.
Calibration, Such an Aggravation
Now we have "The Model". We are capable of estimating the performance of "The Model" for a given metric via this nested cross validation structure (be it bootstrapped or otherwise). Now onto calibration via the optimism corrected bootstrap. Much of this is really similar to my blog post. A calibration curve is in essence an estimate, so we just do the optimsim corrected bootstrap for the calibration curve. Let me demonstrate.
First, we need to fit "The model" and obtain probability estimates. This will give us the "apparent calibration" (or apparent performance as I call it in my blog post). I'm going to pre-specify some probability values to evaluate the calibration curve at. We'll use a lowess smoother to estimate the calibration curve
prange = np.linspace(0, 1, 25)
# Fit our model on all the data
best_model = the_model.fit(X, y).best_estimator_
# Estimate the risks from the best model
predicted_p = best_model.predict_proba(X)[:, 1]
# Compute the apparent calibration
apparent_cal = lowess(y, predicted_p, it=0, xvals=prange)
We might get something that looks like
Now, all we need to do is bootstrap this entire process
nsim = 500
optimism = np.zeros((nsim, prange.size))
for i in tqdm(range(nsim)):
# Bootstrap the original dataset
Xb, yb = resample(X, y)
# Fit the model, including the hyperparameter selection, on the bootstrapped data
fit = the_model.fit(Xb, yb).best_estimator_
# Get the risk estimates from the model fit on the bootstrapped predictions
predicted_risk_bs = fit.predict_proba(Xb)[:, 1]
# Fit a calibration curve to the predicted risk on bootstrapped data
smooth_p_bs = lowess(yb, predicted_risk_bs, it=0, xvals=prange)
# Apply the bootstrap model on the original data
predicted_risk_orig = fit.predict_proba(X)[:, 1]
# Fit a calibration curve on the original data using predictions from bootstrapped model
smooth_p_bs_orig = lowess(y, predicted_risk_orig, it=0, xvals=prange)
optimism[i] = smooth_p_bs - smooth_p_bs_orig
bias_corrected_cal = apparent_cal - optimism.mean(0)
Although I write a little class for the sake of ease in my blog post, the steps presented here are nearly identical. The only difference is that the estimate is not a single number, it's a function (namely the calibration curve). The result would look like
Note there is not much difference between the two curves. I anticipate this is due to the data and the fact we've explicitly traded off bias for variance by using a penalty.
The Code (I know you skipped here, don't lie).
See this gist | Bootstrap optimism corrected - results interpretation
OK, so I've identified a few problems with your approach in the comments. The key thing to remember here is that "the model" is really a process and not a single thing. Anything you do in the proces |
52,477 | How can I know If LASSO logistic regression model is good enough to be feature selection tool? | Many analysts automatically assume that feature selection is a good idea. This never followed. Parsimony is the enemy of predictive discrimination. Perhaps more important, feature selection, whether using lasso or other methods, is unreliable. The way to tell if lasso is good enough is to test its resilience/stability using the bootstrap. The bootstrap will also inform you of how difficult it is to choose the penalty parameter $\lambda$ for the lasso as you'll probably see much different $\lambda$ selected over multiple resamples. For each bootstrap resample find the list of lasso-selected nonzero predictor coefficients and see how they vary. Also compute confidence intervals for the ranks of variable importance; you'll see these intervals are wide, exposing the difficulty of the task. These are discussed in one of the final chapters in BBR.
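A minimal R sketch of that bootstrap stability check with glmnet (x is assumed to be the predictor matrix and y the binary outcome; the number of resamples is arbitrary):
library(glmnet)
B <- 200
selected <- matrix(FALSE, B, ncol(x), dimnames = list(NULL, colnames(x)))
lambda_min <- numeric(B)
for (b in 1:B) {
  i <- sample(nrow(x), replace = TRUE)                   # bootstrap resample
  cvfit <- cv.glmnet(x[i, ], y[i], family = "binomial")
  lambda_min[b] <- cvfit$lambda.min
  selected[b, ] <- drop(as.matrix(coef(cvfit, s = "lambda.min")))[-1] != 0
}
sort(colMeans(selected), decreasing = TRUE)   # how often each predictor is selected
quantile(lambda_min, c(0.025, 0.5, 0.975))    # how much the chosen penalty varies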
Here is an example showing how poorly the lasso works in the best of situations where predictors are uncorrelated and the distribution of true unknown regression coefficients follows a Laplace distribution, which is what the lasso penalty is optimized for. You see that frequently variables are selected that have very small true coefficients and variables are not selected that have large coefficients. This is from here. | How can I know If LASSO logistic regression model is good enough to be feature selection tool? | Many analysts automatically assume that feature selection is a good idea. This never followed. Parsimony is the enemy of predictive discrimination. Perhaps more important, feature selection, whethe | How can I know If LASSO logistic regression model is good enough to be feature selection tool?
Many analysts automatically assume that feature selection is a good idea. This never followed. Parsimony is the enemy of predictive discrimination. Perhaps more important, feature selection, whether using lasso or other methods, is unreliable. The way to tell if lasso is good enough is to test its resilience/stability using the bootstrap. The bootstrap will also inform you of how difficult it is to choose the penalty parameter $\lambda$ for the lasso as you'll probably see much different $\lambda$ selected over multiple resamples. For each bootstrap resample find the list of lasso-selected nonzero predictor coefficients and see how they vary. Also compute confidence intervals for the ranks of variable importance; you'll see these intervals are wide, exposing the difficulty of the task. These are discussed in one of the final chapters in BBR.
Here is an example showing how poorly the lasso works in the best of situations where predictors are uncorrelated and the distribution of true unknown regression coefficients follows a Laplace distribution, which is what the lasso penalty is optimized for. You see that frequently variables are selected that have very small true coefficients and variables are not selected that have large coefficients. This is from here. | How can I know If LASSO logistic regression model is good enough to be feature selection tool?
Many analysts automatically assume that feature selection is a good idea. This never followed. Parsimony is the enemy of predictive discrimination. Perhaps more important, feature selection, whethe |
52,478 | How can I know If LASSO logistic regression model is good enough to be feature selection tool? | Lasso is a common regression technique for variable selection and regularization. By defining many cross validation folds and playing with different values of $\alpha$, you can find the best set of beta coefficients which confidently predicts your outcome without overfitting or underfitting. If the Lasso technique has shrunk the beta coefficients of any covariates to 0, you can either choose to drop these features since they do not contribute to the predictor or proceed in the knowledge that those covariates are essentially meaningless.
As such, consider a sample consisting of N observations, each with p covariates and a single outcome (typically the case for most regression problems). Essentially the objective of Lasso is to solve:
$$ \min_{ \beta_0, \beta } \left\{ \sum_{i=1}^N (y_i - \beta_0 - x_i^T \beta)^2 \right\} \text{ subject to } \sum_{j=1}^p |\beta_j| \leq \alpha. $$
Here $ \beta_0 $ is the constant coefficient, $ \beta:=(\beta_1,\beta_2,\ldots, \beta_p)$ is the coefficient vector, and $\alpha$ is a prespecified free parameter that determines the degree of regularization. | How can I know If LASSO logistic regression model is good enough to be feature selection tool? | Lasso is a common regression technique for variable selection and regularization. By defining many cross validation folds and playing with different values of $\alpha$, you can find the best set of be | How can I know If LASSO logistic regression model is good enough to be feature selection tool?
Lasso is a common regression technique for variable selection and regularization. By defining many cross validation folds and playing with different values of $\alpha$, you can find the best set of beta coefficients which confidently predicts your outcome without overfitting or underfitting. If the Lasso technique has shrunk the beta coefficients of any covariates to 0, you can either choose to drop these features since they do not contribute to the predictor or proceed in the knowledge that those covariates are essentially meaningless.
As such, consider a sample consisting of N observations, each with p covariates and a single outcome (typically the case for most regression problems). Essentially the objective of Lasso is to solve:
$$ \min_{ \beta_0, \beta } \left\{ \sum_{i=1}^N (y_i - \beta_0 - x_i^T \beta)^2 \right\} \text{ subject to } \sum_{j=1}^p |\beta_j| \leq \alpha. $$
Here $ \beta_0 $ is the constant coefficient, $ \beta:=(\beta_1,\beta_2,\ldots, \beta_p)$ is the coefficient vector, and $\alpha$ is a prespecified free parameter that determines the degree of regularization. | How can I know If LASSO logistic regression model is good enough to be feature selection tool?
Lasso is a common regression technique for variable selection and regularization. By defining many cross validation folds and playing with different values of $\alpha$, you can find the best set of be |
52,479 | Variance of a function of a random variable as function of the original variable | Let $X\sim \mathcal N(0,\sigma^2)$ denote a normal random variable and let $f$ be the function $$f(x) = \begin{cases}+1, & x > 0,\\-1, &x \leq 0.\end{cases}$$
Then, $f(X)$ is a random variable taking on values $\pm 1$ with equal probability and so $f(X)$ has variance $1$. On the other hand, if $X\sim \mathcal N(1,\sigma^2)$, then $f(X)$ takes on values $\pm 1$ with probabilities $\Phi\left(-\frac{1}{\sigma}\right)$ and $1-\Phi\left(-\frac{1}{\sigma}\right)$ and its variance is not $1$ even though the variance of $X$ is unchanged. Thus, it is not necessarily possible that $\mathbb V(f(X))$ can be expressed as a fixed function $g(\cdot)$ of $\mathbb V(X)$, that is, $\mathbb V(f(X))\neq g(\mathbb V(X))$.
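A quick numerical illustration of this example in R:
f <- function(x) ifelse(x > 0, 1, -1)
x0 <- rnorm(1e6, mean = 0, sd = 2)   # Var(X) = 4
x1 <- rnorm(1e6, mean = 1, sd = 2)   # also Var(X) = 4
c(var(f(x0)), var(f(x1)))            # about 1 versus 0.85: same Var(X), different Var(f(X))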
Are there any functions $f(\cdot)$ for which $\mathbb V(f(X))$ equals $g(\mathbb V(X))$ for some fixed function $g(\cdot)$? Sure, there are. If
$f(x) = ax+b$, then
$$\mathbb V(f(X)) = \mathbb V(aX+b) = a^2\mathbb V(X) = g(\mathbb V(X))$$ where $g(x) = a^2x$.
What if $X$ is a positive random variable and $f(x) = x^{-1}$ for $x >0$?
Well, one case where the desired relationship might hold is when $X$ is a Gamma random variable with order parameter $3$ or more. See this answer of mine for some details of how this might be made to work. | Variance of a function of a random variable as function of the original variable | Let $X\sim \mathcal N(0,\sigma^2)$ denote a normal random variable and let $f$ be the function $$f(x) = \begin{cases}+1, & x > 0,\\-1, &x \leq 0.\end{cases}$$
Then, $f(X)$ is a random variable taking | Variance of a function of a random variable as function of the original variable
Let $X\sim \mathcal N(0,\sigma^2)$ denote a normal random variable and let $f$ be the function $$f(x) = \begin{cases}+1, & x > 0,\\-1, &x \leq 0.\end{cases}$$
Then, $f(X)$ is a random variable taking on values $\pm 1$ with equal probability and so $f(X)$ has variance $1$. On the other hand, if $X\sim \mathcal N(1,\sigma^2)$, then $f(X)$ takes on values $\pm 1$ with probabilities $\Phi\left(-\frac{1}{\sigma}\right)$ and $1-\Phi\left(-\frac{1}{\sigma}\right)$ and its variance is not $1$ even though the variance of $X$ is unchanged. Thus, it is not necessarily possible that $\mathbb V(f(X))$ can be expressed as a fixed function $g(\cdot)$ of $\mathbb V(X)$, that is, $\mathbb V(f(X))\neq g(\mathbb V(X))$.
Are there any functions $f(\cdot)$ for which $\mathbb V(f(X))$ equals $g(\mathbb V(X))$ for some fixed function $g(\cdot)$? Sure, there are. If
$f(x) = ax+b$, then
$$\mathbb V(f(X)) = \mathbb V(aX+b) = a^2\mathbb V(X) = g(\mathbb V(X))$$ where $g(x) = a^2x$.
What if $X$ is a positive random variable and $f(x) = x^{-1}$ for $x >0$?
Well, one case where the desired relationship might hold is when $X$ is a Gamma random variable with order parameter $3$ or more. See this answer of mine for some details of how this might be made to work. | Variance of a function of a random variable as function of the original variable
Let $X\sim \mathcal N(0,\sigma^2)$ denote a normal random variable and let $f$ be the function $$f(x) = \begin{cases}+1, & x > 0,\\-1, &x \leq 0.\end{cases}$$
Then, $f(X)$ is a random variable taking |
52,480 | Variance of a function of a random variable as function of the original variable | The exact formula for the variance of $Y$ requires use of the function $f$ and the full distribution of $X$ (not just its variance). Nevertheless, while there is no exact formula of the kind you want, you can get approximate formulae using Taylor approximation (also called the "delta method").
To facilitate analysis using the delta method, suppose we let $\mu$, $\sigma^2$, $\gamma$ and $\kappa$ denote the mean, variance, skewness and kurtosis of $X$ respectively, and suppose that $f$ is differentiable up to the required orders for our formulae. The first-order Taylor approximation to the variance is:
$$\begin{aligned}
\mathbb{V}[f(X)]
&\approx f'(\mu)^2 \cdot \sigma^2. \\[6pt]
\end{aligned}$$
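As a quick sanity check of the first-order formula, here is an R comparison for $f(x)=e^x$ with $X\sim\mathcal N(\mu,\sigma^2)$ (the parameter values are arbitrary):
mu <- 1; sigma <- 0.2
first_order <- exp(mu)^2 * sigma^2                     # f'(mu)^2 * sigma^2 with f = exp
exact <- (exp(sigma^2) - 1) * exp(2 * mu + sigma^2)    # lognormal variance, for reference
x <- rnorm(1e6, mu, sigma)
c(first_order = first_order, exact = exact, simulated = var(exp(x)))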
The second-order Taylor approximation (shown in this related question and answer) is:
$$\begin{aligned}
\mathbb{V}[f(X)]
&\approx f'(\mu) (f'(\mu) - \mu f''(\mu)) \sigma^2 \\[6pt]
&\quad - \Big[ f''(\mu)^2 \mu + (f'(\mu) - \mu f''(\mu)) f''(\mu) \Big] \gamma \sigma^3 \\[6pt]
&\quad + \frac{f''(\mu)^2}{4} (\kappa-1) \sigma^4. \\[6pt]
\end{aligned}$$
So, you can get an approximating formula that uses only the mean and variance of $X$ by using the first-order Taylor approximation to the variance. This is not an exact result, and it is not a particularly good approximation. If you are willing to also use the skewness and kurtosis of $X$ then you can use the second-order Taylor approximation to the variance. This is also not an exact result, but it is a reasonable approximation in a wide class of cases. | Variance of a function of a random variable as function of the original variable | The exact formula for the variance of $Y$ requires use of the function $f$ and the full distribution of $X$ (not just its variance). Nevertheless, while there is no exact formula of the kind you want | Variance of a function of a random variable as function of the original variable
The exact formula for the variance of $Y$ requires use of the function $f$ and the full distribution of $X$ (not just its variance). Nevertheless, while there is no exact formula of the kind you want, you can get approximate formulae using Taylor approximation (also called the "delta method").
To facilitate analysis using the delta method, suppose we let $\mu$, $\sigma^2$, $\gamma$ and $\kappa$ denote the mean, variance, skewness and kurtosis of $X$ respectively, and suppose that $f$ is differentiable up to the required orders for our formulae. The first-order Taylor approximation to the variance is:
$$\begin{aligned}
\mathbb{V}[f(X)]
&\approx f'(\mu)^2 \cdot \sigma^2. \\[6pt]
\end{aligned}$$
The second-order Taylor approximation (shown in this related question and answer) is:
$$\begin{aligned}
\mathbb{V}[f(X)]
&\approx f'(\mu) (f'(\mu) - \mu f''(\mu)) \sigma^2 \\[6pt]
&\quad - \Big[ f''(\mu)^2 \mu + (f'(\mu) - \mu f''(\mu)) f''(\mu) \Big] \gamma \sigma^3 \\[6pt]
&\quad + \frac{f''(\mu)^2}{4} (\kappa-1) \sigma^4. \\[6pt]
\end{aligned}$$
So, you can get an approximating formula that uses only the mean and variance of $X$ by using the first-order Taylor approximation to the variance. This is not an exact result, and it is not a particularly good approximation. If you are willing to also use the skewness and kurtosis of $X$ then you can use the second-order Taylor approximation to the variance. This is also not an exact result, but it is a reasonable approximation in a wide class of cases. | Variance of a function of a random variable as function of the original variable
The exact formula for the variance of $Y$ requires use of the function $f$ and the full distribution of $X$ (not just its variance). Nevertheless, while there is no exact formula of the kind you want |
52,481 | Variance of a function of a random variable as function of the original variable | Let's see how far we can go towards characterizing functions $f$ for which such a formula will work. We know it works for linear functions, but are there any others? How about when the random variables $X$ have restricted values?
The setting of the question is one in which $f$ is given but the distribution of the random variable $X$ is unknown and arbitrary--although possibly with restrictions on the values it can assume. Thus, the appropriate sense of such a formula "working" would be
For which functions $f$ is there an associated function $V_{f}:\mathbb{R}^+\to \mathbb{R}^+$ satisfying $$V_{f}(\operatorname{Var}(X))=\operatorname{Var}(f(X))\tag{*}$$ for all random variables $X:\Omega\to \mathcal{X}\subset \mathbb{R}$?
($\Omega$ is some abstract probability space whose details don't matter.)
Let's exploit the basic properties of variance to explore these possibilities. Begin by dismissing the trivial cases where $\mathcal X$ is empty or has just one element, or when $f$ is a constant function, thereby enabling us to assume $\mathcal X$ contains two numbers $x_1$ and $x_2$ for which $f(x_2)\ne f(x_1).$ The random variables $X$ whose values are confined to this subset $\{x_1,x_2\}$ are all (at most) binary. When $\Pr(X=x_2)=p,$ direct calculation establishes $$\operatorname{Var}(X) = p(1-p)(x_2-x_1)^2.$$ The same calculation yields $$\operatorname{Var}(f(X)) = p(1-p)(f(x_2)-f(x_1))^2.$$
Keeping $(x_1,x_2)$ fixed for a moment, write $a=(x_2-x_1)^2 \gt 0$ and $b=(f(x_2)-f(x_1))^2 \gt 0.$ As $p$ varies through the interval $[0,1],$ $p(1-p)a = \lambda$ varies through the interval $[0,a/4].$ In terms of $(*),$ the preceding results tell us
$$V_f(\lambda) = V_f(\operatorname{Var}(X)) = \operatorname{Var}(f(X)) = \frac{b}{a}(\lambda).$$
This exhibits $V_f$ as a non-constant linear function defined on the interval $[0,a/4].$ Consequently, because the $x_i$ are arbitrary, $V_f$ must be a non-constant linear function defined on all values $\lambda$ from $0$ through the supremum of $(x_2-x_1)^2/4$ (which might be infinite).
This solves the problem when $\mathcal X$ has at most two elements. Suppose it has a third element $x\in\mathcal X$ distinct from $x_1$ and $x_2.$ Let $\mu$ (obviously non-negative) be the square root of the slope of $V_f$ as previously established. Thus
$$\mu^2 = \frac{(f(x_2)-f(x))^2}{(x_2-x)^2} = \frac{(f(x_1)-f(x))^2}{(x_1-x)^2}.$$
Clearing the denominators and taking square roots produces
$$f(x_i) - f(x) = \pm \mu(x_i-x),\ i=1,2.\tag{**}$$
But it's also the case that $f(x_2) - f(x_1)=\pm \mu (x_2-x_1).$ You can (readily) check that this contradicts $(**)$ unless all the signs of all the square roots are the same. Consequently,
$f$ must be an affine function of the form $f(x) = \nu + \mu x$ for some constant numbers $\nu$ and $\mu.$
This conclusion subsumes the earlier cases where the cardinality of $\mathcal X$ is $0,$ $1,$ or $2.$
Of course--to close this logical loop--when $f$ has this form, $V_f(\lambda) = \mu^2\lambda$ is the rule for computing the variance of $f(X)$ in terms of the variance of $X$ for any random variable $X.$ | Variance of a function of a random variable as function of the original variable | Let's see how far we can towards characterizing functions $f$ where such a formula will work. We know it works for linear functions, but are there any others? How about when the random variables $X$ | Variance of a function of a random variable as function of the original variable
Let's see how far we can get towards characterizing functions $f$ where such a formula will work. We know it works for linear functions, but are there any others? How about when the random variables $X$ have restricted values?
The setting of the question is one in which $f$ is given but the distribution of the random variable $X$ is unknown and arbitrary--although possibly with restrictions on the values it can assume. Thus, the appropriate sense of such a formula "working" would be
For which functions $f$ is there an associated function $V_{f}:\mathbb{R}^+\to \mathbb{R}^+$ satisfying $$V_{f}(\operatorname{Var}(X))=\operatorname{Var}(f(X))\tag{*}$$ for all random variables $X:\Omega\to \mathcal{X}\subset \mathbb{R}$?
($\Omega$ is some abstract probability space whose details don't matter.)
Let's exploit the basic properties of variance to explore these possibilities. Begin by dismissing the trivial cases where $\mathcal X$ is empty or has just one element, or when $f$ is a constant function, thereby enabling us to assume $\mathcal X$ contains two numbers $x_1$ and $x_2$ for which $f(x_2)\ne f(x_1).$ The random variables $X$ whose values are confined to this subset $\{x_1,x_2\}$ are all (at most) binary. When $\Pr(X=x_2)=p,$ direct calculation establishes $$\operatorname{Var}(X) = p(1-p)(x_2-x_1)^2.$$ The same calculation yields $$\operatorname{Var}(f(X)) = p(1-p)(f(x_2)-f(x_1))^2.$$
Keeping $(x_1,x_2)$ fixed for a moment, write $a=(x_2-x_1)^2 \gt 0$ and $b=(f(x_2)-f(x_1))^2 \gt 0.$ As $p$ varies through the interval $[0,1],$ $p(1-p)a = \lambda$ varies through the interval $[0,a/4].$ In terms of $(*),$ the preceding results tell us
$$V_f(\lambda) = V_f(\operatorname{Var}(X)) = \operatorname{Var}(f(X)) = \frac{b}{a}(\lambda).$$
This exhibits $V_f$ as a non-constant linear function defined on the interval $[0,a/4].$ Consequently, because the $x_i$ are arbitrary, $V_f$ must be a non-constant linear function defined on all values $\lambda$ from $0$ through the supremum of $(x_2-x_1)^2/4$ (which might be infinite).
This solves the problem when $\mathcal X$ has at most two elements. Suppose it has a third element $x\in\mathcal X$ distinct from $x_1$ and $x_2.$ Let $\mu$ (obviously non-negative) be the square root of the slope of $V_f$ as previously established. Thus
$$\mu^2 = \frac{(f(x_2)-f(x))^2}{(x_2-x)^2} = \frac{(f(x_1)-f(x))^2}{(x_1-x)^2}.$$
Clearing the denominators and taking square roots produces
$$f(x_i) - f(x) = \pm \mu(x_i-x),\ i=1,2.\tag{**}$$
But it's also the case that $f(x_2) - f(x_1)=\pm \mu (x_2-x_1).$ You can (readily) check that this contradicts $(**)$ unless all the signs of all the square roots are the same. Consequently,
$f$ must be an affine function of the form $f(x) = \nu + \mu x$ for some constant numbers $\nu$ and $\mu.$
This conclusion subsumes the earlier cases where the cardinality of $\mathcal X$ is $0,$ $1,$ or $2.$
Of course--to close this logical loop--when $f$ has this form, $V_f(\lambda) = \mu^2\lambda$ is the rule for computing the variance of $f(X)$ in terms of the variance of $X$ for any random variable $X.$ | Variance of a function of a random variable as function of the original variable
Let's see how far we can towards characterizing functions $f$ where such a formula will work. We know it works for linear functions, but are there any others? How about when the random variables $X$ |
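A small simulation consistent with the conclusion above: for an affine $f$, $\operatorname{Var}(f(X))$ depends on $X$ only through $\operatorname{Var}(X)$, whereas for a non-affine $f$ two distributions with the same variance can give different values (a minimal R sketch; the particular functions and distributions are my own choices):
set.seed(1)
n  <- 1e6
x1 <- rnorm(n)                             # N(0,1): variance 1
x2 <- sample(c(-1, 1), n, replace = TRUE)  # fair +/-1 coin: variance 1

f.affine <- function(x) 3 - 2 * x          # affine, so Var(f(X)) = 2^2 * Var(X)
f.square <- function(x) x^2                # not affine

c(affine.normal = var(f.affine(x1)), affine.binary = var(f.affine(x2)))
# both are close to 4, as V_f(lambda) = mu^2 * lambda predicts
c(square.normal = var(f.square(x1)), square.binary = var(f.square(x2)))
# about 2 versus 0: same Var(X), different Var(f(X)), so no single V_f can exist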
52,482 | Is it still necessary to correct for multiple comparisons/testing if the sample sizes are large? | Think of buying hundreds of fair dice. You do not know that they are fair, though, and hence you test whether each has an expected value of 3.5 points by throwing each many times (1000+). One of them must come up as "best", and if you do not account for multiple testing, almost certainly statistically significantly so.
Recall that the probability that a true null is rejected should (in practice, that may not be exactly true due to things like asymptotic approximations and finite-sample size distortions) not depend on sample size!
You might then conclude, wrongly (or at least not rightly, in that it is no better, but also no worse than the others), that this is the one you should bring to your next board game.
As to practical significance, this will indeed provide a clue, in that the "winning" one will likely have won with an average of points barely better than 3.5 when you tossed often.
Here is an illustration:
set.seed(1)
dice <- 100
throws <- 1000
tests <- apply(replicate(dice, sample(1:6, throws,
replace=T)), 2,
function(x) t.test(x, alternative="greater",
mu=3.5))
# right-tailed test, to look for "better" dice (assuming a
# game where many points are good, nothing hinges on this)
plot(1:dice, sort(unlist(lapply(1:dice, function(i)
tests[[i]]$p.value))))
abline(h=0.05, col="blue") # significance threshold not
# accounting for multiple testing
abline(h=0.05/dice, col="red") # Bonferroni threshold
max(unlist(lapply(1:dice, function(i) tests[[i]]$estimate)))
# the sample average of the "winner"
So we see a few "significantly" outperforming dice at level 0.05, but none, in this simulation run, after Bonferroni correction. The "winning" one (last line of the code) however has an average of 3.63, which is, in practice, not too far away from the true expectation 3.5.
We can also run a little Monte Carlo exercise - i.e., the above exercise many times so as to average out any "uncommon" samples that might arise from set.seed(1). We can then also illustrate the effect of varying the number of throws.
# Monte Carlo, with several runs of the experiment:
reps <- 500
mc.func.throws <- function(throws){
tests <- apply(replicate(dice, sample(1:6, throws,
replace=T)), 2,
function(x) t.test(x, alternative="greater",
mu=3.5))
winning.average <- max(unlist(lapply(1:dice, function(i)
tests[[i]]$estimate))) # the sample average of the "winner"
significant.pvalues <- mean(unlist(lapply(1:dice,
function(i) tests[[i]]$p.value)) < 0.05)
return(list(winning.average, significant.pvalues))
}
diff.throws <- function(throws){
mc.study <- replicate(reps, mc.func.throws(throws))
average.winning.average <- mean(unlist(mc.study[1,]))
mean.significant.results <- mean(unlist(mc.study[2,]))
return(list(average.winning.average,
mean.significant.results))
}
throws <- c(10, 50, 100, 500, 1000, 10000)
mc.throws <- lapply(throws, diff.throws)  # stored so the output below can refer to it
Result:
> unlist(lapply(mc.throws, `[[`, 1))
[1] 4.809200 4.108400 3.927120 3.692292 3.635224 3.542961
> unlist(lapply(mc.throws, `[[`, 2))
[1] 0.04992 0.05134 0.05012 0.04964 0.05006 0.05040
Hence, as predicted, the proportion of statistically significant results is independent of the number of throws (all proportions of $p$-values less than 0.05 are close to 0.05), while the practical significance - i.e., the distance of the average number of points of the "best" one to 3.5 - decreases in the number of throws. | Is it still necessary to correct for multiple comparisons/testing if the sample sizes are large? | Think buying hundreds of fair dice. You do not know they are, though, and hence test if each has an expected value of 3.5 points, via throwing each many times (1000+). One of them must come up as "bes | Is it still necessary to correct for multiple comparisons/testing if the sample sizes are large?
Think of buying hundreds of fair dice. You do not know that they are fair, though, and hence you test whether each has an expected value of 3.5 points by throwing each many times (1000+). One of them must come up as "best", and if you do not account for multiple testing, almost certainly statistically significantly so.
Recall that the probability that a true null is rejected should (in practice, that may not be exactly true due to things like asymptotic approximations and finite-sample size distortions) not depend on sample size!
You might then conclude, wrongly (or at least not rightly, in that it is no better, but also no worse than the others), that this is the one you should bring to your next board game.
As to practical significance, this will indeed provide a clue, in that the "winning" one will likely have won with an average of points barely better than 3.5 when you tossed often.
Here is an illustration:
set.seed(1)
dice <- 100
throws <- 1000
tests <- apply(replicate(dice, sample(1:6, throws,
replace=T)), 2,
function(x) t.test(x, alternative="greater",
mu=3.5))
# right-tailed test, to look for "better" dice (assuming a
# game where many points are good, nothing hinges on this)
plot(1:dice, sort(unlist(lapply(1:dice, function(i)
tests[[i]]$p.value))))
abline(h=0.05, col="blue") # significance threshold not
# accounting for multiple testing
abline(h=0.05/dice, col="red") # Bonferroni threshold
max(unlist(lapply(1:dice, function(i) tests[[i]]$estimate)))
# the sample average of the "winner"
So we see a few "significantly" outperforming dice at level 0.05, but none, in this simulation run, after Bonferroni correction. The "winning" one (last line of the code) however has an average of 3.63, which is, in practice, not too far away from the true expectation 3.5.
We can also run a little Monte Carlo exercise - i.e., the above exercise many times so as to average out any "uncommon" samples that might arise from set.seed(1). We can then also illustrate the effect of varying the number of throws.
# Monte Carlo, with several runs of the experiment:
reps <- 500
mc.func.throws <- function(throws){
tests <- apply(replicate(dice, sample(1:6, throws,
replace=T)), 2,
function(x) t.test(x, alternative="greater",
mu=3.5))
winning.average <- max(unlist(lapply(1:dice, function(i)
tests[[i]]$estimate))) # the sample average of the "winner"
significant.pvalues <- mean(unlist(lapply(1:dice,
function(i) tests[[i]]$p.value)) < 0.05)
return(list(winning.average, significant.pvalues))
}
diff.throws <- function(throws){
mc.study <- replicate(reps, mc.func.throws(throws))
average.winning.average <- mean(unlist(mc.study[1,]))
mean.significant.results <- mean(unlist(mc.study[2,]))
return(list(average.winning.average,
mean.significant.results))
}
throws <- c(10, 50, 100, 500, 1000, 10000)
mc.throws <- lapply(throws, diff.throws)  # stored so the output below can refer to it
Result:
> unlist(lapply(mc.throws, `[[`, 1))
[1] 4.809200 4.108400 3.927120 3.692292 3.635224 3.542961
> unlist(lapply(mc.throws, `[[`, 2))
[1] 0.04992 0.05134 0.05012 0.04964 0.05006 0.05040
Hence, as predicted, the proportion of statistically significant results is independent of the number of throws (all proportions of $p$-values less than 0.05 are close to 0.05), while the practical significance - i.e., the distance of the average number of points of the "best" one to 3.5 - decreases in the number of throws. | Is it still necessary to correct for multiple comparisons/testing if the sample sizes are large?
Think buying hundreds of fair dice. You do not know they are, though, and hence test if each has an expected value of 3.5 points, via throwing each many times (1000+). One of them must come up as "bes |
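As a side note (not part of the original answer), the Bonferroni line in the plot can equivalently be applied by adjusting the p-values with R's built-in p.adjust, which also makes it easy to try the slightly more powerful Holm correction; this assumes the objects tests and dice from the illustration above are still in the workspace:
pvals <- unlist(lapply(1:dice, function(i) tests[[i]]$p.value))
sum(pvals < 0.05)                                  # unadjusted "discoveries"
sum(p.adjust(pvals, method = "bonferroni") < 0.05) # same decisions as the red threshold line
sum(p.adjust(pvals, method = "holm") < 0.05)       # also controls the familywise error rate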
52,483 | Is it still necessary to correct for multiple comparisons/testing if the sample sizes are large? | An easy way to make sense of this is in terms of effect sizes, Type 1 errors, Type 2 errors, and Power.
Let's say that you're looking at correlations, and you have $N$ data points.
Your effect size is the correlation coefficient, $r$.
Your Type 1 error rate, $\alpha$, is the probability of concluding that there is an effect, $r \neq 0$, when there really is no effect, $r = 0$. We usually use $\alpha = .05$, which means we conclude there is an effect (reject the null hypothesis) when the p-value is $p< .05$. When you conduct multiple tests, you usually adjust for multiple comparisons to keep the Type 1 error rate at $0.05$.
Your Type 2 error rate, $\beta$, is the probability of failing to conclude that there is an effect when there really is one. Your Power is just the complement of this, $1 - \beta$.
Now, what happens with different sample sizes?
Here's a table of critical values, from real-statistic.com:
Focus on the third column, where $\alpha = 0.05$.
The values in each row show, for each sample size (these rows actually show degrees of freedom, which is just $N - 2$), how strong the correlation $r$ needs to be in order to be significant, $p < .05$:
When $N = 12$ (df $= 10$), $r \geq 0.57$ is significant.
When $N = 102$ (df $= 100$), $r \geq 0.19$ is significant.
When $N = 1002$ (df $= 1000$), $r \geq 0.06$ is significant.
Crucially, the Type 1 error rate is the same in each case, $\alpha = 0.05$.
Increasing the sample size has instead made it possible to detect smaller effects, which reduces the Type 2 error rate/boosts power (you're less likely to miss a small but non-zero effect).
Finally, adjusting for multiple comparisons just means using a more conservative threshold for how big $r$ needs to be before you decide there's an effect. In other words, to reduce your Type 1 error rate, you have to increase your Type 2 error rate. Luckily, collecting lots of data means your Type 2 error rate is low, so you can better afford to do this.
So, putting this together:
More data reduces Type 2 errors, but leaves Type 1 errors unchanged at $\alpha = 0.05$.
Doing multiple comparisons increases Type 1 errors.
To counteract this, you adjust for multiple comparisons by making your decisions more conservative. This brings the Type 1 error rate back down, at the expense of increasing the Type 2 error rate. | Is it still necessary to correct for multiple comparisons/testing if the sample sizes are large? | A easy way to make sense of this is in terms of effect sizes, Type 1 errors, Type 2 errors, and Power.
Let's say that you're looking at correlations, and you have $N$ data points.
Your effect size is | Is it still necessary to correct for multiple comparisons/testing if the sample sizes are large?
An easy way to make sense of this is in terms of effect sizes, Type 1 errors, Type 2 errors, and Power.
Let's say that you're looking at correlations, and you have $N$ data points.
Your effect size is the correlation coefficient, $r$.
Your Type 1 error rate, $\alpha$, is the probability of concluding that there is an effect, $r \neq 0$, when there really is no effect, $r = 0$. We usually use $\alpha = .05$, which means we conclude there is an effect (reject the null hypothesis) when the p-value is $p< .05$. When you conduct multiple tests, you usually adjust for multiple comparisons to keep the Type 1 error rate at $0.05$.
Your Type 2 error rate, $\beta$, is the probability of failing to conclude that there is an effect when there really is one. Your Power is just the complement of this, $1 - \beta$.
Now, what happens with different sample sizes?
Here's a table of critical values, from real-statistic.com:
Focus on the third column, where $\alpha = 0.05$.
The values in each row show, for each sample size (these rows actually show degrees of freedom, which is just $N - 2$), how strong the correlation $r$ needs to be in order to be significant, $p < .05$:
When $N = 12$ (df $= 10$), $r \geq 0.57$ is significant.
When $N = 102$ (df $= 100$), $r \geq 0.19$ is significant.
When $N = 1002$ (df $= 1000$), $r \geq 0.06$ is significant.
Crucially, the Type 1 error rate is the same in each case, $\alpha = 0.05$.
Increasing the sample size has instead made it possible to detect smaller effects, which reduces the Type 2 error rate/boosts power (you're less likely to miss a small but non-zero effect).
Finally, adjusting for multiple comparisons just means using a more conservative threshold for how big $r$ needs to be before you decide there's an effect. In other words, to reduce your Type 1 error rate, you have to increase your Type 2 error rate. Luckily, collecting lots of data means your Type 2 error rate is low, so you can better afford to do this.
So, putting this together:
More data reduces Type 2 errors, but leaves Type 1 errors unchanged at $\alpha = 0.05$.
Doing multiple comparisons increases Type 1 errors.
To counteract this, you adjust for multiple comparisons by making your decisions more conservative. This brings the Type 1 error rate back down, at the expense of increasing the Type 2 error rate. | Is it still necessary to correct for multiple comparisons/testing if the sample sizes are large?
A easy way to make sense of this is in terms of effect sizes, Type 1 errors, Type 2 errors, and Power.
Let's say that you're looking at correlations, and you have $N$ data points.
Your effect size is |
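The critical values quoted above can be reproduced directly from the $t$-distribution, since the test of $H_0\colon \rho = 0$ uses $t = r\sqrt{df}/\sqrt{1-r^2}$ with $df = N - 2$ (a short R sketch; I am assuming the table refers to a two-sided test at $\alpha = 0.05$):
r.crit <- function(df, alpha = 0.05) {
  t <- qt(1 - alpha / 2, df)   # two-sided critical t value
  t / sqrt(df + t^2)           # solve t = r * sqrt(df) / sqrt(1 - r^2) for r
}
round(sapply(c(10, 100, 1000), r.crit), 3)
# 0.576 0.195 0.062, in line with the 0.57 / 0.19 / 0.06 cited above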
52,484 | Is it still necessary to correct for multiple comparisons/testing if the sample sizes are large? | Their suggestion was that we don't have to worry about [multiple testing], because the sample size of each test is going to be big enough (we're looking at minimum of n=100 but frequently n=1000+).
Here's a scenario where your colleagues would be right in practice even if wrong in theory.
Your many tests fall neatly into two categories: $H_0$ is correct (or very close to it) or $H_0$ is extremely inappropriate. Also, you are not too interested in which tests reject $H_0$ but rather in the overall proportion of accept/reject, or you are not too worried about the occasional false positive. In such a case adjusting the p-values or not would make little difference (I still don't see why you shouldn't do it though...).
For example, you have a bag of coins, some are fair and some have the same face on both sides. If you flip each coin a hundred times or so, you are nearly certain to spot the faulty coins by looking at the p-value, whether you adjust it or not. If the "cost" of a false negative is much higher than that of a false positive, you may just work with nominal p-values.
(I'm not saying your colleagues are right of course, I'm just imagining a scenario that would give them some grounds). | Is it still necessary to correct for multiple comparisons/testing if the sample sizes are large? | Their suggestion was that we don't have to worry about [multiple testing], because the sample size of each test is going to be big enough (we're looking at minimum of n=100 but frequently n=1000+).
H | Is it still necessary to correct for multiple comparisons/testing if the sample sizes are large?
Their suggestion was that we don't have to worry about [multiple testing], because the sample size of each test is going to be big enough (we're looking at minimum of n=100 but frequently n=1000+).
Here's a scenario where your colleagues would be right in practice even if wrong in theory.
Your many tests fall neatly into two categories: $H_0$ is correct (or very close to it) or $H_0$ is extremely inappropriate. Also, you are not too interested in which tests reject $H_0$ but rather in the overall proportion of accept/reject, or you are not too worried about the occasional false positive. In such a case adjusting the p-values or not would make little difference (I still don't see why you shouldn't do it though...).
For example, you have a bag of coins, some are fair and some have the same face on both sides. If you flip each coin a hundred times or so, you are nearly certain to spot the faulty coins by looking at the p-value, whether you adjust it or not. If the "cost" of a false negative is much higher than that of a false positive, you may just work with nominal p-values.
(I'm not saying your colleagues are right of course, I'm just imagining a scenario that would give them some grounds). | Is it still necessary to correct for multiple comparisons/testing if the sample sizes are large?
Their suggestion was that we don't have to worry about [multiple testing], because the sample size of each test is going to be big enough (we're looking at minimum of n=100 but frequently n=1000+).
H |
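A minimal R sketch of the coin-bag scenario (the numbers of coins and flips are my own choices): with around a hundred flips per coin, the two-sided binomial test flags the double-sided coins whether or not the p-values are adjusted, although the unadjusted threshold may add a few false positives among the fair coins.
set.seed(1)
n.fair <- 95; n.bad <- 5; flips <- 100
heads <- c(rbinom(n.fair, flips, 0.5),   # fair coins
           rep(flips, n.bad))            # coins with the same face on both sides
pvals <- sapply(heads, function(h) binom.test(h, flips, 0.5)$p.value)
which(pvals < 0.05)                           # nominal threshold
which(p.adjust(pvals, "bonferroni") < 0.05)   # adjusted threshold
# the five faulty coins (indices 96-100) are caught in both cases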
52,485 | Likelihood vs quasi-likelihood vs pseudo-likelihood and restricted likelihood | I think you may be conflating likelihood with maximum likelihood methods. I'll try to separate them as best I can below.
Likelihood
The likelihood function is one which relates the probability of an observation with a particular parameterization of a specific distributional family. It is not necessarily the same thing as the probability of the observation: in the continuous case, the likelihood is still well defined at a point, but the probability is the integral of the pdf over a span and is 0 at every single point. Nevertheless, given both a distributional form (e.g. exponential vs. gamma) and a specific parameterization (a $\beta$ for the exponential or an $\alpha, \beta$ for the gamma) it is a measure of the relative probability of one observation versus another.
Maximum Likelihood Estimation
Knowing that, if one has a set of observations and chooses a distributional form, then a logical approach to finding estimates of the parameters of that specific distributional form is to find parameters for which the likelihood of the set of observations is greatest. This is the principle of maximum likelihood estimation (MLE). The actual optimization is almost always performed on the negative log-likelihood for performance and over/underflow reasons, but the principle remains the same. The parameter(s) "most likely" to have generated this data are the ones for which the combined likelihood of all the observations (product of actual or sum of the logs) is the largest.
Quasi-Likelihood Function
In MLE, one needs to select a distributional form first and then solve for the parameters. When dealing with count data in particular, often the data exhibits properties that don't allow it to be cleanly modeled by a known distribution with a closed form. For example, the Poisson distribution requires that the variance equal the mean. The negative binomial distribution requires (in most parameterizations at least) that the variance be a function of the square of the mean. What happens when the variance is a linear function of the mean? There is no simple discrete distribution with this property, so how can we find a likelihood without a distributional form? Robert Wedderburn, one of the developers of the GLM framework and the use of IWLS for MLE in that context, also proved that one can use a function similar to a true likelihood—a quasi-likelihood function—despite it not being a "true" distribution so long as the mean-variance relationship is well defined (among other requirements). This allows the existing mechanics of IWLS/MLE to be used to create models in the presence of overdispersion which would otherwise not fit cleanly into the forms of known distributions.
Restricted Maximum Likelihood (REML)
MLE does not always return an unbiased estimator. For example, with the classic normal distribution, the MLE of the mean is the sample mean, which is unbiased, but the MLE of the variance is the sample variance with divisor $n$, which we know is biased. When using the GLM framework (for which the IWLS approach is actually MLE in disguise) to solve for the variance, often an unbiased estimator is desired. Applying MLE not to the raw data but to a transformation of the raw data can result in an unbiased estimate. This underlies the use of restricted maximum likelihood. As the transformed data does not necessarily encompass all the data, the method is called "restricted".
Pseudolikelihoods (2022-09-12)
The "true" likelihood of a distribution may involve very complicated normalizing factors, especially in multivariate cases. This may make using true maximum likelihood estimation intractable. However, a simplified function of the observations—or a subset of the observations—may be mathematically tractable and allow for estimating a "good" optimum even though it may not be the "best". So while these functions are not the true likelihood, we may treat them as such for the purpose of fitting the distributions. For more information, please see Besag (1975) or Arnold & Strauss (1991). | Likelihood vs quasi-likelihood vs pseudo-likelihood and restricted likelihood | I think you may be conflating likelihood with maximum likelihood methods. I'll try to separate them as best I can below.
Likelihood
The likelihood function is one which relates the probability of an o | Likelihood vs quasi-likelihood vs pseudo-likelihood and restricted likelihood
I think you may be conflating likelihood with maximum likelihood methods. I'll try to separate them as best I can below.
Likelihood
The likelihood function is one which relates the probability of an observation with a particular parameterization of a specific distributional family. It is not necessarily the same thing as the probability of the observation: in the continuous case, the likelihood is still well defined at a point, but the probability is the integral of the pdf over a span and is 0 at every single point. Nevertheless, given both a distributional form (e.g. exponential vs. gamma) and a specific parameterization (a $\beta$ for the exponential or an $\alpha, \beta$ for the gamma) it is a measure of the relative probability of one observation versus another.
Maximum Likelihood Estimation
Knowing that, if one has a set of observations and chooses a distributional form, then a logical approach to finding estimates of the parameters of that specific distributional form is to find parameters for which the likelihood of the set of observations is greatest. This is the principle of maximum likelihood estimation (MLE). The actual optimization is almost always performed on the negative log-likelihood for performance and over/underflow reasons, but the principle remains the same. The parameter(s) "most likely" to have generated this data are the ones for which the combined likelihood of all the observations (product of actual or sum of the logs) is the largest.
Quasi-Likelihood Function
In MLE, one needs to select a distributional form first and then solve for the parameters. When dealing with count data in particular, often the data exhibits properties that don't allow it to be cleanly modeled by a known distribution with a closed form. For example, the Poisson distribution requires that the variance equal the mean. The negative binomial distribution requires (in most parameterizations at least) that the variance be a function of the square of the mean. What happens when the variance is a linear function of the mean? There is no simple discrete distribution with this property, so how can we find a likelihood without a distributional form? Robert Wedderburn, one of the developers of the GLM framework and the use of IWLS for MLE in that context, also proved that one can use a function similar to a true likelihood—a quasi-likelihood function—despite it not being a "true" distribution so long as the mean-variance relationship is well defined (among other requirements). This allows the existing mechanics of IWLS/MLE to be used to create models in the presence of overdispersion which would otherwise not fit cleanly into the forms of known distributions.
Restricted Maximum Likelihood (REML)
MLE does not always return an unbiased estimator. For example, with the classic normal distribution, the MLE of the mean is the sample mean, which is unbiased, but the MLE of the variance is the sample variance with divisor $n$, which we know is biased. When using the GLM framework (for which the IWLS approach is actually MLE in disguise) to solve for the variance, often an unbiased estimator is desired. Applying MLE not to the raw data but to a transformation of the raw data can result in an unbiased estimate. This underlies the use of restricted maximum likelihood. As the transformed data does not necessarily encompass all the data, the method is called "restricted".
Pseudolikelihoods (2022-09-12)
The "true" likelihood of a distribution may involve very complicated normalizing factors, especially in multivariate cases. This may make using true maximum likelihood estimation intractable. However, a simplified function of the observations—or a subset of the observations—may be mathematically tractable and allow for estimating a "good" optimum even though it may not be the "best". So while these functions are not the true likelihood, we may treat them as such for the purpose of fitting the distributions. For more information, please see Besag (1975) or Arnold & Strauss (1991). | Likelihood vs quasi-likelihood vs pseudo-likelihood and restricted likelihood
I think you may be conflating likelihood with maximum likelihood methods. I'll try to separate them as best I can below.
Likelihood
The likelihood function is one which relates the probability of an o |
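A small R illustration of the quasi-likelihood idea described above, using simulated overdispersed counts (the data-generating choices are mine): family = quasipoisson() keeps the Poisson mean structure but estimates a dispersion parameter from the mean-variance relationship instead of committing to a full distribution.
set.seed(1)
n  <- 500
x  <- runif(n)
mu <- exp(0.5 + 1.5 * x)
y  <- rnbinom(n, mu = mu, size = 2)   # counts whose variance exceeds the mean

fit.pois  <- glm(y ~ x, family = poisson())
fit.quasi <- glm(y ~ x, family = quasipoisson())

summary(fit.quasi)$dispersion                  # well above 1: overdispersion
coef(summary(fit.pois))["x", "Std. Error"]     # too small under the strict Poisson assumption
coef(summary(fit.quasi))["x", "Std. Error"]    # inflated by sqrt(dispersion)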
52,486 | Inverse Predictive Posterior | Denoting all the conditioning explicitly (which you should make a habit of doing in Bayesian analysis), your nonlinear regression model is actually specifying:
$$p(y_i | x_i, \theta, \sigma) = \text{N}(y_i | f_\theta(x_i), \sigma^2).$$
Now, if you want to make a Bayesian inference about any of the values in the conditional part, you are going to need to specify a prior for them. Fundamentally this is no different from any situation in Bayesian analysis; if you want a posterior for the regressors then your model must specify an appropriate prior. I'm going to assume that you will want to model the regressors using a parametric model with an additional parameter vector $\lambda$. In this case, it is useful to decompose the prior for these three conditioning variables in a hierarchical manner as:
$$\begin{align}
\text{Prior for model parameters} & & & \pi(\theta, \sigma, \lambda) \\[6pt]
\text{Sampling distribution for regressors} & & & \phi(x_i | \theta, \sigma, \lambda)
\end{align}$$
I'm also going to assume that the regressors are IID conditional on the model parameters, so that $p(\mathbf{x}| \theta, \sigma, \lambda) = \prod \phi(x_i | \theta, \sigma, \lambda)$. If you specify this sampling distribution for the regressors then you will get the posterior distribution:
$$\begin{align}
\phi(\mathbf{x} | \mathbf{y})
&\overset{\mathbf{x}}{\propto} p(\mathbf{x}, \mathbf{y}, \theta, \sigma, \lambda) \\[12pt]
&= \pi(\theta, \sigma, \lambda) \prod_{i=1}^n p(y_i | x_i, \theta, \sigma) \cdot \phi(x_i | \theta, \sigma, \lambda) \\[6pt]
&= \pi(\theta, \sigma, \lambda) \prod_{i=1}^n \text{N}(y_i | f_\theta(x_i), \sigma^2) \cdot \phi(x_i | \theta, \sigma, \lambda). \\[6pt]
\end{align}$$
Computing the last line of this formula will give you the posterior kernel, and then you can get the posterior distribution by computing the constant for the density directly, or by using MCMC simulation. | Inverse Predictive Posterior | Denoting all the conditioning explicitly (which you should make a habit of doing in Bayesian analysis), your nonlinear regression model is actually specifying:
$$p(y_i | x_i, \theta, \sigma) = \text{N | Inverse Predictive Posterior
Denoting all the conditioning explicitly (which you should make a habit of doing in Bayesian analysis), your nonlinear regression model is actually specifying:
$$p(y_i | x_i, \theta, \sigma) = \text{N}(y_i | f_\theta(x_i), \sigma^2).$$
Now, if you want to make a Bayesian inference about any of the values in the conditional part, you are going to need to specify a prior for them. Fundamentally this is no different from any situation in Bayesian analysis; if you want a posterior for the regressors then your model must specify an appropriate prior. I'm going to assume that you will want to model the regressors using a parametric model with an additional parameter vector $\lambda$. In this case, it is useful to decompose the prior for these three conditioning variables in a hierarchical manner as:
$$\begin{align}
\text{Prior for model parameters} & & & \pi(\theta, \sigma, \lambda) \\[6pt]
\text{Sampling distribution for regressors} & & & \phi(x_i | \theta, \sigma, \lambda)
\end{align}$$
I'm also going to assume that the regressors are IID conditional on the model parameters, so that $p(\mathbf{x}| \theta, \sigma, \lambda) = \prod \phi(x_i | \theta, \sigma, \lambda)$. If you specify this sampling distribution for the regressors then you will get the posterior distribution:
$$\begin{align}
\phi(\mathbf{x} | \mathbf{y})
&\overset{\mathbf{x}}{\propto} p(\mathbf{x}, \mathbf{y}, \theta, \sigma, \lambda) \\[12pt]
&= \pi(\theta, \sigma, \lambda) \prod_{i=1}^n p(y_i | x_i, \theta, \sigma) \cdot \phi(x_i | \theta, \sigma, \lambda) \\[6pt]
&= \pi(\theta, \sigma, \lambda) \prod_{i=1}^n \text{N}(y_i | f_\theta(x_i), \sigma^2) \cdot \phi(x_i | \theta, \sigma, \lambda). \\[6pt]
\end{align}$$
Computing the last line of this formula will give you the posterior kernel, and then you can get the posterior distribution by computing the constant for the density directly, or by using MCMC simulation. | Inverse Predictive Posterior
Denoting all the conditioning explicitly (which you should make a habit of doing in Bayesian analysis), your nonlinear regression model is actually specifying:
$$p(y_i | x_i, \theta, \sigma) = \text{N |
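A minimal R sketch of evaluating the log posterior kernel in the last display; every concrete choice here (the form of $f_\theta$, the priors in $\pi$, and the regressor model $\phi$) is a hypothetical stand-in, since the answer deliberately leaves them unspecified:
f.theta <- function(x, theta) theta[1] * exp(theta[2] * x)  # toy nonlinear regression function

log.kernel <- function(x, y, theta, sigma, lambda) {
  log.prior <- sum(dnorm(theta, 0, 10, log = TRUE)) +          # pi(theta, sigma, lambda), chosen loosely
               dexp(sigma, 1, log = TRUE) +
               sum(dnorm(lambda, 0, 10, log = TRUE))
  log.lik   <- sum(dnorm(y, f.theta(x, theta), sigma, log = TRUE))   # N(y_i | f_theta(x_i), sigma^2)
  log.phi   <- sum(dnorm(x, lambda[1], exp(lambda[2]), log = TRUE))  # phi(x_i | lambda): normal regressors
  log.prior + log.lik + log.phi
}
# example call, purely to show the signature; an MCMC sampler would target this kernel
log.kernel(x = c(0.1, 0.5), y = c(1.0, 1.6), theta = c(1, 0.5), sigma = 0.3, lambda = c(0.3, -1))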
52,487 | Inverse Predictive Posterior | Ok, I've edited my response taking into account feedback from the OP. Below is a DAG that captures the assumptions provided. So for example, $x^{(\mathrm{new})}$ need not be equal in distribution to $x$ as required, and $y^{(\mathrm{new})}$ is conditionally independent of the training data $\mathbf{Y}$ given $\theta$
We require draws from the distribution $p(x^{(\mathrm{new})}|y^{(\mathrm{new})}, \mathbf{Y})$. We have that
\begin{align}
p(x^{(\mathrm{new})}|y^{(\mathrm{new})}, \mathbf{Y}) \propto p(y^{(\mathrm{new})}|x^{(\mathrm{new})}, \mathbf{Y}) p(x^{(\mathrm{new})}|\mathbf{Y})
\end{align}
where we've dropped terms that do not depend on $x^{(\mathrm{new})}$. Focusing on one term at a time:
\begin{align}
p(y^{(\mathrm{new})}|x^{(\mathrm{new})}, \mathbf{Y}) &= \int_\theta p(y^{(\mathrm{new})}|x^{(\mathrm{new})}, \mathbf{Y}, \theta) p(\theta|x^{(\mathrm{new})}, \mathbf{Y})d\theta\\
&= \int_\theta p(y^{(\mathrm{new})}|x^{(\mathrm{new})},\theta) p(\theta|\mathbf{Y})d\theta\\
&= E_{\theta|\mathbf{Y}} \left[p(y^{(\mathrm{new})}|x^{(\mathrm{new})},\theta) \right]
\end{align}
The implied conditional independences from the DAG justify dropping the terms in the second line. Next
\begin{align}
p(x^{(\mathrm{new})}|\mathbf{Y}) \propto p(x^{(\mathrm{new})},\mathbf{Y})\propto p(x^{(\mathrm{new})})
\end{align}
again using the DAG as justification that $x^{(\mathrm{new})}$ and $\mathbf{Y}$ are independent.
Thus,
\begin{align}
p(x^{(\mathrm{new})}|y^{(\mathrm{new})}, \mathbf{Y}) \propto p(x^{(\mathrm{new})}) E_{\theta|\mathbf{Y}} \left[p(y^{(\mathrm{new})}|x^{(\mathrm{new})},\theta) \right]
\end{align}
Then use an importance sampling approach to draw from this distribution. Specifically, first create $M$ draws from the prior distribution for $x$, as in $x^{(\mathrm{new})}_\ell \sim p(x^{(\mathrm{new})})$, $\ell=1,\ldots,M$, where $M$ is some very large integer.
Then calculate $w_\ell\equiv E_{\theta|\mathbf{Y}} \left[p(y^{(\mathrm{new})}|x^{(\mathrm{new})}_\ell,\theta) \right]$ and resample with replacement from your set of $M$ values, with sampling probability proportional to $w_\ell$. I do not know how large your resampled set should be, but I think it should be smaller than $M$.
Your resampled set can be taken as a draw from the posterior predictive distribution of interest.
EDIT: Or do a Metropolis-Hasting algorithm to do your sampling. Or others. Xi'an (who commented on your question) is an expert on sampling algorithms. | Inverse Predictive Posterior | Ok, I've edited my response taking into account feedback from the OP. Below is a DAG that captures the assumptions provided. So for example, $x^{(\mathrm{new})}$ need not be equal in distribution to | Inverse Predictive Posterior
Ok, I've edited my response taking into account feedback from the OP. Below is a DAG that captures the assumptions provided. So for example, $x^{(\mathrm{new})}$ need not be equal in distribution to $x$ as required, and $y^{(\mathrm{new})}$ is conditionally independent of the training data $\mathbf{Y}$ given $\theta$
We require draws from the distribution $p(x^{(\mathrm{new})}|y^{(\mathrm{new})}, \mathbf{Y})$. We have that
\begin{align}
p(x^{(\mathrm{new})}|y^{(\mathrm{new})}, \mathbf{Y}) \propto p(y^{(\mathrm{new})}|x^{(\mathrm{new})}, \mathbf{Y}) p(x^{(\mathrm{new})}|\mathbf{Y})
\end{align}
where we've dropped terms that do not depend on $x^{(\mathrm{new})}$. Focusing on one term at a time:
\begin{align}
p(y^{(\mathrm{new})}|x^{(\mathrm{new})}, \mathbf{Y}) &= \int_\theta p(y^{(\mathrm{new})}|x^{(\mathrm{new})}, \mathbf{Y}, \theta) p(\theta|x^{(\mathrm{new})}, \mathbf{Y})d\theta\\
&= \int_\theta p(y^{(\mathrm{new})}|x^{(\mathrm{new})},\theta) p(\theta|\mathbf{Y})d\theta\\
&= E_{\theta|\mathbf{Y}} \left[p(y^{(\mathrm{new})}|x^{(\mathrm{new})},\theta) \right]
\end{align}
The implied conditional independences from the DAG justify dropping the terms in the second line. Next
\begin{align}
p(x^{(\mathrm{new})}|\mathbf{Y}) \propto p(x^{(\mathrm{new})},\mathbf{Y})\propto p(x^{(\mathrm{new})})
\end{align}
again using the DAG as justification that $x^{(\mathrm{new})}$ and $\mathbf{Y}$ are independent.
Thus,
\begin{align}
p(x^{(\mathrm{new})}|y^{(\mathrm{new})}, \mathbf{Y}) \propto p(x^{(\mathrm{new})}) E_{\theta|\mathbf{Y}} \left[p(y^{(\mathrm{new})}|x^{(\mathrm{new})},\theta) \right]
\end{align}
Then use an importance sampling approach to draw from this distribution. Specifically, first create $M$ draws from the prior distribution for $x$, as in $x^{(\mathrm{new})}_\ell \sim p(x^{(\mathrm{new})})$, $\ell=1,\ldots,M$, where $M$ is some very large integer.
Then calculate $w_\ell\equiv E_{\theta|\mathbf{Y}} \left[p(y^{(\mathrm{new})}|x^{(\mathrm{new})}_\ell,\theta) \right]$ and resample with replacement from your set of $M$ values, with sampling probability proportional to $w_\ell$. I do not know how large your resampled set should be, but I think it should be smaller than $M$.
Your resampled set can be taken as a draw from the posterior predictive distribution of interest.
EDIT: Or do a Metropolis-Hasting algorithm to do your sampling. Or others. Xi'an (who commented on your question) is an expert on sampling algorithms. | Inverse Predictive Posterior
Ok, I've edited my response taking into account feedback from the OP. Below is a DAG that captures the assumptions provided. So for example, $x^{(\mathrm{new})}$ need not be equal in distribution to |
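A compact R sketch of the sampling/importance-resampling recipe described above; the prior for $x^{(\mathrm{new})}$, the fitted model, and the posterior draws of $\theta$ are all hypothetical placeholders:
set.seed(1)
M           <- 20000
y.new       <- 2.0
sigma       <- 0.3
theta.draws <- rnorm(500, 1, 0.1)              # stand-in for draws of theta | Y
f.theta     <- function(x, theta) theta * x^2  # stand-in for the fitted model
x.prior     <- runif(M, 0, 3)                  # x_l drawn from p(x_new)

# w_l = E_{theta|Y}[ p(y_new | x_l, theta) ], approximated by averaging over the theta draws
w <- sapply(x.prior, function(x) mean(dnorm(y.new, f.theta(x, theta.draws), sigma)))

x.post <- sample(x.prior, size = 5000, replace = TRUE, prob = w)  # resample, with a set smaller than M
hist(x.post)  # approximate draws from p(x_new | y_new, Y); here they pile up near sqrt(2)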
52,488 | Inverse Predictive Posterior | I'm going to take a Bayesian approach to this.
As far as I can tell, the existence of the training set is irrelevant for this problem -- it doesn't matter how we obtained the model, we can just take it as given and fixed.
So, the actual inference problem is to obtain posterior distribution for $\vec{x}' = [x_1', x_2' \ldots x_N']$ corresponding to an ensemble $\vec{y}' = [y_1', y_2' \ldots y_N']$ (these are the $y^{(new)}$).
Apply Bayes' theorem to get the conditional distribution
$$
P(\vec{x}' \vert \vec{y}'; \theta) = \frac{P(\vec{y}' \vert \vec{x}'; \theta)\,P(\vec{x}'; \theta) }{ \int P(\vec{y}' \vert \vec{x}'; \theta)\, P(\vec{x}'; \theta) \, d\vec{x}' }
$$
$$
P(\vec{x}' \vert \vec{y}'; \theta) = \frac{P(\vec{x}'; \theta) \prod_i P(y_i' \vert x_i'; \theta) }{ \int P(\vec{y}' \vert \vec{x}'; \theta)\, P(\vec{x}'; \theta) \, d\vec{x}' }
$$
Now comes the modeling part. Given the statement "x is not random, but rather fixed by experimental design", we're going to have to make some kind(s) of assumptions about the process that generated the $\vec{y}'$.
The place I'd start is just to assume that the $x_i$ are independently drawn from some a priori distribution $P(\vec{x}') = \prod_i P(x_i')$, suitably specified by domain knowledge or by using the relevant maximum entropy prior. Then the probability distribution fully factorizes, and one can compute the posterior distribution for each factor corresponding to a particular $x_i'$.
If there are domain reasons to assume that the sequence of $x_i$ are correlated with one another, then you'd need to incorporate that into the model. Note that even in this case, if desired, one can compute $P(x_i' \vert \vec{y}'; \theta)$ by marginalizing out all of the other $x'$ variables.
From here, the main problem is practical: the fact that the factors $P(y_i' \vert x_i'; \theta) = N(y_i' ; f_\theta (x_i') , \sigma^2)$ are not simple functions of $x_i'$, due to the non-linear $f_\theta$, means that you'll probably need to resort to some sort of numerical approach (Monte Carlo maybe), or approximation (saddlepoint method maybe) to construct a representation of the posterior. | Inverse Predictive Posterior | I'm going to take a Bayesian approach to this.
As far as I can tell, the existence of the training set is irrelevant for this problem -- it doesn't matter how we obtained the model, we can just take i | Inverse Predictive Posterior
I'm going to take a Bayesian approach to this.
As far as I can tell, the existence of the training set is irrelevant for this problem -- it doesn't matter how we obtained the model, we can just take it as given and fixed.
So, the actual inference problem is to obtain posterior distribution for $\vec{x}' = [x_1', x_2' \ldots x_N']$ corresponding to an ensemble $\vec{y}' = [y_1', y_2' \ldots y_N']$ (these are the $y^{(new)}$).
Apply Bayes' theorem to get the conditional distribution
$$
P(\vec{x}' \vert \vec{y}'; \theta) = \frac{P(\vec{y}' \vert \vec{x}'; \theta)\,P(\vec{x}'; \theta) }{ \int P(\vec{y}' \vert \vec{x}'; \theta)\, P(\vec{x}'; \theta) \, d\vec{x}' }
$$
$$
P(\vec{x}' \vert \vec{y}'; \theta) = \frac{P(\vec{x}'; \theta) \prod_i P(y_i' \vert x_i'; \theta) }{ \int P(\vec{y}' \vert \vec{x}'; \theta)\, P(\vec{x}'; \theta) \, d\vec{x}' }
$$
Now comes the modeling part. Given the statement "x is not random, but rather fixed by experimental design", we're going to have to make some kind(s) of assumptions about the process that generated the $\vec{y}'$.
The place I'd start is just to assume that the $x_i$ are independently drawn from some a priori distribution $P(\vec{x}') = \prod_i P(x_i')$, suitably specified by domain knowledge or by using the relevant maximum entropy prior. Then the probability distribution fully factorizes, and one can compute the posterior distribution for each factor corresponding to a particular $x_i'$.
If there are domain reasons to assume that the sequence of $x_i$ are correlated with one another, then you'd need to incorporate that into the model. Note that even in this case, if desired, one can compute $P(x_i' \vert \vec{y}'; \theta)$ by marginalizing out all of the other $x'$ variables.
From here, the main problem is practical: the fact that the factors $P(y_i' \vert x_i'; \theta) = N(y_i' ; f_\theta (x_i') , \sigma^2)$ are not simple functions of $x_i'$, due to the non-linear $f_\theta$, means that you'll probably need to resort to some sort of numerical approach (Monte Carlo maybe), or approximation (saddlepoint method maybe) to construct a representation of the posterior. | Inverse Predictive Posterior
I'm going to take a Bayesian approach to this.
As far as I can tell, the existence of the training set is irrelevant for this problem -- it doesn't matter how we obtained the model, we can just take i |
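Because the posterior factorises over the $x_i'$ under the independence assumption above, each factor can be handled separately, for example on a grid; a minimal R sketch for a single $x'$ (the prior, $f_\theta$ and noise level are again hypothetical choices):
f.theta <- function(x) sin(2 * x)   # stand-in for the fitted f_theta
sigma   <- 0.1
y.new   <- 0.5

grid      <- seq(-3, 3, length.out = 2001)
log.prior <- dnorm(grid, 0, 2, log = TRUE)                   # a priori P(x')
log.lik   <- dnorm(y.new, f.theta(grid), sigma, log = TRUE)  # N(y'; f_theta(x'), sigma^2)
post      <- exp(log.prior + log.lik)
post      <- post / (sum(post) * (grid[2] - grid[1]))        # normalise numerically
plot(grid, post, type = "l", xlab = "x'", ylab = "posterior density")
# several modes appear: different x' values are equally compatible with the same y'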
52,489 | Inverse Predictive Posterior | This is an observation rather than an answer, and if $f_{\theta}$ has some heinous form and/or your sample size is small may not have any practical relevance for your particular problem.
You know the form of $f_{\theta}$, and that must help! In particular all plausible potential values of $X_{new}|y_{new}$ must be consistent with what you know about the form of $f_{\theta}$.
Now if $f_{\theta}$ belongs to a family of well-behaved functions then that might tell you a great deal about what are the plausible values of $x_{new}$ once you have observed $y_{new}$. For example, if $f_{\theta}$ is continuous in $x$ or better yet (in order of increasing niceness) Lipschitz continuous, or differentiable, or monotonic, or convex, or linear.
In any case it certainly makes sense to look at observed $(x,y)$ pairs in the data, with similar $y$-values to $y_{new}$. As a very simple toy example, suppose that for all observed $y$-values in the data within an interval $I:= y_{new} \pm \epsilon$ it is true that the corresponding observed $x$-values are tightly clustered in a small region $A$ (e.g. a ball or an interval) of the domain of $X$. Clearly in this situation if $f_{\theta}$ is continuous in $x$ we should reason that $P(x_{new}\in A|y_{new}\in I) \gg P(x_{new}\notin A|y_{new}\in I)$. | Inverse Predictive Posterior | This is an observation rather than an answer, and if $f_{\theta}$ has some heinous form and/or your sample size is small may not have any practical relevance for your particular problem.
You know the | Inverse Predictive Posterior
This is an observation rather than an answer, and if $f_{\theta}$ has some heinous form and/or your sample size is small may not have any practical relevance for your particular problem.
You know the form of $f_{\theta}$, and that must help! In particular all plausible potential values of $X_{new}|y_{new}$ must be consistent with what you know about the form of $f_{\theta}$.
Now if $f_{\theta}$ belongs to a family of well-behaved functions then that might tell you a great deal about what are the plausible values of $x_{new}$ once you have observed $y_{new}$. For example, if $f_{\theta}$ is continuous in $x$ or better yet (in order of increasing niceness) Lipschitz continuous, or differentiable, or monotonic, or convex, or linear.
In any case it certainly makes sense to look at observed $(x,y)$ pairs in the data, with similar $y$-values to $y_{new}$. As a very simple toy example, suppose that for all observed $y$-values in the data within an interval $I:= y_{new} \pm \epsilon$ it is true that the corresponding observed $x$-values are tightly clustered in a small region $A$ (e.g. a ball or an interval) of the domain of $X$. Clearly in this situation if $f_{\theta}$ is continuous in $x$ we should reason that $P(x_{new}\in A|y_{new}\in I) \gg P(x_{new}\notin A|y_{new}\in I)$. | Inverse Predictive Posterior
This is an observation rather than an answer, and if $f_{\theta}$ has some heinous form and/or your sample size is small may not have any practical relevance for your particular problem.
You know the |
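The toy reasoning above is easy to operationalise: collect the training $x$'s whose observed $y$'s fall within $y_{new} \pm \epsilon$ and inspect how tightly they cluster (an R sketch with simulated data standing in for the real training set):
set.seed(1)
f.theta <- function(x) x^3 - x            # stand-in for the known-form fitted function
x <- runif(500, -2, 2)
y <- f.theta(x) + rnorm(500, 0, 0.1)

y.new  <- 1.5
eps    <- 0.1
x.near <- x[abs(y - y.new) < eps]         # x-values of points with y close to y_new
range(x.near)                             # a candidate region A for x_new
# a tight range supports P(x_new in A | y_new in I) being much larger than its complement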
52,490 | Which approaches exist for optimization in machine learning? | Towards Data Science isn't a reliable website, and the text you've quoted is, unfortunately, nonsense.
For any Optimization problem with respect to Machine Learning, there can be either a numerical approach or an analytical approach. The numerical problems are Deterministic, meaning that they have a closed form solution which doesn’t change. [...] These closed form solutions are solvable analytically. But these are not optimization problems.
What they meant to say, I hope, is that "analytical problems are Deterministic [...]", etc.
I won't explain the difference between analytic and numeric approaches here, because there are lots of good sources, but going by this paragraph I'm going to say the post you read isn't one of them.
EDIT: OK, I'll explain a bit
Part of the problem is that there are a lot of partially overlapping terms. Very roughly speaking, you have:
Models where you can directly calculate the parameters: AKA closed-form solutions, analytical or analytic solutions, or sometimes algebraic solutions.
Models where you have to use an iterative algorithm to fit the parameters. All such models are numerical, but
They might be deterministic (no randomness), like batch gradient descent with fixed starting points, or stochastic (random), like stochastic gradient descent.
They might always reach the best value (convex optimisation), or might have a risk of getting stuck at local optima (non-convex optimisation)
There are plenty of other ways to slice this up, but these should be plenty to get started! | Which approaches exist for optimization in machine learning? | Towards Data Science isn't a reliable website, and the text you've quoted is, unfortunately, nonsense.
For any Optimization problem with respect to Machine Learning, there can be either a numerical a | Which approaches exist for optimization in machine learning?
Towards Data Science isn't a reliable website, and the text you've quoted is, unfortunately, nonsense.
For any Optimization problem with respect to Machine Learning, there can be either a numerical approach or an analytical approach. The numerical problems are Deterministic, meaning that they have a closed form solution which doesn’t change. [...] These closed form solutions are solvable analytically. But these are not optimization problems.
What they meant to say, I hope, is that "analytical problems are Deterministic [...]", etc.
I won't explain the difference between analytic and numeric approaches here, because there are lots of good sources, but going by this paragraph I'm going to say the post you read isn't one of them.
EDIT: OK, I'll explain a bit
Part of the problem is that there are a lot of partially overlapping terms. Very roughly speaking, you have:
Models where you can directly calculate the parameters: AKA closed-form solutions, analytical or analytic solutions, or sometimes algebraic solutions.
Models where you have to use an iterative algorithm to fit the parameters. All such models are numerical, but
They might be deterministic (no randomness), like batch gradient descent with fixed starting points, or stochastic (random), like stochastic gradient descent.
They might always reach the best value (convex optimisation), or might have a risk of getting stuck at local optima (non-convex optimisation)
There are plenty of other ways to slice this up, but these should be plenty to get started! | Which approaches exist for optimization in machine learning?
Towards Data Science isn't a reliable website, and the text you've quoted is, unfortunately, nonsense.
For any Optimization problem with respect to Machine Learning, there can be either a numerical a |
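To make the closed-form versus iterative distinction concrete, here is a small R sketch fitting the same simple linear regression both ways (illustrative only):
set.seed(1)
x <- rnorm(100)
y <- 2 + 3 * x + rnorm(100)
X <- cbind(1, x)

# analytical / closed-form solution: beta = (X'X)^{-1} X'y
beta.closed <- solve(t(X) %*% X, t(X) %*% y)

# numerical solution: deterministic batch gradient descent on the squared error
beta <- c(0, 0)
for (i in 1:5000) {
  grad <- -2 * t(X) %*% (y - X %*% beta) / length(y)
  beta <- beta - 0.1 * grad
}
cbind(closed.form = as.vector(beta.closed), gradient.descent = as.vector(beta))
# both land on (roughly) the same intercept and slope near 2 and 3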
52,491 | $H_0$ vs $H_1$ in diagnostic testing | The somewhat unsettling truth is that misspecification testing is not suitable for "persuading a skeptic that the model is valid". Generally, as you obviously understand, not rejecting the $H_0$ does not imply that the $H_0$ is true, and this is the case also in misspecification testing. What the test does is something weaker, namely it just tells you that certain observable problems with the model assumptions have not occurred. Still the misspecification test will not rule out that the data has been generated in a way that violate the model assumptions and may violate them badly. For example, an evil dependence structure could be at work that enforces the data to show a certain seemingly innocent pattern that you see even though this may be contrived enough to not look suspicious to your favourite test for independence (I'm not claiming that this is realistic, I'm just claiming that a misspecification test cannot rule out that this is technically possible).
Misspecification testing can to a certain extent reassure you, but it cannot secure model assumptions to be true.
Note that some would argue that the term "valid" is weaker than the term "true", and A. Spanos (2018) argues that if you do misspecification testing in the right way (i.e., testing all assumptions in a reasonable order, meaning that the misspecification test of one assumption is not sabotaged by the failure of another assumption), ultimately indeed you can be sure that the model is "valid" for the data, even though this doesn't mean it's "true". The way he does this is by defining the term "valid" basically as passing all those tests, because then, according to him, we know that the data looks like a typical realisation from the model. I think that this is misleading though, because as I have argued above, this does not rule out that in fact model assumptions are violated in harmful ways.
A message from this is that misspecification testing is never a substitute for thinking about the subject matter and the data generating process in order to know whether there are problems with the assumptions that you couldn't see from the data alone.
The following are additions that were made taking into account comments and discussion:
In a comment, you already made reference to "severe testing" (Mayo and Spanos). Note that in their work you'll never find severity calculations that refer to misspecification tests, and for good reasons. Models can be violated in far too many and too complex ways in order to rule out all violations (or even just all relevant ones), and be it with a certain error probability.
There's TOST as in the response by Dave. This can work if we focus on one particular assumption (for example an autocorrelation parameter $\alpha$ to be zero) and take everything else in the model specification for granted. And even then we can only reject $|\alpha|>c$ for some $c>0$ (how small $c$ can be will depend on the sample size); we cannot reject $\alpha\neq 0$.
The original question was "how to choose the $H_0$", which I haven't really addressed up to now; instead of answering it, I will argue that we can't do much better than what is usually done. Remark 2 above is about an $H_0$ that isn't exactly the complement of the model assumption, rather rejecting it would secure (with the usual error probability) that the true $\alpha$ is close to zero, i.e., the model assumption. This is really the best we can hope for, and also it is not an accident that even this can only be achieved taking a host of other assumptions for granted. The thing is that we can never rule out too rich a class of distributions, because such a class will contain distributions that are so close (in case $\alpha\neq 0$) to the model assumption that they cannot be distinguished by any finite amount of data, or even distributions that are in terms of interpretation very different (like the "evil dependence structure" mentioned above), but can emulate perfectly whatever we observe, and can therefore not be rejected from the data. Famous early results in this vein are in Bahadur and Savage (1956) and Donoho (1988). Particularly there is no way to make sure that the underlying process has a density, let alone being normal or anything specific. (There is less work about evil dependence structures as far as I'm aware, because detecting them is outright hopeless.)
Furthermore, the problem with TOST is that I'd suspect that this has a higher probability to reject a true model than the standard misspecification testing approach, and this is bad, because not only it would be a (type II) error, but also it will worsen the problem that running model-based analysis conditionally on the "correct" outcome of a misspecification test can be biased, as the theory behind standard analyses doesn't take MS-testing into account, see the Shamsudheen and Hennig arxiv paper for this issue and some more literature.
References:
Bahadur, R. and L. Savage (1956). The nonexistence of certain statistical procedures in nonparametric problems. Annals of Mathematical Statistics 27, 1115–1122.
Donoho, D. (1988). One-sided inference about functionals of a density. Annals of Statistics 16, 1390–1420.
Spanos, A. (2018). Mis-specification testing in retrospect. Journal of Economic Surveys 32, 541–577.
There's also this (with which I agree more):
M. Iqbal Shamsudheen, Christian Hennig (2020) Should we test the model assumptions before running a model-based test? https://arxiv.org/abs/1908.02218
52,492 | $H_0$ vs $H_1$ in diagnostic testing | I think this is exactly what the two one-sided tests (TOST) procedure does. TOST concedes that there might be some small effect but shows, with some level of confidence, that the effect is below the threshold at which we would care. Perhaps there is a bit of autocorrelation, but an autocorrelation of $0.01$ might be effectively zero. If you truly want to show the value to be zero, not merely close to zero, with some confidence (credibility...), I cannot see a way to do it without going Bayesian and using a prior with $P(0)>0$. If you want to be frequentist, then I think the best you can do is to bound the value in a range.
(I do not have enough experience with Bayesian methods to have much of an opinion on using a prior that puts $P(\text{what we want})>0$, but that sure sounds like rigging the test.)
$$\text{TOST}\\
H_0: \vert\theta\vert\ge d\\
H_a: \vert\theta\vert<d$$
In this way, we flip the null and alternative hypotheses to show that the value of interest, $\theta$, is less than our tolerance for difference from zero, $d$.
There are equivalences between TOST and power calculations, so I think this satisfies your requirement for controlling power that you mentioned in your comment to Lewian.
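To make this concrete, here is a rough R sketch of how one might run such an equivalence check on the lag-1 autocorrelation of a residual series (an illustration only: resid_series and the bound d = 0.1 are hypothetical, and the $1/\sqrt{n}$ standard error is just an approximation):
tost_acf1 <- function(x, d = 0.1, alpha = 0.05) {
  n  <- length(x)
  r1 <- acf(x, lag.max = 1, plot = FALSE)$acf[2]       # lag-1 sample autocorrelation
  se <- 1 / sqrt(n)                                    # approximate null standard error
  p_lower <- pnorm((r1 + d) / se, lower.tail = FALSE)  # one-sided test of H0: rho <= -d
  p_upper <- pnorm((r1 - d) / se)                      # one-sided test of H0: rho >= +d
  p_tost  <- max(p_lower, p_upper)
  list(estimate = r1, p_tost = p_tost, equivalent = p_tost < alpha)
}
# Hypothetical usage: tost_acf1(resid_series, d = 0.1)
Whether $d = 0.1$ is an acceptable tolerance is, of course, a subject-matter decision rather than a statistical one.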
52,493 | $H_0$ vs $H_1$ in diagnostic testing | This is a great question. If we were to set autocorrelation as the null hypothesis we would have to be very specific about the type and amount. If we reject this hypothesis we have not brought evidence against all types or amounts of autocorrelation, just the one we tested. For this reason we set no autocorrelation as the null hypothesis, with the general alternative being some form and amount of autocorrelation. This is in agreement with Henry's comment. While I see a similarity between a diagnostic test and a TOST, these are not the same. In a TOST we are hopeful to reject the null hypothesis in favor of the alternative. In a diagnostic test we are hopeful for a failure to reject the null hypothesis.
We typically think of a small p-value as evidence against the null, reducing the null to the absurd, showing it is implausible. By this same logic a large p-value could be seen as evidence in favor of the null (weak evidence against the null), showing it is not absurd, it is plausible. Of course no hypothesis is proven false with a small p-value, nor is it proven true with a large one. All we can do is provide the weight of the evidence.
There is no right or wrong for which hypothesis is considered the null and which the alternative. If you are using a Neyman-Pearson framework, it is a matter of what you want as the default decision. For instance, when investigating a treatment effect we often think of "no effect" as the null hypothesis. However, in clinical development one might use a clinically meaningful effect as the null hypothesis (default decision), and only if there is sufficient evidence against this hypothesis would it be decided that the drug is not efficacious. Under a Fisherian framework one would test all possible hypotheses, assessing the evidence against no treatment effect as well as the evidence against a clinically meaningful effect.
52,494 | $H_0$ vs $H_1$ in diagnostic testing | I don't think your premise is accurate regarding model testing. All the diagnostic tests for models that I am familiar with stipulate the model assumption as the null hypothesis and test for a departure from it that would falsify the assumption. Even if we are talking to someone who is a skeptic of the model assumptions, the usual approach would be to show them that, when we subject the model to diagnostic tests, there is no evidence of any breach of the model assumptions, via tests in which those assumptions are taken as the null hypothesis.
The problem with setting the null hypothesis as a violation of the model is that this is not a simple hypothesis --- it is a complex composite hypothesis that must stipulate the type and degree of the violation in assumptions (which would then beg the question of sensitivity analysis for the stipulated degree).
So, I am not convinced that there is any incongruity in the first place to resolve.
52,495 | $H_0$ vs $H_1$ in diagnostic testing | Hello Mr. Hardy,
I did read the page, but I can't comment, so I am posting this instead.
From a usage standpoint, having autocorrelation in the residuals is a good thing for me: it gives me some "assurance" that information about the next error term can be "known" from the previous one.
I mean, many of these tests were designed to help solve real issues, and they are used widely.
I can't see any problem; rather than just predicting whether the model under $H_0$ or $H_1$ will be bad or good, I personally would still test both.
But different data call for different approaches. I feel you are trying to break things that are good for some problems but bad for others. Saying "every $H_0$ model is good and every $H_1$ model is bad" is like saying "all fruits are red" while clearly an orange is orange.
Guessing which approach is more appropriate for the problem usually saves some initial time (like many statistical tests, for me: they are good in their own ways).
It's always about helping in some way, and I hope this helps.
52,496 | Model misfit with DHARMa - What needs/can be done? | Interesting problem! In addition to what Florian has suggested, here are my thoughts:
The mixed effects models you fitted may not be the best for teasing out the effect of person-level predictors (just recently I came upon a reference discussing the challenge with interpreting such effects - I'll see if I can find it and add it to this post);
If you stick with mixed effects models (as opposed to, say, GEE style models), it may be worth trying to fit them using the bam() function from the mgcv package, which is designed to accommodate large datasets. This may cut back on computational time. See https://stat.ethz.ch/pipermail/r-help/2016-April/438227.html for how to specify random effects in a bam() call; a rough sketch of such a call is given after these points.
When you are trying various refinements of your model, can you fit them on a smaller random sample of subjects just to get a sense of how reasonable they would be? Then you can fit them for all available subjects.
Within the context of using mixed effects models, I think you may need to be more careful with how you model the effect of time. It seems that currently you model the effect of time as being linear (?). If you plot the residuals for each of your models against month (1 through 12) and year (e.g., 2018, 2019, etc.), you will have an opportunity to see whether there is any systematic temporal structure in those residuals that was left unaccounted for by the model. If necessary, you could try replacing the time predictor in your model with a month predictor and a year predictor. Then perhaps include something like month + year or even month + year + month:year with month coded as a factor and year as numeric; or go via the GAM route and include smooth additive effects of month (coded as numeric in R) and year (coded as numeric) or a smooth interaction between month and year.
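Purely as an illustration of the kind of call I have in mind (the variable names are hypothetical, not taken from your post, and the Poisson family is only a placeholder for whichever family you settle on), a bam() fit with a person-level random intercept and a cyclic smooth of month could look something like this:
library(mgcv)
fit <- bam(active_days ~ limit_group + age +
             s(month, bs = "cc", k = 6) +  # cyclic smooth over calendar month
             factor(year) +                # separate level for each year
             s(id, bs = "re"),             # person-level random intercept (id as a factor)
           family = poisson, data = dat,
           discrete = TRUE)                # discretises covariates to speed up large fits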
52,497 | Model misfit with DHARMa - What needs/can be done? | I think what's pretty clear is that the chosen distributions don't fit the data very well. I don't find this particularly surprising. For example, active days gambling per month is not really a count variable, as the month has a strict maximum of 30/31 days, so if you count how many of those days someone gambles, this is more like a k/n binomial. The other two cases are less obviously not count data, but if you look at the data-generating process, there is nothing that really resembles a classical Poisson or gamma process, respectively. In particular:
For the second model, I would also consider fitting this as a k/n binomial, where you model the probability that someone hits the loss limit, given that they play (k = losses, n = plays); a sketch of such a model is given after these points.
For the gamma, I would simply try other models or transformations of the response to fit the data better.
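As a sketch of what such a k/n model could look like (an illustration only: the column names are hypothetical and lme4 is just one option, not something from the original post; the same structure would apply to days gambled out of days in the month):
library(lme4)
# k = number of days the loss limit was hit, n = number of days played
fit_kn <- glmer(cbind(limit_hits, days_played - limit_hits) ~
                  limit_setting + months_since_start + (1 | person_id),
                family = binomial, data = dat)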
Regarding your question of how important these misfits are: as a rule of thumb, the most important thing for p-values is to get the dispersion roughly correct. As you fit variable-dispersion models, this is out of the way. Beyond this, the impact of residual problems is typically smaller, but what I see here seems large enough to have a meaningful impact on p-values or estimates, so I would try to change the models to get a better fit.
52,498 | Can we always pull a joint posterior apart? | No, it is not. For that to be true, $A$ and $B$ would have to be conditionally independent given $\theta$.
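To spell out why (this is a generic identity, not tied to any particular model): by Bayes' theorem,
$$p(\theta \mid A, B) \propto p(A, B \mid \theta)\, p(\theta),$$
and the right-hand side factorises as $p(A \mid \theta)\, p(B \mid \theta)\, p(\theta)$ only when $p(A, B \mid \theta) = p(A \mid \theta)\, p(B \mid \theta)$, which is exactly the statement that $A$ and $B$ are conditionally independent given $\theta$.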
52,499 | Why is the variance of a binomial distribution not $n^2p(1-p)$? | The problem with your solution is at this step: $Var(X) = Var(nB)$.
Because $X \neq nB$
I mean yes, $X = B_1 + B_2 + ... + B_n$ because all the $B_i$'s are $0$ or $1$, and $X$ is the number of $1$'s in $n$ trials. But you can't call it $nB$ because the $B_i$'s don't all have the same value: some of them are $0$ and some are $1$.
Your idea is only true if $B_1 = B_2 = ... = B_n = B$.
But if you really want to express the binomial distribution in terms of Bernoulli trials, you can do it like this.
$$X = B_1 + B_2 + ... +B_n$$
$$Var(X) = Var(\sum_{i=1}^n{B_i})$$
You can take the summation outside the variance because, by the definition of the binomial distribution, each trial is independent of the other trials.
$$Var(X) = \sum_{i=1}^n{Var(B_i)}$$
$$Var(X) = \sum_{i=1}^n{pq}$$
Now you can write this as $npq$ (with $q = 1-p$) because all the $pq$'s, regardless of $i$, have the same value $p(1-p)$.
$$Var(X) = npq$$
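If you want to convince yourself numerically, a quick R simulation (illustrative only, with arbitrary values of $n$ and $p$) shows that the empirical variance matches $np(1-p)$ rather than $n^2p(1-p)$:
set.seed(1)
n <- 10; p <- 0.3
x <- rbinom(1e5, size = n, prob = p)  # 100,000 binomial draws
var(x)             # close to 2.1
n * p * (1 - p)    # 2.1
n^2 * p * (1 - p)  # 21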
52,500 | Why is the variance of a binomial distribution not $n^2p(1-p)$? | It might be worth examining the binomial as a sum of $n$ i.i.d. Bernoulli trials. Let $X_i$ be i.i.d. Bernoulli draws. $Y = \sum_i X_i$ is then a binomial random variable. The variance of this is
$$ \operatorname{Var}(Y) = \operatorname{Var}(\sum_i X_i) = \sum_i \operatorname{Var}(X_i) $$
where I have used the property that the variances add when the covariances are 0 (which is true by assumption). For a given Bernoulli random variable, $\operatorname{Var}(X_i) = p(1-p)$. Since all the $X_i$ are identically distributed,
$$ \operatorname{Var}(Y) = \sum_i p(1-p) = n p (1-p) $$