idx | question | answer
---|---|---
50,301 | How come parents of $X$ always satisfy the backdoor criterion relative to $(X,Y)$? | The latter note is not obvious to me. Consider a DAG that is a simple
chain $$ Z \rightarrow X \rightarrow Y $$ Here, $\text{PA}(X)=Z$. At
the same time, there is no backdoor path between $X$ and $Y$, so $Z$
does not block any. Question: How come $Z$ satisfies the backdoor
criterion then?
$Z$ satisfies the backdoor criterion because no backdoor paths between $X$ and $Y$ remain open in the DAG if we condition on $Z$.
Considering that we are interested in the (total) causal effect of $X$ on $Y$, a control set that contains $Z$ is a good control set. Moreover, even the empty set is a good control set; it too satisfies the backdoor criterion.
If your concern is about the adequacy of the definition you reported above, I suggest:
Given an ordered pair of variables $(X,Y)$ in a directed acyclic graph $G$, a set of variables $Z$ satisfies the backdoor criterion relative to $(X,Y)$ if, conditioning on the control set $Z$, no directed/causal paths are blocked and no spurious/backdoor paths remain open.
50,302 | How come parents of $X$ always satisfy the backdoor criterion relative to $(X,Y)$? | The reason that conditioning on the parents of $X$, irrespective of what the DAG looks like, always satisfies the backdoor criterion relative to $(X,Y)$ is that there is a parent of $X$ on each backdoor path, and parents of $X$ cannot be colliders, by definition of parents of $X$ (which implies an arrow from the parent to $X$). Hence conditioning on the set of parents of $X$ will block all the backdoor paths, not open any spurious paths, and leave all directed paths untouched.
With regards to your specific question on this DAG: $Z \rightarrow X \rightarrow Y$: $Z$, the parent of $X$, does satisfy the backdoor criterion, albeit trivially. There is no backdoor path that remains open once we condition on $Z$; all directed paths from $X$ to $Y$ remain unperturbed; no new spurious paths are created. But, of course, the empty set also satisfies the backdoor criterion in this case.
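As a minimal sketch (not part of the original answer), the same check can be run with the dagitty R package; the second DAG below is a hypothetical confounded one, added for contrast.
library(dagitty)
g1 <- dagitty("dag { Z -> X -> Y }")               # the chain from the question
adjustmentSets(g1, exposure = "X", outcome = "Y")  # {} : the empty set suffices; {Z} is also valid, just not minimal
g2 <- dagitty("dag { U -> X ; U -> Y ; X -> Y }")  # a DAG with a genuine backdoor path
adjustmentSets(g2, exposure = "X", outcome = "Y")  # { U } : the parent of X blocks it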
However, there are at least 3 reasons why, when interested in the causal effect of $X$ on $Y$, we would prefer to reduce the DAG you brought up, $Z \rightarrow X \rightarrow Y$, to $X \rightarrow Y$ instead.
$Y$ and $Z$ are independent conditional on $X$: $P(Y|X,Z)=P(Y|X)$. We gain nothing by conditioning on $Z$, once we've already conditioned on $X$. Put differently, $Z$ here is neutral in terms of bias reduction.
Controlling for $Z$ will reduce the variation in $X$ and hence will reduce the precision of the estimate of the average causal effect.
To the extent that there are unobserved common causes of $X$ and $Y$, controlling for $Z$ will amplify the bias (due to the association via the unobserved common cause $U$).
See here for more on this.
50,303 | How is the total loss handled in mini-batch gradient descent, given that the loss function is calculated and minimized for mini-batches? | You don't accumulate those losses unless you are reporting training loss, which is usually not the part of training where mini-batching matters. The mini-batch is assumed to approximate the full-batch loss function, and we update the weights and biases under that assumption, in the hope that the full batch in turn approximates the average loss over the population.
In the extreme case, there is online learning, where only one training sample is thrown into the neural net and the weights are updated according to that sample only. When data is extremely abundant, we sometimes don't even use a single sample twice, so again no aggregation of losses.
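A minimal base-R sketch (not part of the original answer) of mini-batch gradient descent for a simple linear model; each update uses only the current batch's average loss gradient, and no losses are accumulated across batches.
set.seed(1)
n <- 1000
x <- rnorm(n)
y <- 2 * x + 1 + rnorm(n, sd = 0.5)
w <- 0; b <- 0; lr <- 0.05; batch_size <- 32
for (epoch in 1:20) {
  idx <- sample(n)                                   # reshuffle each epoch
  for (start in seq(1, n, by = batch_size)) {
    batch <- idx[start:min(start + batch_size - 1, n)]
    err   <- w * x[batch] + b - y[batch]
    w <- w - lr * mean(2 * err * x[batch])           # gradient of the batch *mean* squared error
    b <- b - lr * mean(2 * err)
  }
}
c(w = w, b = b)                                      # close to the true values 2 and 1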
50,304 | Derivation of maximum likelihood for a Gaussian mixture model | To avoid any confusion, the summation index and the index of the $\mu$ that you differentiate with respect to should be different. From the beginning, assume the likelihood is written with index $j$ and you want to differentiate it with respect to $\mu_k$:
$$\frac{\partial \sum_{j=1}^K \pi_j N(x_n|\mu_j,\Sigma_j)}{\partial \mu_k}=\pi_k\,\frac{\partial N(x_n|\mu_k,\Sigma_k)}{\partial \mu_k}$$
which explains why the answer doesn't have a summation in the numerator.
You'll have a minus sign from differentiating $(x_n-\mu_k)$ with respect to $\mu_k$ (which gives a factor of $-1$), and another minus sign from the $\exp(-(\ldots))$ expression in the normal PDF. They cancel each other out.
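As a worked version of that remark (not part of the original answer), with the multivariate normal density the two minus signs cancel as follows:
$$\frac{\partial N(x_n|\mu_k,\Sigma_k)}{\partial \mu_k} = N(x_n|\mu_k,\Sigma_k)\,\frac{\partial}{\partial \mu_k}\left[-\tfrac{1}{2}(x_n-\mu_k)^\top\Sigma_k^{-1}(x_n-\mu_k)\right] = N(x_n|\mu_k,\Sigma_k)\,\Sigma_k^{-1}(x_n-\mu_k)$$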
50,305 | Where are the Wald p-values and where are the LRT ones in the result of mixed models? [closed] | 1. So, when I get the output of a mixed model, in any statistical package, I get the list of coefficients with their p-values. Are they Wald's?
Yes, generally they are. They may be $Z$-statistics/tests (i.e., assuming that the sample is big enough so the standard errors have no uncertainty) or $t$-statistics (allowing for the uncertainty in std err due to finite sample size); this is usually indicated in the column names (and by the appearance of a "df" or "ddf" [(denominator) degrees of freedom] column in the output).
In your second case (results of "ANOVA"), it's hard to know without reading the documentation exactly what tests are being done. It might be either Wald or LRT and might do some sort of finite-size correction or not (see details under #2).
When I do ANOVA on such model output, it performs a joint test on all coefficients belonging to a single effect; thus, ANOVA gives me p-values for the main effects, one by one. In certain packages, like R or SAS, one can choose among: LRT, Kenward-Roger, [Satterthwaite], F test, Chi2 test. Which one is the Wald? Is this the Chi2 test? Is LRT the F test? Is the Kenward-Roger a "small-data adjustment" to the Wald?
This is a little complicated.
Wald tests in general assume the log-likelihood surface is quadratic.
They may ignore the finiteness of the data set, in particular the uncertainty associated with nuisance parameters such as the residual standard deviation; in this case they are "Wald chi-square tests", because the test statistic is $\chi^2$ (or scaled $\chi^2$) distributed.
If they take the finiteness of the data set into account, they are "F tests" ($F$-distributed test statistic)
if the experimental design is balanced and nested, the denominator degrees of freedom (df) for the F-statistic can be computed exactly
if not, then some approximation such as Satterthwaite or Kenward-Roger must be used (so the answer to your question "is the K-R a 'small-data adjustment' to the Wald?" is "yes")
The likelihood ratio test (also "LRT", or also "Chi2", because the test statistic of an LRT is $\chi^2$-distributed: in R, if the output says just "Chi2" and not "Wald Chi2" it's probably an LRT) accounts for the non-quadratic shape of the log-likelihood surface, but not the uncertainty of nuisance parameters due to finite size. Finite-size corrections to the LRT are complicated and rarely used.
Briefly - which of them (Wald, t-test, F test, Chi2 test, LRT) refer to the model coefficients and which of them refer to the main effects? And which can, if any, refer to both?
I'm guessing that by "coefficients" you mean tests of single coefficients (e.g. the slope in a regression model) vs. joint tests of multiple coefficients simultaneously equaling zero (e.g. the effect of a categorical predictor with >2 levels). t- and Z-tests specifically refer to single coefficients. F and Chi2 essentially test sums of squares of scaled coefficients, so can refer to single- or multiple-coefficient tests. Wald and LRT refer to assumptions about the shape of the log-likelihood surface, so are not specific to single- or multiple-coefficient tests.
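A minimal R sketch (not part of the original answer; the data frame dat and the variables y, x, g are hypothetical placeholders) showing where each kind of test appears for an lme4 model:
library(lmerTest)                       # wraps lme4 and adds denominator-df approximations
m <- lmer(y ~ x + (1 | g), data = dat)
summary(m)                              # per-coefficient Wald t-tests (Satterthwaite df)
anova(m, ddf = "Kenward-Roger")         # joint F-tests; Kenward-Roger df (needs the pbkrtest package)
m0 <- update(m, . ~ . - x)              # nested model without x
anova(m0, m)                            # likelihood ratio test; the "Chisq" column is the LRT statistic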
See also: GLMM FAQ on denominator df; the "pvalue" help page for lme4; How can I obtain z-values instead of t-values in linear mixed-effect model (lmer vs glmer)?
Corrections and comments welcome.
50,306 | Range of values of $R^2$ for a two-feature linear model based on the $R^2$s of one-feature linear models? [duplicate] | A simple approach to this problem is to consider it from a geometrical point of view.
Firstly, we immediately know the answer is within the range $[0.1, 1]$; then let's check whether these bounds are tight.
Note that regression is the projection of $y$ onto the column space of the predictors.
If the vectors $x_1$ and $x_2$ are almost perfectly linearly dependent, it's easy to see that the projection of $y$ is almost the same as before, so $R^2$ is almost 0.1.
If $y$ lies in the subspace spanned by the two vectors $x_1$ and $x_2$ (this can indeed happen; you can first picture it in 3-dimensional space, and it holds in general $n$-dimensional space), then we immediately know $R^2$ is 1.
Thus the answer should be $(0.1, 1]$.
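A small simulation sketch (not part of the original answer; it assumes single-predictor $R^2$ values of 0.1, matching the bound used above) illustrating both extremes:
library(MASS)
set.seed(1)
n  <- 100000
x1 <- rnorm(n)
# Extreme 1: x2 almost collinear with x1, so the two-feature R^2 stays near 0.1
y1 <- sqrt(0.1) * x1 + sqrt(0.9) * rnorm(n)
x2 <- x1 + rnorm(n, sd = 1e-3)
summary(lm(y1 ~ x1 + x2))$r.squared          # ~ 0.1
# Extreme 2: y lies exactly in the span of x1 and x2; with cor(x1, x2) = 0.8 and
# y = x1 - x2, each single-predictor R^2 is (1 - 0.8)/2 = 0.1, yet the joint R^2 is 1
X  <- mvrnorm(n, mu = c(0, 0), Sigma = matrix(c(1, 0.8, 0.8, 1), 2))
y2 <- X[, 1] - X[, 2]
summary(lm(y2 ~ X[, 1]))$r.squared           # ~ 0.1
summary(lm(y2 ~ X[, 1] + X[, 2]))$r.squared  # 1 (up to floating point)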
50,307 | Smoothed Moments as Function of Predictor | Continuing from the example data generated in the OP, we can construct a simple GAMLSS model for the mean of the data using a penalised B-spline. This model assumes a Normal distribution. We are only interested in the spline for the mean.
library(gamlss)  # provides gamlss() and the pb() penalised B-spline smoother
m1 <- gamlss(ys~pb(xs))
plot(NULL, xlim=c(0,xlim), ylim = c(0, 4), xlab="x", ylab="y", main="Real and Estimated Mean")
lines(seq, mean, col="red")
lines(xs[order(xs)], fitted(m1)[order(xs)], col="red", lty = 2)
The estimated mean is similar to the real mean as a function of $x$.
We now subtract this estimated mean from the data and square the result. We run the same model again to estimate the variance.
ys2 <- (ys - fitted(m1))^2
m2 <- gamlss(ys2~pb(xs))
plot(NULL, xlim=c(0,xlim), ylim = c(0, 4), xlab="x", ylab="y", main="Real and Estimated Variance")
lines(seq, variance, col="orange")
lines(xs[order(xs)], fitted(m2)[order(xs)], col="orange", lty = 2)
The estimated variance is similar to the real variance as a function of $x$.
We now divide the centred data by the square root of the estimated variance and cube the result. We run the same model again to estimate the skewness.
ys3 <- ((ys - fitted(m1))/sqrt(fitted(m2)))^3
m3 <- gamlss(ys3~pb(xs))
plot(NULL, xlim=c(0,xlim), ylim = c(0, 4), xlab="x", ylab="y", main="Real and Estimated Skewness")
lines(seq, skewness, col="green")
lines(xs[order(xs)], fitted(m3)[order(xs)], col="green", lty = 2)
The estimated skewness is similar to the real skewness as a function of $x$.
Further comments are welcomed as I am unsure whether this procedure:
reliably provides estimates of the moments
is the most accurate/efficient method
is penalised appropriately, as requested by the OP
really needs GAMLSS or can be done in a simpler way
50,308 | Kernel density estimate vs Dirichlet process mixture | After some research and thinking, here would be my own tentative answer to the question I posted; just in case someone else is interested in this question.
Given $n$ data points, KDE uses a mixture of $n$ kernels to approximate the "true" density, while DPM, in finite samples, typically ends up with a smaller mixture even though it theoretically uses an infinite number of mixture components. Moreover, KDE fixes the mixing weights at $1/n$ while DPM allows them to follow the stick-breaking process. Assuming Gaussian kernels, it is easy to see that KDE fixes the means of the kernels at the data points while DPM estimates them from the data. Taken together, one could argue that DPM is more parsimonious (using a smaller mixture) and more flexible (not fixing the mixture weights and kernel parameters) than KDE. The empirical findings in http://mlg.eng.cam.ac.uk/pub/pdf/GoeRas10.pdf compare the performance of KDE and DPM and seem to suggest that DPM performs better.
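A minimal base-R sketch (not part of the original answer) making the "fixed 1/n weights, kernels at the data points" description of KDE concrete:
set.seed(1)
obs <- rnorm(50)
h   <- bw.nrd0(obs)                      # only the bandwidth is left to choose
kde <- function(x) sapply(x, function(x0) mean(dnorm(x0, mean = obs, sd = h)))
grid <- seq(-2, 2, length.out = 200)
# agrees with density() up to its binned FFT approximation
max(abs(kde(grid) - approx(density(obs, bw = h), xout = grid)$y))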
(To Tim's comments: Rob Hyndman has a paper on estimating KDE bandwidth, https://robjhyndman.com/publications/bandwidth-selection-for-multivariate-kernel-density-estimation-using-mcmc/. There are probably more papers on that topic. As for convergence, depending on the algorithm for DPM, the number of kernels as well as their labels change across MCMC iterations, which makes it challenging to check the draws' convergence.)
50,309 | How to compare model coefficients from models with different distribution family and link functions | You can't directly compare the estimated coefficients since the units of the response variable are not the same in both models.
See, a logistic regression will estimate a binomial probability of observing the event you modelled, so a number between 0 and 1 is ultimately estimated. Note also that the estimate is not linearly related to the covariates in the model, so the effect of a covariate depends on its starting value.
Now, in a Negative binomial regression, the outcome is a count, a number between zero and infinity, and again, the covariates are not linearly related to the outcome, due to the link function.
But it doesn't mean you cannot compare both models. It just takes a little more effort. I would plot the partial effects of the covariate of interest (and relevant interactions) on each response variable and build my rationale from there.
Update
My idea is to help you investigate the variables' effects on the different response variables by exposing the models' mechanics. For this we will need an example. In the following code I generate the independent variables x and z. Then I generate the linear predictor (log odds) of a logistic regression for response_binary, and the negative binomial counts with dispersion theta, following this SO answer.
suppressMessages(library(tidyverse))
suppressMessages(library(Hmisc))
suppressMessages(library(glmmTMB))
suppressMessages(library(broom))
suppressMessages(library(MASS))
suppressMessages(library(modelr))
N <- 500
set.seed(1)
df <- tibble(x = runif(n = N), z = runif(n = N) < 0.3)
bin_link <- function(x) 1/(1 + exp(-x))
df <-
df %>%
mutate(logodds = 1.2*x + 0.2*z - 1.2*x*z + rnorm(N, 0, 0.1) ,
mu = 1.2*x + 2*x^2 + z + rnorm(N, 0, 0.1)
) %>%
mutate( prob = bin_link(logodds),
neg_par = exp(mu),
response_binary = rbinom(n=N, size=1, prob=prob),
theta = sample(c(5,8,10, 15), replace = T, size=N),
response_count = rnbinom(n = N, size = theta, mu=neg_par)
)
The resulting distribution of response_count is in the following histogram.
df %>% ggplot(aes(response_count)) + geom_histogram(color='black', fill='skyblue') + labs(title='Count variable distribution')
#> `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
And the scatter plot of x against response_count, coloured by z, gives an idea of what the model should be capable of identifying.
df %>% ggplot(aes(x = x, y = response_count, color=z)) + geom_point() + labs(title='relation between x, z and the count response')
And to explore the relation between x, z and the response_binary I turn to a lowess plot visualization taken from chapter 12 in Frank Harrell's Regression Modeling Strategies.
df %>% ggplot(aes(x = x, y = response_binary, group=z, color=z)) + histSpikeg(response_binary~x*z, lowess=T, data=df) +
labs(title='Estimated lowess for the relation between x, z and the\nproportion/probability of the binary response')
Now, to explore the relationship between x + z and each of response_count and response_binary, I suggest you inspect the models' partial effect plots, since the coefficients cannot be directly compared.
First we build two simple models. A negative binomial model nb_md for the response_count variable, and a logistic regression logi_md for the response_binary.
nb_md <- glm.nb(response_count ~ x*z, data=df, link="log")
summary(nb_md)
#>
#> Call:
#> glm.nb(formula = response_count ~ x * z, data = df, link = "log",
#> init.theta = 8.292271032)
#>
#> Deviance Residuals:
#> Min 1Q Median 3Q Max
#> -3.1667 -0.7803 -0.1371 0.5643 2.7400
#>
#> Coefficients:
#> Estimate Std. Error z value Pr(>|z|)
#> (Intercept) -0.36614 0.08754 -4.183 2.88e-05 ***
#> x 3.37600 0.12373 27.285 < 2e-16 ***
#> zTRUE 0.98352 0.13262 7.416 1.21e-13 ***
#> x:zTRUE -0.08763 0.19660 -0.446 0.656
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> (Dispersion parameter for Negative Binomial(8.2923) family taken to be 1)
#>
#> Null deviance: 2448.32 on 499 degrees of freedom
#> Residual deviance: 548.59 on 496 degrees of freedom
#> AIC: 2455.9
#>
#> Number of Fisher Scoring iterations: 1
#>
#>
#> Theta: 8.29
#> Std. Err.: 1.24
#>
#> 2 x log-likelihood: -2445.94
logi_md <- glm(response_binary ~ x + z, data = df, family = 'binomial')
summary(logi_md)
#>
#> Call:
#> glm(formula = response_binary ~ x + z, family = "binomial", data = df)
#>
#> Deviance Residuals:
#> Min 1Q Median 3Q Max
#> -1.639 -1.248 0.829 1.031 1.451
#>
#> Coefficients:
#> Estimate Std. Error z value Pr(>|z|)
#> (Intercept) 0.03917 0.19276 0.203 0.838968
#> x 1.00490 0.33105 3.035 0.002401 **
#> zTRUE -0.67308 0.20188 -3.334 0.000856 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> (Dispersion parameter for binomial family taken to be 1)
#>
#> Null deviance: 679.64 on 499 degrees of freedom
#> Residual deviance: 659.00 on 497 degrees of freedom
#> AIC: 665
#>
#> Number of Fisher Scoring iterations: 4
The models could be better built, in order to better capture the non-linearity we can see in the exploration plots.
However, I build the partial effects plots and plot the predictions together with their standard errors. From these we can get an idea of how the variable x can have a different effect on response_count and on response_binary.
Here are some observations we can make:
the effect of variable x on response_count greatly increases for larger values of x, and is greater when z == T
the effect of variable x on response_binary tends to be the same, regardless of z in lower ranges. But it tends to be much larger with z == F if x is large.
partial_effects_frame <-
tibble(x = seq(0,1, 0.01)) %>%
tidyr::crossing(z=c(T,F))
bind_rows(
augment(nb_md, newdata= partial_effects_frame, type.predict='response') %>% mutate(model='nb_md'),
augment(logi_md, newdata= partial_effects_frame, type.predict='response') %>% mutate(model='logi_md')
) %>%
ggplot(aes(x, .fitted, group=z)) +
geom_ribbon(aes(ymin= .fitted - .se.fit, ymax= .fitted + .se.fit), alpha=0.2) +
geom_line(aes(group=z, color=z)) +
facet_wrap(~model, scales = 'free', ncol=1) +
labs(title = 'Partial effects of the variable x interacted with z')
Now this is just a simple example of what I meant by "a little more effort".
Changing base values, exploring the plots, and working with tables might help you study the effects on the different response variables.
Created on 2019-11-22 by the reprex package (v0.3.0)
50,310 | How to account for the no. of parameters in the Multihead self-Attention layer of BERT | After doing the multi-head attention, you have the 12 heads' context vectors, which together have dimension 768 once concatenated, and you need to project them back to the model dimension; this output projection gives you another 768 × 768 + 768 parameters. In addition, there is a layer normalization with 2 × 768 parameters.
50,311 | How to account for the no. of parameters in the Multihead self-Attention layer of BERT | I have found the answer after digging into a pytorch implementation and a few other blogs. Here's the explanation for the number of parameters in the Transformer cell (only the multi-headed self-attention part):
We can see the inside of the Transformer cell in the picture (see the Img Ref link below). The input vector is transformed in multiple heads, the self-attention operation is applied, all head outputs are concatenated, and then a fully connected dense layer is applied. In terms of dimensions, here's how it looks:
The input vector of dimension d_model (X in the image) gets multiplied by three matrices WQ, WK, WV, 12 (= attention heads, or A) times, giving A triples of vectors (Q, K, V), i.e. 3A vectors in total. These vectors (Z0 to Z7 in the image) are each of length d_model/A. So the dimension of each of these matrices is d_model * d_model/A and we have 3 * A such matrices.
Including the bias for each of the Q, K, V matrices, total weights till now = d_model * d_model/A * 3A + d_model * 3. By this point, we have the Z0 to Zi vectors from the above image. These are then concatenated and passed through the dense layer W0, which has dimension d_model * d_model + d_model (with bias).
So total dimension of transformer cell:
A * (d_model * d_model/A) * 3 + 3*d_model + (d_model * d_model + d_model). For BERT base, the values are A = 12, d_model = 768. So total parameters = 12 * (768 * 768/12) * 3 + 3*768 + 768*768 + 768 = 2,362,368
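A quick sanity check of that arithmetic (a sketch, not part of the original answer):
A <- 12; d_model <- 768
qkv <- 3 * A * (d_model * d_model / A) + 3 * d_model  # W_Q, W_K, W_V weights + biases
out <- d_model * d_model + d_model                    # output projection W0 + bias
qkv + out                                             # 2362368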
Edit: The output of this will be a vector of dimension d_model. This then gets a residual connection to the input itself, and the result is passed into another Dense (feed-forward) block, where we get two matrices of dimension (d_model * d_feed_forward). Those weights are not part of this calculation.
Img Ref: http://jalammar.github.io/illustrated-transformer/
50,312 | Bootstrapping with more than one random effect | I think in this case it is recommended to do a parametric bootstrap: the mixed effect model gives you an estimate of the variance of the effects of words and subjects, so you can generate new random deviates from their distribution (thus without actually resampling the estimated values). It is not difficult to write the code yourself, but if you used the lme4 package to estimate the model then I think you should be able to do it via the function bootMer. If I understood your problem correctly, you could just write a wrapper function that computes predicted values $X^*$ from the bootstrapped model and calculates the function $f(X^*)$, and pass it to bootMer. Once you have a bootstrapped distribution for $f(X^*)$ you can use any method to calculate a confidence interval (e.g. percentile). If you are interested in BCa intervals I have some code that calculates that here (it is part of a small package that contains my frequently used custom functions, available here)
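A minimal sketch of that suggestion (not part of the original answer; the model formula and the names dat, rt, condition, subject, word, and the statistic inside f_of_pred are hypothetical placeholders):
library(lme4)
m <- lmer(rt ~ condition + (1 | subject) + (1 | word), data = dat)
f_of_pred <- function(fit) {
  p <- predict(fit)                 # predicted values from the refitted (bootstrapped) model
  mean(p)                           # stand-in for whatever f(X*) you actually need
}
b <- bootMer(m, FUN = f_of_pred, nsim = 1000, type = "parametric", use.u = FALSE)
quantile(b$t, c(0.025, 0.975))      # percentile confidence interval for f(X*)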
50,313 | Shouldn't we sample from the output of variational auto-encoder? | There are two terms in the ELBO:
$$E_{z \sim q}[\log P(x|z)] - \text{KL}(q(z)||p(z))$$
We estimate the first term by sampling a single $z$ and computing $\log P(x|z)$.
Since the VAE models $x|z \sim \mathcal{N}(\mu, \sigma^2)$, where $\mu = f(z;\theta)$ for some decoder neural network $f$, and the log of the Gaussian density is $-(\mu - x)^2$ (up to constant factors and scaling), this squared "reconstruction" loss is correct.
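To make that explicit (a sketch assuming a fixed isotropic decoder variance $\sigma^2$ and data dimension $D$, which the original answer leaves implicit):
$$\log \mathcal{N}(x \mid \mu, \sigma^2 I) = -\frac{1}{2\sigma^2}\lVert x - \mu\rVert^2 - \frac{D}{2}\log(2\pi\sigma^2),$$
so with $\sigma^2$ fixed, maximizing $\log P(x|z)$ over $\mu = f(z;\theta)$ is exactly minimizing the squared reconstruction error.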
And we should sample from that output to calculate the reconstruction loss.
Not quite. You should sample from that output if you want to sample from the distribution modeled by the VAE, but we have shown here that the "reconstruction loss" between $\mu$ and $x$ is the correct loss to use.
$$E_{z \sim q}[\log P(x|z)] - \text{KL}(q(z)||p(z))$$
We estimate the first term by sampling a single $z$ and computing $\log P(x|z)$.
Since the VAE models $x|z \sim \ | Shouldn't we sample from the output of variational auto-encoder?
There are two terms in the ELBO:
$$E_{z \sim q}[\log P(x|z)] - \text{KL}(q(z)||p(z))$$
We estimate the first term by sampling a single $z$ and computing $\log P(x|z)$.
Since the VAE models $x|z \sim \mathcal{N}(\mu, \sigma^2)$, where $\mu = f(z;\theta)$ for some decoder neural network $f$, and the log of the gaussian density is $-(\mu - x)^2$ (up to some constant factors and scaling), therefore this squared "reconstruction" loss is correct.
And we should sample from that output to calculate the reconstruction loss.
Not quite. You should sample from that output if you want to sample from the distribution modeled by the VAE, but we have shown here that the "reconstruction loss" between $\mu$ and $x$ is the correct loss to use. | Shouldn't we sample from the output of variational auto-encoder?
There are two terms in the ELBO:
$$E_{z \sim q}[\log P(x|z)] - \text{KL}(q(z)||p(z))$$
We estimate the first term by sampling a single $z$ and computing $\log P(x|z)$.
Since the VAE models $x|z \sim \ |
50,314 | Probabilities in the Raven paradox | I don't think tracking changes to $\Pr(B|R)$ captures completely how the probability of the raven hypothesis changes.
"All ravens are black" means: for all things, if a thing has the predicate raven (R), then it has the predicate black (B).
So what is $\Pr(\forall x \,\ Rx \rightarrow Bx)$?
$\forall x \,\ Rx \rightarrow Bx $ is false if and only if $\exists x \,\ Rx\land \neg Bx$.
Therefore $\Pr(\forall x \,\ Rx \rightarrow Bx) = 1 - \Pr(\exists x \,\ Rx\land \neg Bx)$. This is the probability whose changes we need to track as we learn about the world.
It does not seem, in general, to be equal to $\Pr(B|R)$, which is what you've looked at. Instead it is $1 - (1- \Pr(B|R))\Pr(R)$. Rewrite the second term as $\left(1 - \Pr(B|R)\right)\frac{\Pr(BR)}{\Pr(B|R)}$. The conditional probability doesn't change, but because we've seen green frogs, the odds of seeing black frogs, green ravens and also black ravens have gone down. So $\Pr(BR)$ becomes smaller.
But then the term becomes smaller... and the probability that all ravens are black becomes greater.
Ceteris paribus, it's good for the contention that all ravens are black if ravens are rare, because there need to be ravens for it to have any chance of being wrong. That's not captured when focusing on $\Pr(B|R)$ alone and it seems to suffice to restore the paradoxical conclusion.
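A small numeric sketch of this point (not part of the original answer), assuming a uniform Dirichlet(1,1,1,1) prior over the four cells {BR, GR, BF, GF} and using posterior-mean cell probabilities after observing n green frogs:
prior <- c(BR = 1, GR = 1, BF = 1, GF = 1)            # Dirichlet pseudo-counts
for (n in c(0, 10, 100)) {
  post <- prior + c(0, 0, 0, n)                       # add n green-frog observations
  p    <- post / sum(post)
  pBgivenR <- p["BR"] / (p["BR"] + p["GR"])           # stays at 1/2
  pR       <- p["BR"] + p["GR"]
  cat(sprintf("n = %3d  P(B|R) = %.3f  1 - (1 - P(B|R))P(R) = %.3f\n",
              n, pBgivenR, 1 - (1 - pBgivenR) * pR))  # the second quantity increases with n
}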
All ravens are black means for all things, if a thing has the predicate raven (R) | Probabilities in the Raven paradox
I don't think tracking changes to $\Pr(B|R)$ captures completely how the probability of the raven hypothesis changes.
All ravens are black means for all things, if a thing has the predicate raven (R), then it has the predicate black (B).
So what is $\Pr(\forall x \,\ Rx \rightarrow Bx)$?
$\forall x \,\ Rx \rightarrow Bx $ is false if and only if $\exists x \,\ Rx\land \neg Bx$.
Therefore $\Pr(\forall x \,\ Rx \rightarrow Bx) = 1 - \Pr(\exists x \,\ Rx\land \neg Bx)$. This is the probability whose changes we need to track as we learn about the world.
It does not seem, in general, to be equal to $\Pr(B|R)$ which is what you've looked it. Instead it is $1 - (1- \Pr(B|R))\Pr(R)$. Rewrite the second term as $\left(1 - \Pr(B|R)\right)\frac{\Pr(BR)}{\Pr(B|R)}$. The conditional probability doesn't change, but because we've seen green frogs, the odds of seeing black frogs, green ravens and also black ravens have gone down. So $\Pr(BR)$ becomes smaller.
But then the term becomes smaller... and the probability that all ravens are black becomes greater.
Ceteris paribus, it's good for the contention that all ravens are black if ravens are rare, because there need to be ravens for it to have any chance of being wrong. That's not captured when focusing on $\Pr(B|R)$ alone and it seems to suffice to restore the paradoxical conclusion. | Probabilities in the Raven paradox
I don't think tracking changes to $\Pr(B|R)$ captures completely how the probability of the raven hypothesis changes.
All ravens are black means for all things, if a thing has the predicate raven (R) |
50,315 | Probabilities in the Raven paradox | You haven't really analyzed the setup of the raven paradox; or rather, you've analyzed an extremely constrained variant of it. You say:
We now observe a number of green frogs...
Since we did not make any other sightings...
You started from a uniform prior and observed a universe consisting only of green frogs. Of course your subjective probability $P(GF)$ is going to tend towards 1, and of course your subjective probabilities for the other three cases are going to tend towards zero. That's what I'd think too, if literally everything I'd seen since birth was a green frog. Since you've never observed either a black frog or a raven of any kind, you never had an opportunity to update your prior about $P(B|R)$ in any way, so of course it sticks stubbornly at $\frac{1}{2}$.
If you do your analysis again and allow for a non-zero number of observations of BF or BR, you'll get an answer closer to the standard Bayesian analysis you mentioned.
50,316 | Probabilities in the Raven paradox | So while the probability of raven-ness implying blackness did not increase
I started to write up an answer in which I noted that this was imprecise language, corrected it to "the probability of black-ness given raven-ness", but as I continued my answer, I noticed quite a bit of cognitive dissonance, and eventually gave up and deleted my answer. Now, thinking about a bit more, I think that this is not merely a bit of impreciseness to be "corrected" and moved on from, but the core of the issue.
There are several very different probabilities at play here. There's P(raven and black), P(not raven or not black), P(black|raven), and P(raven->black). The most direct interpretation of "the probability of raven-ness implying blackness" is P(raven -> black). Since "raven -> black" is equivalent to "all x are black or not raven", that translates to P($\forall$ x: x is black or not raven), but you seem to be conflating it with $\forall$ x: P(x is black or not raven), which is quite different. You've shown that P(black|raven) isn't changing. However, P(not raven or not black) and P(raven->black) are increasing. To have a counterexample to raven->black, we need to find something that is both a raven and green. By finding something that is not a raven, you have removed one opportunity for a counterexample. | Probabilities in the Raven paradox | So while the probability of raven-ness implying blackness did not increase
I started to write up an answer in which I noted that this was imprecise language, corrected it to "the probability of black | Probabilities in the Raven paradox
So while the probability of raven-ness implying blackness did not increase
I started to write up an answer in which I noted that this was imprecise language, corrected it to "the probability of black-ness given raven-ness", but as I continued my answer, I noticed quite a bit of cognitive dissonance, and eventually gave up and deleted my answer. Now, thinking about a bit more, I think that this is not merely a bit of impreciseness to be "corrected" and moved on from, but the core of the issue.
There are several very different probabilities at play here. There's P(raven and black), P(not raven or not black), P(black|raven), and P(raven->black). The most direct interpretation of "the probability of raven-ness implying blackness" is P(raven -> black). Since "raven -> black" is equivalent to "all x are black or not raven", that translates to P($\forall$ x: x is black or not raven), but you seem to be conflating it with $\forall$ x: P(x is black or not raven), which is quite different. You've shown that P(black|raven) isn't changing. However, P(not raven or not black) and P(raven->black) are increasing. To have a counterexample to raven->black, we need to find something that is both a raven and green. By finding something that is not a raven, you have removed one opportunity for a counterexample. | Probabilities in the Raven paradox
So while the probability of raven-ness implying blackness did not increase
I started to write up an answer in which I noted that this was imprecise language, corrected it to "the probability of black |
50,317 | How to prevent overfitting? [duplicate] | The main advice for dealing with it, usually is regularization. Is
there other practical advice to avoid overfitting?
I thought what you are actually asking is what is the relation between regularization and overfitting.
The answer is that the strategies designed to reduce overfitting or test error are known collectively as regularization. So I thought the short answer to your question is an emphatic "no".
And here are some regularization strategies listed in Chapter 7 of the Deep Learning book:
Parameter norm penalties
Norm penalties as constrained optimization
Dataset augmentation
Noise robustness
Semi-supervised learning
Multi-task learning
Early stopping
Parameter tying and parameter sharing
Sparse representation
Bagging and other ensemble methods
Dropout
Adversarial training
Tangent distance, tangent prop, and manifold tangent classifier | How to prevent overfitting? [duplicate] | The main advice for dealing with it, usually is regularization. Is
there other practical advice to avoid overfitting?
I thought what you are actually asking is what is the relation between regulari | How to prevent overfitting? [duplicate]
The main advice for dealing with it, usually is regularization. Is
there other practical advice to avoid overfitting?
I thought what you are actually asking is what is the relation between regularization and overfitting.
The answer is that the strategies designed to reduce overfitting or test error are known collectively as regularization. So I thought the short answer to your question is an emphatic "no".
And here are some regularization strategies listed in Chapter 7 of the Deep Learning book:
Parameter norm penalties
Norm penalties as constrained optimization
Dataset augmentation
Noise robustness
Semi-supervised learning
Multi-task learning
Early stopping
Parameter tying and parameter sharing
Sparse representation
Bagging and other ensemble methods
Dropout
Adversarial training
Tangent distance, tangent prop, and manifold tangent classifier | How to prevent overfitting? [duplicate]
The main advice for dealing with it, usually is regularization. Is
there other practical advice to avoid overfitting?
I thought what you are actually asking is what is the relation between regulari |
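As a concrete illustration of the first item in the list (parameter norm penalties), here is a minimal sketch assuming the glmnet package rather than any particular deep-learning framework:
# L2 (ridge) and L1 (lasso) penalties with cross-validated penalty strength
library(glmnet)
set.seed(1)
n <- 100; p <- 50
x <- matrix(rnorm(n * p), n, p)
y <- x[, 1] - 2 * x[, 2] + rnorm(n)       # only 2 of the 50 predictors matter
fit_ridge <- cv.glmnet(x, y, alpha = 0)   # alpha = 0: L2 penalty
fit_lasso <- cv.glmnet(x, y, alpha = 1)   # alpha = 1: L1 penalty
coef(fit_lasso, s = "lambda.1se")[1:5, ]  # shrunken / sparse coefficients
The penalty shrinks coefficients towards zero, trading a little bias for less variance, which is exactly the overfitting reduction the answer describes.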
50,318 | What is MultiOutputRegressor and how does it work? | I read that it can work as a trick to make single-output regressors like SVR support multioutput. You can read a little bit more over here
https://scikit-learn.org/stable/modules/multiclass.html | What is MultiOutputRegressor and how does it work? | I read that it can work as a trick to make single-output regressors like SVR support multioutput. You can read a little bit more over here
https://scikit-learn.org/stable/modules/multiclass.html | What is MultiOutputRegressor and how does it work?
I read that it can work as a trick to make single-output regressors like SVR support multioutput. You can read a little bit more over here
https://scikit-learn.org/stable/modules/multiclass.html | What is MultiOutputRegressor and how does it work?
I read that it can work as a trick to make single-output regressors like SVR support multioutput. You can read a little bit more over here
https://scikit-learn.org/stable/modules/multiclass.html |
50,319 | What is MultiOutputRegressor and how does it work? | Yes, from the documentation page you linked to:
This strategy consists of fitting one regressor per target.
and from the User Guide:
Multioutput regression support can be added to any regressor with MultiOutputRegressor. This strategy consists of fitting one regressor per target.
Since each target is represented by exactly one regressor it is possible to gain knowledge about the target by inspecting its corresponding regressor. As MultiOutputRegressor fits one regressor per target it can not take advantage of correlations between targets. | What is MultiOutputRegressor and how does it work? | Yes, from the documentation page you linked to:
This strategy consists of fitting one regressor per target.
and from the User Guide:
Multioutput regression support can be added to any regressor wit | What is MultiOutputRegressor and how does it work?
Yes, from the documentation page you linked to:
This strategy consists of fitting one regressor per target.
and from the User Guide:
Multioutput regression support can be added to any regressor with MultiOutputRegressor. This strategy consists of fitting one regressor per target.
Since each target is represented by exactly one regressor it is possible to gain knowledge about the target by inspecting its corresponding regressor. As MultiOutputRegressor fits one regressor per target it can not take advantage of correlations between targets. | What is MultiOutputRegressor and how does it work?
Yes, from the documentation page you linked to:
This strategy consists of fitting one regressor per target.
and from the User Guide:
Multioutput regression support can be added to any regressor wit |
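To make the "one regressor per target" strategy concrete in this document's usual language (base R, not scikit-learn code): each target column gets its own independently fitted model, which is why correlations between targets are ignored.
# Fit one model per target column and collect the per-target predictions
set.seed(1)
n <- 200
X <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
Y <- data.frame(y1 = X$x1 + rnorm(n),
                y2 = X$x1 - X$x2 + rnorm(n))
fits  <- lapply(Y, function(y) lm(y ~ x1 + x2, data = cbind(X, y = y)))
preds <- sapply(fits, predict, newdata = X)   # one column per target
head(preds)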
50,320 | How are the various guarantees provided to SVMs by Statistical Learning Theory affected by the Kernel Function | Some thoughts on your interesting question:
Good features are problem dependent. So it seems difficult (if possible) to incorporate feature engineering into any rigorous mathematical framework.
To me, pushing the problem of finding good features to finding good kernels is a way to separate the problem in two parts.
First, the mathematics (learning from good/noisy features), and second, the practical nature of your data (finding features from your observations - sometimes using an arbitrary off-the-shelf mapping, but sometimes using your knowledge and understanding of how the data was generated).
It is indeed very nice that neural networks can find their own feature mappings. However, this is an empirical observation; it is not the case that we mathematically understand how or why these features should be good.
You may call SLT "severely incomplete as a theory of inference", but our understanding of why neural networks should work is even more incomplete.
Finally, as you stated, the state of the art results are achieved by using various different architectures for neural networks.
While some neural networks achieve stellar performance, others perform poorly on the same datasets.
So, to me, the situation for SVM and neural networks is similar: your performance will be strongly influenced by the choice of kernel or the network.
However if you go for SVM, then at least we can mathematically understand something - the "learning" step ;-) | How are the various guarantees provided to SVMs by Statistical Learning Theory affected by the Kerne | Some thoughts on your interesting question:
Good features are problem dependent. So it seems difficult (if possible) to incorporate feature engineering into any rigorous mathematical framework.
To me, pu | How are the various guarantees provided to SVMs by Statistical Learning Theory affected by the Kernel Function
Some thoughts on your interesting question:
Good features are problem dependent. So it seems difficult (if possible) to incorporate feature engineering into any rigorous mathematical framework.
To me, pushing the problem of finding good features to finding good kernels is a way to separate the problem in two parts.
First, the mathematics (learning from good/noisy features), and second, the practical nature of your data (finding features from your observations - sometimes using an arbitrary off-the-shelf mapping, but sometimes using your knowledge and understanding of how the data was generated).
It is indeed very nice that neural networks can find their own feature mappings. However, this is an empirical observation; it is not the case that we mathematically understand how or why these features should be good.
You may call SLT "severely incomplete as a theory of inference", but our understanding of why neural networks should work is even more incomplete.
Finally, as you stated, the state of the art results are achieved by using various different architectures for neural networks.
While some neural networks achieve stellar performance, others perform poorly on the same datasets.
So, to me, the situation for SVM and neural networks is similar: your performance will be strongly influenced by the choice of kernel or the network.
However if you go for SVM, then at least we can mathematically understand something - the "learning" step ;-) | How are the various guarantees provided to SVMs by Statistical Learning Theory affected by the Kerne
Some thoughts on your interesting question:
Good features are problem dependent. So it seems difficult (if possible) to incorporate feature engineering into any rigorous mathematical framework.
To me, pu |
50,321 | How are the various guarantees provided to SVMs by Statistical Learning Theory affected by the Kernel Function | A lot of the theoretical underpinnings of the SVM are based on the linear maximum margin classifier. However, the advantage of a kernel is that the SVM can be viewed as a linear maximum margin classifier that is constructed in a feature space that is implicitly defined by the kernel. Imposing a fixed kernel is equivalent to a fixed transformation of the data, which is equivalent to just solving some other fixed classification problem. This means that any theory that holds for a linear classifier also holds for a fixed kernel.
However, in most practical cases, we don't use a fixed kernel. In general, the kernel has some hyper-parameters that we also learn from the data, e.g. via cross-validation. As soon as we tune the kernel parameters, we have immediately invalidated a lot of the theory on which it is based. Theoretical results become much harder to obtain as soon as you include
learning the kernel.
Having said which (IMHO) the main reason that the SVM works so well is that it really encourages us to think about regularisation and doing something (SRM) about over-fitting. That doesn't really rely too heavily on the theoretical justification. | How are the various guarantees provided to SVMs by Statistical Learning Theory affected by the Kerne | A lot of the theoretical underpinnings of the SVM are based on the linear maximum margin classifier. However, the advantage of a kernel is that the SVM can be viewed as a linear maximum margin classi | How are the various guarantees provided to SVMs by Statistical Learning Theory affected by the Kernel Function
A lot of the theoretical underpinnings of the SVM are based on the linear maximum margin classifier. However, the advantage of a kernel is that the SVM can be viewed as a linear maximum margin classifier that is constructed in a feature space that is implicitly defined by the kernel. Imposing a fixed kernel is equivalent to a fixed transformation of the data, which is equivalent to just solving some other fixed classification problem. This means that any theory that holds for a linear classifier also holds for a fixed kernel.
However, in most practical cases, we don't use a fixed kernel. In general, the kernel has some hyper-parameters that we also learn from the data, e.g. via cross-validation. As soon as we tune the kernel parameters, we have immediately invalidated a lot of the theory on which it is based. Theoretical results become much harder to obtain as soon as you include
learning the kernel.
Having said which (IMHO) the main reason that the SVM works so well is that it really encourages us to think about regularisation and doing something (SRM) about over-fitting. That doesn't really rely too heavily on the theoretical justification. | How are the various guarantees provided to SVMs by Statistical Learning Theory affected by the Kerne
A lot of the theoretical underpinnings of the SVM are based on the linear maximum margin classifier. However, the advantage of a kernel is that the SVM can be viewed as a linear maximum margin classi |
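A short sketch of the "learning the kernel" step discussed above, assuming the e1071 package: once the RBF kernel's gamma and the cost are chosen by cross-validation, the kernel is no longer fixed in advance.
library(e1071)
set.seed(1)
n <- 200
x <- matrix(rnorm(n * 2), n, 2)
y <- factor(ifelse(x[, 1]^2 + x[, 2]^2 > 1, "out", "in"))
tuned <- tune.svm(x, y, gamma = 10^(-2:1), cost = 10^(0:2))  # CV over kernel parameter and cost
tuned$best.parameters   # data-dependent choice, so fixed-kernel theory no longer applies directly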
50,322 | Interpretation of confidence interval in Bayesian terms | A non-mathematical answer:
There are a lot of procedures that lead to the same answer but have completely different underlying mechanisms or operations.
One simple example would be to compare the median with the mean. Both rely on different operations and both have quite different interpretations, but in a lot of cases the answer is exactly the same. However, this does not mean that you can use the interpretation of the median when you explicitly report the mean and vice versa.
The same goes for Confidence Intervals and Credible Intervals. | Interpretation of confidence interval in Bayesian terms | A non-mathematical answer:
There are a lot of procedures that lead to the same answer but have completely different underlying mechanisms or operations.
One simple example would be to compare the me | Interpretation of confidence interval in Bayesian terms
A non-mathematical answer:
There are a lot of procedures that lead to the same answer but have completely different underlying mechanisms or operations.
One simple example would be to compare the median with the mean. Both rely on different operations and both have quite different interpretations, but in a lot of cases the answer is exactly the same. However, this does not mean that you can use the interpretation of the median when you explicitly report the mean and vice versa.
The same goes for Confidence Intervals and Credible Intervals. | Interpretation of confidence interval in Bayesian terms
A non-mathematical answer:
There are a lot of procedures that lead to the same answer but have completely different underlying mechanisms or operations.
One simple example would be to compare the me |
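A small numerical illustration of "same numbers, different interpretation" (a standard textbook case, not taken from the original answer): for a normal mean with known variance and a flat prior, the 95% confidence interval and the 95% credible interval coincide.
set.seed(1)
x  <- rnorm(30, mean = 5, sd = 2)                        # sd assumed known
se <- 2 / sqrt(length(x))
ci   <- mean(x) + c(-1, 1) * qnorm(0.975) * se           # frequentist 95% CI
cred <- qnorm(c(0.025, 0.975), mean = mean(x), sd = se)  # flat-prior 95% credible interval
rbind(confidence = ci, credible = cred)                  # identical endpoints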
50,323 | What is the distribution of the difference of two iid noncentral Student t variates | Looks like I am a little late. Anyway, as per Owen (D.B. Owen, “A Survey of Properties and Applications of the Noncentral t distribution”, Technometrics 10 (1968) 445-478), if x is noncentral t distributed and $\nu > 2$, then
$$var[x] = \frac{\nu}{\nu-2}+\delta^2[\frac{\nu}{\nu-2}-\frac{\nu}{2}\frac{\Gamma^2((\nu-1)/2)}{\Gamma^2(\nu/2)}]$$
where $\nu = df$ and $\delta = NCT$ is the noncentrality parameter.
Using $\nu = 10$ and $\delta = 5$, var[x] = 3.1386 so the variance of the difference of two of these is 6.2773. I generated $10^7$ of these differences and binned them into a histogram, shown below. The variance of the $10^7$ differences was 6.2779. Unfortunately, I have no idea what function the histogram approximates. | What is the distribution of the difference of two iid noncentral Student t variates | Looks like I am a little late. Anyway, as per Owen (D.B. Owen, “A Survey of Properties and Applications of the Noncentral t distribution”, Technometrics 10 (1968) 445-478), if x is noncentral t distr | What is the distribution of the difference of two iid noncentral Student t variates
Looks like I am a little late. Anyway, as per Owen (D.B. Owen, “A Survey of Properties and Applications of the Noncentral t distribution”, Technometrics 10 (1968) 445-478), if x is noncentral t distributed and $\nu > 2$, then
$$var[x] = \frac{\nu}{\nu-2}+\delta^2[\frac{\nu}{\nu-2}-\frac{\nu}{2}\frac{\Gamma^2((\nu-1)/2)}{\Gamma^2(\nu/2)}]$$
where $\nu = df$ and $\delta = NCT$ is the noncentrality parameter.
Using $\nu = 10$ and $\delta = 5$, var[x] = 3.1386 so the variance of the difference of two of these is 6.2773. I generated $10^7$ of these differences and binned them into a histogram, shown below. The variance of the $10^7$ differences was 6.2779. Unfortunately, I have no idea what function the histogram approximates. | What is the distribution of the difference of two iid noncentral Student t variates
Looks like I am a little late. Anyway, as per Owen (D.B. Owen, “A Survey of Properties and Applications of the Noncentral t distribution”, Technometrics 10 (1968) 445-478), if x is noncentral t distr |
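A quick check of Owen's variance formula quoted above, using R's rt(), which accepts a non-centrality parameter ncp; the values are the ones used in the answer, $\nu = 10$ and $\delta = 5$.
nu <- 10; delta <- 5
v <- nu/(nu - 2) + delta^2 * (nu/(nu - 2) - (nu/2) * gamma((nu - 1)/2)^2 / gamma(nu/2)^2)
set.seed(1)
x <- rt(1e6, df = nu, ncp = delta)
d <- rt(1e6, df = nu, ncp = delta) - rt(1e6, df = nu, ncp = delta)
c(formula = v, sim_var = var(x), sim_var_diff = var(d))  # roughly 3.14, 3.14 and 6.28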
50,324 | What is the distribution of the difference of two iid noncentral Student t variates | Since you use R and don't need an exact solution, you may find the distr package for R useful, at least for exploring.
For fixed degrees of freedom and non-centrality parameter you can start exploring with code like:
library(distr)
d1 <- Td(df=10, ncp=5)
d2 <- Td(df=10, ncp=5)
plot(d1)
dd <- d1 - d2
plot(dd)
I am not sure how to incorporate the non-centrality depending on x. | What is the distribution of the difference of two iid noncentral Student t variates | Since you use R and don't need an exact solution, you may find the distr package for R useful, at least for exploring.
For fixed degrees of freedom and non-centrality parameter you can start exploring | What is the distribution of the difference of two iid noncentral Student t variates
Since you use R and don't need an exact solution, you may find the distr package for R useful, at least for exploring.
For fixed degrees of freedom and non-centrality parameter you can start exploring with code like:
library(distr)
d1 <- Td(df=10, ncp=5)
d2 <- Td(df=10, ncp=5)
plot(d1)
dd <- d1 - d2
plot(dd)
I am not sure how to incorporate the non-centrality depending on x. | What is the distribution of the difference of two iid noncentral Student t variates
Since you use R and don't need an exact solution, you may find the distr package for R useful, at least for exploring.
For fixed degrees of freedom and non-centrality parameter you can start exploring |
50,325 | How do I check in practice if a posterior is proper? | The fundamental issue with improper posteriors $\mu$ is that Markov chains associated with them are either transient (reaching almost surely and definitely some infinite region of the state space) or null recurrent (revisiting past places but in an infinite time). In the latter case, there is a form of ergodic theorem, namely that for $g,h\in\mathcal L¹(\mu)$,
$$\dfrac{\sum_{t=1}^T g(\theta^t)}{\sum_{t=1}^T h(\theta^t)}\longrightarrow\dfrac{\int g(\theta)\,\text{d}\mu(\theta)}{\int h(\theta)\,\text{d}\mu(\theta)}$$
which means that some form of stability occurs and in consequence turns detection of infinite mass difficult, especially when the improper nature of the posterior occurs near a finite boundary. This happened for instance for an ANOVA model analysed in one of the first Gibbs sampling papers in 1990, namely that the Gibbs sampler did not produce detectable signals of the issue ...
Here is a toy example based on the improper target $\mu(\theta)=e^{-\theta}/\theta$ over $\Bbb R^*_+$:
targ=function(x) ifelse(x>0,1/x/exp(x),0)
T=1e6
mark=rep(1,T)
for (t in 2:T){
prop=rnorm(1,mark[t-1],.1)
mark[t]=ifelse(runif(1)<targ(prop)/targ(mark[t-1]),prop,mark[t-1])}
with an output that looks not that great but still moving around: | How do I check in practice if a posterior is proper? | The fundamental issue with improper posteriors $\mu$ is that Markov chains associated with them are either transient (reaching almost surely and definitely some infinite region of the state space) or | How do I check in practice if a posterior is proper?
The fundamental issue with improper posteriors $\mu$ is that Markov chains associated with them are either transient (reaching almost surely and definitely some infinite region of the state space) or null recurrent (revisiting past places but in an infinite time). In the latter case, there is a form of ergodic theorem, namely that for $g,h\in\mathcal L¹(\mu)$,
$$\dfrac{\sum_{t=1}^T g(\theta^t)}{\sum_{t=1}^T h(\theta^t)}\longrightarrow\dfrac{\int g(\theta)\,\text{d}\mu(\theta)}{\int h(\theta)\,\text{d}\mu(\theta)}$$
which means that some form of stability occurs and in consequence turns detection of infinite mass difficult, especially when the improper nature of the posterior occurs near a finite boundary. This happened for instance for an ANOVA model analysed in one of the first Gibbs sampling papers in 1990, namely that the Gibbs sampler did not produce detectable signals of the issue ...
Here is a toy example based on the improper target $\mu(\theta)=e^{-\theta}/\theta$ over $\Bbb R^*_+$:
targ=function(x) ifelse(x>0,1/x/exp(x),0)
T=1e6
mark=rep(1,T)
for (t in 2:T){
prop=rnorm(1,mark[t-1],.1)
mark[t]=ifelse(runif(1)<targ(prop)/targ(mark[t-1]),prop,mark[t-1])}
with an output that looks not that great but still moving around: | How do I check in practice if a posterior is proper?
The fundamental issue with improper posteriors $\mu$ is that Markov chains associated with them are either transient (reaching almost surely and definitely some infinite region of the state space) or |
50,326 | How to estimate confidence intervals for LC50 | The most functional way is to use a tweaked version of the dose.p function that can be found in Venables, W. N. and Ripley, B. D. (2002) Modern Applied Statistics with S. Springer.
The code below uses the Wald statistic for 95% CI
ec = 0.5
library(VGAM)
eta <- logit(ec)
beta <- coef(mod)[1:2]
ecx <- (eta - beta[1])/beta[2]
pd <- -cbind(1, ecx)/beta[2]
ff = as.matrix(vcov(mod)[1:2,1:2])
se <- sqrt(((pd %*% ff )* pd) %*% c(1, 1))
upper = (ecx+se*1.96)
lower = (ecx-se*1.96)
df1 = data.frame(ecx, lower, upper)
plot <- plot + geom_vline(xintercept = df1$ecx, linetype = "dashed")
plot <- plot + geom_vline(xintercept = df1$lower, linetype = "dashed")
plot <- plot + geom_vline(xintercept = df1$upper, linetype = "dashed")
plot | How to estimate confidence intervals for LC50 | The most functional way is to use a tweaked version of the dose.p function that can be found in Venables, W. N. and Ripley, B. D. (2002) Modern Applied Statistics with S. Springer.
The code below uses the Wal | How to estimate confidence intervals for LC50
The most functional way is to use a tweaked version of the dose.p function that can be found in Venables, W. N. and Ripley, B. D. (2002) Modern Applied Statistics with S. Springer.
The code below uses the Wald statistic for 95% CI
ec = 0.5
library(VGAM)
eta <- logit(ec)
beta <- coef(mod)[1:2]
ecx <- (eta - beta[1])/beta[2]
pd <- -cbind(1, ecx)/beta[2]
ff = as.matrix(vcov(mod)[1:2,1:2])
se <- sqrt(((pd %*% ff )* pd) %*% c(1, 1))
upper = (ecx+se*1.96)
lower = (ecx-se*1.96)
df1 = data.frame(ecx, lower, upper)
plot <- plot + geom_vline(xintercept = df1$ecx, linetype = "dashed")
plot <- plot + geom_vline(xintercept = df1$lower, linetype = "dashed")
plot <- plot + geom_vline(xintercept = df1$upper, linetype = "dashed")
plot | How to estimate confidence intervals for LC50
The most functional way is to use a tweaked version of the dose.p function that can be found in Venables, W. N. and Ripley, B. D. (2002) Modern Applied Statistics with S. Springer.
The code below uses the Wal |
50,327 | How to estimate confidence intervals for LC50 | Let's go back to first principles here to see what you are trying to estimate. The GLM with the binomial family and the logit link function is just the logistic regression model, so we can use the regression equation for that model. Letting $p \equiv \mathbb{P}(Y=1)$ be the probability of a positive response outcome (here the death of the organism), your model is based on the model equation:
$$\log \Big( \frac{p}{1-p} \Big) = \beta_0 + \beta_1 x.$$
Setting $p = \tfrac{1}{2}$ gives the corresponding explanatory value (here the LC50 concentration):
$$x_* \equiv \frac{\log(\tfrac{1}{2}/\tfrac{1}{2}) - \beta_0}{\beta_1}
= \frac{\log(1) - \beta_0}{\beta_1}
= - \frac{\beta_0}{\beta_1}.$$
So, you are effectively trying to find a confidence interval for the ratio of the true coefficients in the logistic regression. Usually we approximate the joint distribution of the estimated coefficients by a normal distribution (by appeal to the central limit theorem for large $n$) and so we are then trying to find a confidence interval for the true ratio based on observed estimators that are assumed to be jointly normally distributed around the true values (with non-zero correlation). (The estimated variance-covariance matrix for the estimators can be found using the vcov command on your model.)
The other answer here already shows you how you can use bootstrapping to get a confidence interval in this case, but there are a range of analytic approximations you can use if you prefer; for some examples of analytic forms for this type of confidence interval, see e.g., Malley (1982), Shanmugalingam (1982), and Dunlap and Silver (1986). | How to estimate confidence intervals for LC50 | Let's go back to first principles here to see what you are trying to estimate. The GLM with the binomial family and the logit link function is just the logistic regression model, so we can use the re | How to estimate confidence intervals for LC50
Let's go back to first principles here to see what you are trying to estimate. The GLM with the binomial family and the logit link function is just the logistic regression model, so we can use the regression equation for that model. Letting $p \equiv \mathbb{P}(Y=1)$ be the probability of a positive response outcome (here the death of the organism), your model is based on the model equation:
$$\log \Big( \frac{p}{1-p} \Big) = \beta_0 + \beta_1 x.$$
Setting $p = \tfrac{1}{2}$ gives the corresponding explanatory value (here the LC50 concentration):
$$x_* \equiv \frac{\log(\tfrac{1}{2}/\tfrac{1}{2}) - \beta_0}{\beta_1}
= \frac{\log(1) - \beta_0}{\beta_1}
= - \frac{\beta_0}{\beta_1}.$$
So, you are effectively trying to find a confidence interval for the ratio of the true coefficients in the logistic regression. Usually we approximate the joint distribution of the estimated coefficients by a normal distribution (by appeal to the central limit theorem for large $n$) and so we are then trying to find a confidence interval for the true ratio based on observed estimators that are assumed to be jointly normally distributed around the true values (with non-zero correlation). (The estimated variance-covariance matrix for the estimators can be found using the vcov command on your model.)
The other answer here already shows you how you can use bootstrapping to get a confidence interval in this case, but there are a range of analytic approximations you can use if you prefer; for some examples of analytic forms for this type of confidence interval, see e.g., Malley (1982), Shanmugalingam (1982), and Dunlap and Silver (1986). | How to estimate confidence intervals for LC50
Let's go back to first principles here to see what you are trying to estimate. The GLM with the binomial family and the logit link function is just the logistic regression model, so we can use the re |
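A hedged sketch of a simulation-based alternative, assuming a fitted binomial GLM called mod as in the other answer: draw coefficient vectors from the normal approximation $N(\hat\beta,\hat\Sigma)$ and take percentiles of $-\beta_0/\beta_1$.
library(MASS)                                   # for mvrnorm
set.seed(1)
draws <- mvrnorm(10000, mu = coef(mod)[1:2], Sigma = vcov(mod)[1:2, 1:2])
lc50  <- -draws[, 1] / draws[, 2]               # -beta0/beta1 for each simulated coefficient pair
quantile(lc50, c(0.025, 0.5, 0.975))            # percentile interval for the LC50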
50,328 | Interpreting interaction term on highly correlated variables | I agree that in the case of perfect collinearity the interaction is just the square and it is possible to have main effects that are not significant but a significant interaction.
If you had perfect collinearity then one approach is to add some small random error to one of the variables, or you could combine them, if this makes sense in your context.
Even without perfect collinearity issues I don't know if we can infer that the interaction is really measuring the linear combination of both variables.
It is, that's exactly what it does. | Interpreting interaction term on highly correlated variables | I agree that in the case of perfect collinearity the interaction is just the square and it is possible to have main effects that are not significant but a significant interaction.
If you had perfect collin | Interpreting interaction term on highly correlated variables
I agree that in the case of perfect collinearity the interaction is just the square and it is possible to have main effects that are not significant but a significant interaction.
If you had perfect collinearity then one approach is to add some small random error to one of the variables, or you could combine them, if this makes sense in your context.
Even without perfect collinearity issues I don't know if we can infer that the interaction is really measuring the linear combination of both variables.
It is, that's exactly what it does. | Interpreting interaction term on highly correlated variables
I agree that in the case of perfect collinearity the interaction is just the square and it is possible to have main effects that are not significant but a significant interaction.
If you had perfect collin |
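A tiny simulated demonstration of the point, with made-up data: when x2 is almost a copy of x1, the interaction x1:x2 is essentially x1^2.
set.seed(1)
n  <- 200
x1 <- rnorm(n)
x2 <- x1 + rnorm(n, sd = 0.01)        # nearly perfect collinearity
y  <- 1 + x1^2 + rnorm(n)             # true effect is quadratic
coef(lm(y ~ x1 * x2))                 # the x1:x2 term absorbs the x1^2 effect
coef(lm(y ~ x1 + I(x1^2)))            # essentially the same fit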
50,329 | How does the celebrated result about the diffusion limit of the Random Walk Metroplis-Hastings algorithm help us to find the optimal scaling | (though, it's still not clear to me if we need additional assumptions on $f$ to ensure that $U_t$ weakly converges to $f$ and I would be happy about any comment related to that).
This concerns the convergence of the continuous-time Langevin diffusion to its invariant measure $f$. The paper assumes that $f \in C^2$, as well as certain moment conditions (A1, A2) on $f'/f, f''/f$, and I believe these are sufficient for convergence to the invariant measure.
Essentially, the key ingredients are to check that the diffusion is well-defined for all times (i.e. is non-explosive), irreducible (i.e. doesn't get stuck in some subset), and well-confined (i.e. doesn't make excursions off to infinity). The first condition corresponds to the roughness of $f$, the second to the support of $f$ being well-connected, and the third to the tails of $f$ decaying sufficiently fast. Further details can be found in this paper, see e.g. Theorem 2.1.
However, why does this mean that this choice for $\ell$ is optimal for the Metropolis-Hastings algorithm? First of all, it is assumed that $X^{(d)}_0$ is distributed according to $\pi_d$. But this means we start in stationarity. If I got things right the "optimality" we're searching for is (besides other metrics) with respect to the convergence of the total variation distance of the distribution $\mathcal L(X^{(d)}_n)$ and $\pi_d$. But if we start in stationarity, that distance is $0$.
Sort of. There are two notions of convergence which one could consider in this setting: convergence to stationarity, and mixing at stationarity. The paper is (in some sense) taking the viewpoint that, once we reach stationarity, we would like our Markov chain to decorrelate as quickly as possible, as this would lead to higher effective sample sizes, etc. The calculations in this paper thus aim to be optimal in this sense.
The more general case (i.e. starting out of stationarity) is more complicated, is treated in this paper and this paper.
My next problem is that the process $U^{(d)}$ is not the chain generated by the Metropolis-Hastings algorithm. It is "speeded up in time and shrinked in space". While I see that this is necessary to obtain the (nontrivial) diffusion limit, I don't understand why we're able to draw conclusions about the original chain.
Actually, it is not "shrinked in space", only "speeded up in time". One makes smaller moves at each step, but the chain itself is not rescaled in space.
The argument is roughly as follows: in high dimensions, you can approximate the RWMH Markov Chain $(X_n)_{n = 0, 1, \ldots}$, with step-size $\sigma_d^2 = \ell^2 / (d-1)$, by the continuous-time diffusion
$${\rm d}U_t=\frac{h(\ell)}2g'(U_t){\rm d}t+\sqrt{h(\ell)}{\rm d}W_t,$$
in the sense that $X_n \approx U_{n/d}$ (in law).
You would like $X_n$ to decorrelate as quickly as possible around the space, but you cannot optimise this directly. However, it is possible to optimise the speed at which $U_t$ decorrelates, by maximising $h(\ell)$. The hope is then that making the corresponding $U_t$ better will make the chain of interest, $X_n$, better as well. See this paper for a more detailed account of this argument. | How does the celebrated result about the diffusion limit of the Random Walk Metroplis-Hastings algor | (though, it's still not clear to me if we need additional assumptions on $f$ to ensure that $U_t$ weakly converges to $f$ and I would be happy about any comment related to that).
This concerns the co | How does the celebrated result about the diffusion limit of the Random Walk Metroplis-Hastings algorithm help us to find the optimal scaling
(though, it's still not clear to me if we need additional assumptions on $f$ to ensure that $U_t$ weakly converges to $f$ and I would be happy about any comment related to that).
This concerns the convergence of the continuous-time Langevin diffusion to its invariant measure $f$. The paper assumes that $f \in C^2$, as well as certain moment conditions (A1, A2) on $f'/f, f''/f$, and I believe these are sufficient for convergence to the invariant measure.
Essentially, the key ingredients are to check that the diffusion is well-defined for all times (i.e. is non-explosive), irreducible (i.e. doesn't get stuck in some subset), and well-confined (i.e. doesn't make excursions off to infinity). The first condition corresponds to the roughness of $f$, the second to the support of $f$ being well-connected, and the third to the tails of $f$ decaying sufficiently fast. Further details can be found in this paper, see e.g. Theorem 2.1.
However, why does this mean that this choice for $\ell$ is optimal for the Metropolis-Hastings algorithm? First of all, it is assumed that $X^{(d)}_0$ is distributed according to $\pi_d$. But this means we start in stationarity. If I got things right the "optimality" we're searching for is (besides other metrics) with respect to the convergence of the total variation distance of the distribution $\mathcal L(X^{(d)}_n)$ and $\pi_d$. But if we start in stationarity, that distance is $0$.
Sort of. There are two notions of convergence which one could consider in this setting: convergence to stationarity, and mixing at stationarity. The paper is (in some sense) taking the viewpoint that, once we reach stationarity, we would like our Markov chain to decorrelate as quickly as possible, as this would lead to higher effective sample sizes, etc. The calculations in this paper thus aim to be optimal in this sense.
The more general case (i.e. starting out of stationarity) is more complicated, is treated in this paper and this paper.
My next problem is that the process $U^{(d)}$ is not the chain generated by the Metropolis-Hastings algorithm. It is "speeded up in time and shrinked in space". While I see that this is necessary to obtain the (nontrivial) diffusion limit, I don't understand why we're able to draw conclusions about the original chain.
Actually, it is not "shrinked in space", only "speeded up in time". One makes smaller moves at each step, but the chain itself is not rescaled in space.
The argument is roughly as follows: in high dimensions, you can approximate the RWMH Markov Chain $(X_n)_{n = 0, 1, \ldots}$, with step-size $\sigma_d^2 = \ell^2 / (d-1)$, by the continuous-time diffusion
$${\rm d}U_t=\frac{h(\ell)}2g'(U_t){\rm d}t+\sqrt{h(\ell)}{\rm d}W_t,$$
in the sense that $X_n \approx U_{n/d}$ (in law).
You would like $X_n$ to decorrelate as quickly as possible around the space, but you cannot optimise this directly. However, it is possible to optimise the speed at which $U_t$ decorrelates, by maximising $h(\ell)$. The hope is then that making the corresponding $U_t$ better will make the chain of interest, $X_n$, better as well. See this paper for a more detailed account of this argument. | How does the celebrated result about the diffusion limit of the Random Walk Metroplis-Hastings algor
(though, it's still not clear to me if we need additional assumptions on $f$ to ensure that $U_t$ weakly converges to $f$ and I would be happy about any comment related to that).
This concerns the co |
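A toy numerical sketch of the scaling heuristic (standard normal target; written for illustration, not taken from the paper): with proposal standard deviation $\ell/\sqrt{d}$, values of $\ell$ near 2.38 give an average acceptance rate close to 0.234.
rwm_accept <- function(ell, d = 50, n_iter = 5000) {
  x <- rnorm(d); acc <- 0                          # start in stationarity
  for (t in 1:n_iter) {
    prop <- x + rnorm(d, sd = ell / sqrt(d))
    if (log(runif(1)) < 0.5 * (sum(x^2) - sum(prop^2))) { x <- prop; acc <- acc + 1 }
  }
  acc / n_iter
}
set.seed(1)
sapply(c(1, 2.38, 5), rwm_accept)   # the middle value should give acceptance near 0.234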
50,330 | Use regression as coefficient in another regression | This just looks like a way to write interaction terms and polynomials (and is also not specifically related to time series regression). Multiplying out the brackets gives
$$RV_{t+1} = \alpha + \beta_0RV_t + \beta_1 RET\cdot RV_t + \beta_2 RV_t^2 + \beta_3 RW + \beta_4 RM
$$
Try something like
n <- 10
x1 <- rnorm(n)
x2 <- rnorm(n)
y <- rnorm(n)
lm(y~x1*x2+I(x1^2))
Output:
> lm(y~x1*x2+I(x1^2))
Call:
lm(formula = y ~ x1 * x2 + I(x1^2))
Coefficients:
(Intercept) x1 x2 I(x1^2) x1:x2
0.08859 0.93306 -0.03421 -0.66431 -0.18097 | Use regression as coefficient in another regression | This just looks like a way to write interaction terms and polynomials (and is also not specifically related to time series regression). Multiplying out the brackets gives
$$RV_{t+1} = \alpha + \beta_0 | Use regression as coefficient in another regression
This just looks like a way to write interaction terms and polynomials (and is also not specifically related to time series regression). Multiplying out the brackets gives
$$RV_{t+1} = \alpha + \beta_0RV_t + \beta_1 RET\cdot RV_t + \beta_2 RV_t^2 + \beta_3 RW + \beta_4 RM
$$
Try something like
n <- 10
x1 <- rnorm(n)
x2 <- rnorm(n)
y <- rnorm(n)
lm(y~x1*x2+I(x1^2))
Output:
> lm(y~x1*x2+I(x1^2))
Call:
lm(formula = y ~ x1 * x2 + I(x1^2))
Coefficients:
(Intercept) x1 x2 I(x1^2) x1:x2
0.08859 0.93306 -0.03421 -0.66431 -0.18097 | Use regression as coefficient in another regression
This just looks like a way to write interaction terms and polynomials (and is also not specifically related to time series regression). Multiplying out the brackets gives
$$RV_{t+1} = \alpha + \beta_0 |
50,331 | Keras LSTM Long Term Dependencies | It is option 1. LSTM will learn from the 10 samples.
If you like to include more history, obviously, you can increase the time step, or you can use LSTM with stateful=True. I have found stateful LSTMs tricky but here you can find more information about them. | Keras LSTM Long Term Dependencies | It is option 1. LSTM will learn from the 10 samples.
If you like to include more history, obviously, you can increase the time step, or you can use LSTM with stateful=True. I have found stateful LSTM | Keras LSTM Long Term Dependencies
It is option 1. LSTM will learn from the 10 samples.
If you like to include more history, obviously, you can increase the time step, or you can use LSTM with stateful=True. I have found stateful LSTMs tricky but here you can find more information about them. | Keras LSTM Long Term Dependencies
It is option 1. LSTM will learn from the 10 samples.
If you like to include more history, obviously, you can increase the time step, or you can use LSTM with stateful=True. I have found stateful LSTM |
50,332 | Why we do not accept the result of our simulation study as evidence of a limitation of one method | Simulation studies that show that it is great when the data generating model and the analysis model are the same are very common. What people really want to see is more general:
Model performing well when the data generating mechanism has all the complexity of real life. There is a lot of judgement here, but some other aspect of the data generating mechanism may have a much bigger impact than others. Simulations are actually great for exploring that, but are too often poorly done.
Don't just knock down a strawman, but all the reasonable / frequently used methods. E.g. adjustment for covariates might make omitting a random effect less important.
The differences in performance need to be striking enough that it truly matters in practice. A good example can also help here to illustrate that one can get strikingly different conclusions. | Why we do not accept the result of our simulation study as evidence of a limitation of one method | Simulation studies that show that it is great when the data generating model and the analysis model are the same are very common. What people really want to see is more general:
Model performing well | Why we do not accept the result of our simulation study as evidence of a limitation of one method
Simulation studies that show that it is great when the data generating model and the analysis model are the same are very common. What people really want to see is more general:
Model performing well when the data generating mechanism has all the complexity of real life. There is a lot of judgement here, but some other aspect of the data generating mechanism may have a much bigger impact than others. Simulations are actually great for exploring that, but are too often poorly done.
Don't just knock down a strawman, but all the reasonable / frequently used methods. E.g. adjustment for covariates might make omitting a random effect less important.
The differences in performance need to be striking enough that it truly matters in practice. A good example can also help here to illustrate that one can get strikingly different conclusions. | Why we do not accept the result of our simulation study as evidence of a limitation of one method
Simulation studies that show that it is great when the data generating model and the analysis model are the same are very common. What people really want to see is more general:
Model performing well |
50,333 | Maximum entropy distribution on the hypercube | The solution will be a normal distribution truncated to the interval $[0,1]$. The details are messy, and in practice some numerical work will be needed. The proof follows the proof in the unrestricted case, the differences occur first when we have to find the Lagrange multipliers. But note that the $\mu,\sigma^2$ parameters of the normal need not coincide with the same parameters from the restrictions. For there to be a solution we need that the restrictions satisfy $0\le\mu\le 1,\quad 0<\sigma^2\le \mu(1-\mu)$.
For the multivariate "box" case, exactly the same can be said. The restrictions on the parameters will be messy. | Maximum entropy distribution on the hypercube | The solution will be a normal distribution truncated to the interval $[0,1]$. The details are messy, and in practice some numerical work will be needed. The proof follows the proof in the unrestricte | Maximum entropy distribution on the hypercube
The solution will be a normal distribution truncated to the interval $[0,1]$. The details are messy, and in practice some numerical work will be needed. The proof follows the proof in the unrestricted case, the differences occur first when we have to find the Lagrange multipliers. But note that the $\mu,\sigma^2$ parameters of the normal need not coincide with the same parameters from the restrictions. For there to be a solution we need that the restrictions satisfy $0\le\mu\le 1,\quad 0<\sigma^2\le \mu(1-\mu)$.
For the multivariate "box" case, exactly the same can be said. The restrictions on the parameters will be messy. | Maximum entropy distribution on the hypercube
The solution will be a normal distribution truncated to the interval $[0,1]$. The details are messy, and in practice some numerical work will be needed. The proof follows the proof in the unrestricte |
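A numerical sketch of the moment-matching step, assuming the truncnorm package and made-up target moments: find the parameters of the untruncated normal whose [0,1]-truncated mean and variance hit the targets.
library(truncnorm)
target_mean <- 0.3; target_var <- 0.03          # must satisfy 0 < var <= mean*(1-mean)
obj <- function(par) {
  m <- etruncnorm(a = 0, b = 1, mean = par[1], sd = exp(par[2]))
  v <- vtruncnorm(a = 0, b = 1, mean = par[1], sd = exp(par[2]))
  (m - target_mean)^2 + (v - target_var)^2
}
fit <- optim(c(0.3, log(0.2)), obj)
c(mu = fit$par[1], sigma = exp(fit$par[2]))     # parameters of the normal before truncation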
50,334 | Modeling Time Series Sensor Data with Machine Learning Techniques? | Edit and TL;DR version: this could be treated as a mediation/moderator analysis problem, but that would still require an independent measurement to calibrate the device.
This sounds like a mediation/moderation analysis problem, not machine learning.
Let M1 be a model of the voltage under clean air conditions as a function of p, v and humidity. The deviance from M1 per se would not give you a concentration estimate. It would give you a probability that the gas is present and interfering with the sensor. A certain deviance (residual value) will not indicate the same concentration of the target gas for every p, v and humidity values because the way the gas affects the voltage varies with the other parameters. Similarly, going from let's say 2mV to 4mV of deviance does not necessarily imply that the concentration doubled - the scale might be non-linear and that scale itself might be influenced by your other variables. In other words, it's a good idea to look at the difference between the measured value and the value predicted by M1, but converting the residuals in gas concentration is not a 1:1 thing.
Another way to look at it which is more akin to the actual situation is to see the concentration as the independent variable, the sensor voltage as the dependent variable and p, t and hum as moderator variables. You'd need to induce different concentrations of gas and take measurements at various t, p and hum values for that to work though.
Here are some resources:
Alyssa Blair's chapter on mediation/moderator analysis
Datacamp course on the subject
Andrew Haye's book
This makes for a fun, almost philosophical problem to look at during the xmas vacation btw, so if you have a real or simulated dataset that you'd like to add to your question I'll take a look at it.
Epilogue
I showed this post and the data to a measurement specialist and an engineer who is also a specialist in measurement theory, and both said "get the suitcase with the calibration equipment". There's just no way around it. | Modeling Time Series Sensor Data with Machine Learning Techniques? | Edit and TL;DR version: this could be treated as a mediation/moderator analysis problem, but that would still require an independent measurement to calibrate the device.
This sounds like a mediation/m | Modeling Time Series Sensor Data with Machine Learning Techniques?
Edit and TL;DR version: this could be treated as a mediation/moderator analysis problem, but that would still require an independent measurement to calibrate the device.
This sounds like a mediation/moderation analysis problem, not machine learning.
Let M1 be a model of the voltage under clean air conditions as a function of p, v and humidity. The deviance from M1 per se would not give you a concentration estimate. It would give you a probability that the gas is present and interfering with the sensor. A certain deviance (residual value) will not indicate the same concentration of the target gas for every p, v and humidity values because the way the gas affects the voltage varies with the other parameters. Similarly, going from let's say 2mV to 4mV of deviance does not necessarily imply that the concentration doubled - the scale might be non-linear and that scale itself might be influenced by your other variables. In other words, it's a good idea to look at the difference between the measured value and the value predicted by M1, but converting the residuals in gas concentration is not a 1:1 thing.
Another way to look at it which is more akin to the actual situation is to see the concentration as the independent variable, the sensor voltage as the dependent variable and p, t and hum as moderator variables. You'd need to induce different concentrations of gas and take measurements at various t, p and hum values for that to work though.
Here are some resources:
Alyssa Blair's chapter on mediation/moderator analysis
Datacamp course on the subject
Andrew Haye's book
This makes for a fun, almost philosophical problem to look at during the xmas vacation btw, so if you have a real or simulated dataset that you'd like to add to your question I'll take a look at it.
Epilogue
I showed this post and the data to a measurement specialist and an engineer who is also a specialist in measurement theory, and both said "get the suitcase with the calibration equipment". There's just no way around it. | Modeling Time Series Sensor Data with Machine Learning Techniques?
Edit and TL;DR version: this could be treated as a mediation/moderator analysis problem, but that would still require an independent measurement to calibrate the device.
This sounds like a mediation/m |
50,335 | Build confidence intervals for random effects in intercept only models | When you are interested in predictions conditional on the random effects, to my view it is easier to work with the hierarchical formulation of the mixed model that has an intrinsically Bayesian flavor. In particular, in your specific case, you are interested in the mean of the Poisson model conditional on the random effects, i.e., $$\mu_i = \exp(\beta + b_i),$$ with $i$ denoting the group, $\beta$ the fixed effect intercept, and $b_i$ the random intercept. You can derive a confidence interval for $\mu_i$ by using the following simulation scheme:
Step I: Simulate a value $\theta^*$ from the approximate posterior distribution $\mathcal N(\hat\theta, \hat\Sigma)$, where $\hat\theta$ denotes the maximum likelihood estimates for $\beta$ and $\sigma_b$ with $\sigma_b$ denoting the standard deviation of the random intercepts term $b_i$, and $\hat\Sigma$ the variance of $\hat\theta$.
Step II: Simulate a value $b_i^*$ from the posterior distribution of the random effects $[b_i \mid y_i, \theta^*]$, where $y_i$ denotes the outcome data for group $i$ (note we condition on $\theta^*$ from the previous step).
Step III: Calculate $\mu_i^* = \exp(\beta^* + b_i^*)$.
Step I accounts for the sampling variability of the maximum likelihood estimates, and Step II for the variability in the random effects.
Repeating Steps I-III $L$ times, you obtain a Monte Carlo sample for $\mu_i$ based on which you could obtain a 95% CI using the 2.5% and 97.5% percentile.
This procedure is implemented in the predict() method for mixed models fitted using the GLMMadaptive package. For an example, check the vignette Methods for MixMod Objects. | Build confidence intervals for random effects in intercept only models | When you are interested in predictions conditional on the random effects, to my view it is easier to work with the hierarchical formulation of the mixed model that has an intrinsically Bayesian flavor. | Build confidence intervals for random effects in intercept only models
When you are interested in predictions conditional on the random effects, to my view it is easier to work with the hierarchical formulation of the mixed model that has an intrinsically Bayesian flavor. In particular, in your specific case, you are interested in the mean of the Poisson model conditional on the random effects, i.e., $$\mu_i = \exp(\beta + b_i),$$ with $i$ denoting the group, $\beta$ the fixed effect intercept, and $b_i$ the random intercept. You can derive a confidence interval for $\mu_i$ by using the following simulation scheme:
Step I: Simulate a value $\theta^*$ from the approximate posterior distribution $\mathcal N(\hat\theta, \hat\Sigma)$, where $\hat\theta$ denotes the maximum likelihood estimates for $\beta$ and $\sigma_b$ with $\sigma_b$ denoting the standard deviation of the random intercepts term $b_i$, and $\hat\Sigma$ the variance of $\hat\theta$.
Step II: Simulate a value $b_i^*$ from the posterior distribution of the random effects $[b_i \mid y_i, \theta^*]$, where $y_i$ denotes the outcome data for group $i$ (note we condition on $\theta^*$ from the previous step).
Step III: Calculate $\mu_i^* = \exp(\beta^* + b_i^*)$.
Step I accounts for the sampling variability of the maximum likelihood estimates, and Step II for the variability in the random effects.
Repeating Steps I-III $L$ times, you obtain a Monte Carlo sample for $\mu_i$ based on which you could obtain a 95% CI using the 2.5% and 97.5% percentile.
This procedure is implemented in the predict() method for mixed models fitted using the GLMMadaptive package. For an example, check the vignette Methods for MixMod Objects. | Build confidence intervals for random effects in intercept only models
When you are interested in predictions conditional on the random effects, to my view it is easier to work with the hierarchical formulation of the mixed model that has an intrinsically Bayesian flavor.
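A compact sketch of Steps I-III for a single group, with hypothetical inputs (theta_hat = c(beta, log sd_b), Sigma_hat = its estimated covariance, y_i = the group's Poisson counts); Step II here uses a simple one-dimensional grid draw rather than the exact machinery in GLMMadaptive.
library(MASS)                                    # for mvrnorm
simulate_mu_i <- function(theta_hat, Sigma_hat, y_i, L = 2000) {
  b_grid <- seq(-5, 5, length.out = 400)
  mu_draws <- numeric(L)
  for (l in 1:L) {
    theta_star <- mvrnorm(1, theta_hat, Sigma_hat)           # Step I
    beta <- theta_star[1]; sd_b <- exp(theta_star[2])
    logpost <- sapply(b_grid, function(b)                    # Step II: p(b_i | y_i, theta*)
      sum(dpois(y_i, exp(beta + b), log = TRUE)) + dnorm(b, 0, sd_b, log = TRUE))
    b_star <- sample(b_grid, 1, prob = exp(logpost - max(logpost)))
    mu_draws[l] <- exp(beta + b_star)                        # Step III
  }
  quantile(mu_draws, c(0.025, 0.975))                        # 95% interval for mu_i
}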
50,336 | Bayesian networks for one-class classification | Quite simple, yet actionable approach:
Collect your data, preprocess them to get categorical features $X$.
Create, tune, optimize the Bayesian network for $X$ with bnlearn. As a result we have practically a probability distribution $p(X)$.
Take all your observations and calculate their likelihoods $L_i=p(x_i)$.
Based on the likelihoods define a threshold $\theta$ for false negatives, i.e. if the desired sensitivity is e.g. 95%, you should take the likelihood that corresponds to the 5th quantile.
The resulting classifier is then: $p(X)>\theta$.
The trick is that you never know how similar the unobserved counterexamples are. However, based on some ex-post observations, you can tune the threshold also with respect to specificity. | Bayesian networks for one-class classification | Quite simple, yet actionable approach:
Collect your data, preprocess them to get categorical features $X$.
Create, tune, optimize the Bayesian network for $X$ with bnlearn. As a result we have practi | Bayesian networks for one-class classification
Quite simple, yet actionable approach:
Collect your data, preprocess them to get categorical features $X$.
Create, tune, optimize the Bayesian network for $X$ with bnlearn. As a result we have practically a probability distribution $p(X)$.
Take all your observations and calculate their likelihoods $L_i=p(x_i)$.
Based on the likelihoods define a threshold $\theta$ for false negatives, i.e. if the desired sensitivity is e.g. 95%, you should take the likelihood that corresponds to the 5th quantile.
The resulting classifier is then: $p(X)>\theta$.
The trick is that you never know how similar the unobserved counterexamples are. However, based on some ex-post observations, you can tune the threshold also with respect to specificity. | Bayesian networks for one-class classification
Quite simple, yet actionable approach:
Collect your data, preprocess them to get categorical features $X$.
Create, tune, optimize the Bayesian network for $X$ with bnlearn. As a result we have practi |
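A hedged sketch of the recipe with bnlearn, using its built-in discrete data set learning.test as a stand-in for the observed single-class data; thresholding the log-likelihood is equivalent to thresholding the likelihood since log is monotone.
library(bnlearn)
d   <- learning.test                       # categorical features X
dag <- hc(d)                               # learn / tune the structure
fit <- bn.fit(dag, d)                      # gives the distribution p(X)
ll  <- sapply(seq_len(nrow(d)),            # per-observation log-likelihoods L_i
              function(i) logLik(fit, d[i, , drop = FALSE]))
theta <- quantile(ll, 0.05)                # threshold for roughly 95% sensitivity
mean(ll > theta)                           # the classifier p(x) > theta on the training data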
50,337 | How to get around the glmer warning : “Downdated VtV is not positive definite” |
Here is the code that allowed me to answer the question (thanks to @SalMangiafico who helped with a similar question):
library(blme)
library(emmeans)
glmb = bglmer(y1 ~ procedure + (1|id), data=data_g2, family=binomial,
fixef.prior = normal(cov = diag(9,2)))
pairs(emmeans(glmb, ~ procedure))
Output:
contrast estimate SE df z.ratio p.value
p1 - p2 7.779445 0.9237406 Inf 8.422 <.0001
Results are given on the log odds ratio (not the response) scale.
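If the comparison is wanted back on the odds-ratio scale instead, one option (an added illustration, relying on emmeans' usual back-transformation) is:
summary(pairs(emmeans(glmb, ~ procedure)), type = "response")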
For the theoretical explanation, I will largely copy part of the explanation given by the author of the blme package (which provides bglmer). When a group contains all 0s or 1s (which is the case in this dataset), this can induce convergence failure. In that situation, the Cauchy prior "does a good job of pulling the extreme cases back down to earth while leaving the well-estimated ones roughly in place". As I'm far from being a statistician, I'm happy to be told about anything wrong in my answer. | How to get around the glmer warning : “Downdated VtV is not positive definite” | Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
| How to get around the glmer warning : “Downdated VtV is not positive definite”
Here is the code that allowed me to answer the question (thanks to @SalMangiafico who helped with a similar question):
library(blme)
library(emmeans)
glmb = bglmer(y1 ~ procedure + (1|id), data=data_g2, family=binomial,
fixef.prior = normal(cov = diag(9,2)))
pairs(emmeans(glmb, ~ procedure))
Output:
contrast estimate SE df z.ratio p.value
p1 - p2 7.779445 0.9237406 Inf 8.422 <.0001
Results are given on the log odds ratio (not the response) scale.
For the theoretical explanation, I will largely copy part of the explanation given by the author of the blme package (which provides bglmer). When a group contains all 0s or 1s (which is the case in this dataset), this can induce convergence failure. In that situation, the Cauchy prior "does a good job of pulling the extreme cases back down to earth while leaving the well-estimated ones roughly in place". As I'm far from being a statistician, I'm happy to be told about anything wrong in my answer. | How to get around the glmer warning : “Downdated VtV is not positive definite”
|
50,338 | Did I mess up the Poisson-Gamma relationship? | \begin{equation}
\begin{aligned}
P(X_1 > 3 \mid X_1 + X_2 > 3)
& = \frac{P(X_1 + X_2 > 3 \mid X_1 > 3)P(X_1 > 3)}{P(X_1 + X_2 > 3)} \\
& = \frac{P(X_1 > 3)}{P(X_1 + X_2 > 3)} \text{ since $ X_2 > 0 $ with prob. 1} \\
& = \frac{e^{-1.5}}{P(X_1 + X_2 > 3)} \text{ using Exp($\frac{1}{2}$) cdf} \\
\end{aligned}
\end{equation}
You could easily evaluate $ P(X_1 + X_2 > 3) $ by conditioning on
$ X_2 $ and applying the law of total probability. But if you
insist on thinking about it in terms of a Poisson process, you can
do the following.
In a Poisson process with
rate $ \lambda $, the event where the sum of the first two inter-arrival
times $ X_1 + X_2 $
is greater than 3 is precisely the event where 1 or fewer arrivals occurred
in the time period up to 3.
Since $ \lambda = \frac{1}{2} $, the number of arrivals from time 0 to 3,
which we'll call $ N(3) $, is distributed as Poisson($\frac{3}{2}$). Then, we have
$$ P(N(3) \leq 1) = P(N(3) = 0) + P(N(3) = 1) = e^{-3/2} + \frac{3}{2}e^{-3/2} = \frac{5}{2}e^{-3/2} $$
and therefore
$$
\frac{e^{-1.5}}{P(X_1 + X_2 > 3)} = \frac{e^{-3/2}}{\frac{5}{2}e^{-3/2}} = \frac{2}{5} = 0.4
$$ | Did I mess up the Poisson-Gamma relationship? | \begin{equation}
\begin{aligned}
P(X_1 > 3 \mid X_1 + X_2 > 3)
& = \frac{P(X_1 + X_2 > 3 \mid X_1 > 3)P(X_1 > 3)}{P(X_1 + X_2 > 3)} \\
& = \frac{P(X_1 > 3)}{P(X_1 + X_2 > 3)} \text{ since $ X_2 > 0 $ | Did I mess up the Poisson-Gamma relationship?
\begin{equation}
\begin{aligned}
P(X_1 > 3 \mid X_1 + X_2 > 3)
& = \frac{P(X_1 + X_2 > 3 \mid X_1 > 3)P(X_1 > 3)}{P(X_1 + X_2 > 3)} \\
& = \frac{P(X_1 > 3)}{P(X_1 + X_2 > 3)} \text{ since $ X_2 > 0 $ with prob. 1} \\
& = \frac{e^{-1.5}}{P(X_1 + X_2 > 3)} \text{ using Exp($\frac{1}{2}$) cdf} \\
\end{aligned}
\end{equation}
You could easily evaluate $ P(X_1 + X_2 > 3) $ by conditioning on
$ X_2 $ and applying the law of total probability. But if you
insist on thinking about it in terms of a Poisson process, you can
do the following.
In a Poisson process with
rate $ \lambda $, the event where the sum of the first two inter-arrival
times $ X_1 + X_2 $
is greater than 3 is precisely the event where 1 or fewer arrivals occurred
in the time period up to 3.
Since $ \lambda = \frac{1}{2} $, the number of arrivals from time 0 to 3,
which we'll call $ N(3) $, is distributed as Poisson($\frac{3}{2}$). Then, we have
$$ P(N(3) \leq 1) = P(N(3) = 0) + P(N(3) = 1) = e^{-3/2} + \frac{3}{2}e^{-3/2} = \frac{5}{2}e^{-3/2} $$
and therefore
$$
\frac{e^{-1.5}}{P(X_1 + X_2 > 3)} = \frac{e^{-3/2}}{\frac{5}{2}e^{-3/2}} = \frac{2}{5} = 0.4
$$ | Did I mess up the Poisson-Gamma relationship?
\begin{equation}
\begin{aligned}
P(X_1 > 3 \mid X_1 + X_2 > 3)
& = \frac{P(X_1 + X_2 > 3 \mid X_1 > 3)P(X_1 > 3)}{P(X_1 + X_2 > 3)} \\
& = \frac{P(X_1 > 3)}{P(X_1 + X_2 > 3)} \text{ since $ X_2 > 0 $ |
50,339 | Did I mess up the Poisson-Gamma relationship? | Z denotes the point in time when the second Poisson event occurred. Z>3 means that the second Poisson event occurred after time 3, and therefore it is equivalent to Q<2 (Q being the number of Poisson events up to time 3) and not to Q>=2 as you calculated. If you divide 0.223 by 1-0.442=0.558, you will get the correct answer 0.4. | Did I mess up the Poisson-Gamma relationship? | Z denotes the point in time when the second Poisson event occurred. Z>3 means that the second Poisson event occurred after time 3, and therefore it is equivalent to Q<2 (Q being the number of Poisson | Did I mess up the Poisson-Gamma relationship?
Z denotes the point in time when the second Poisson event occurred. Z>3 means that the second Poisson event occurred after time 3, and therefore it is equivalent to Q<2 (Q being the number of Poisson events up to time 3) and not to Q>=2 as you calculated. If you divide 0.223 by 1-0.442=0.558, you will get the correct answer 0.4. | Did I mess up the Poisson-Gamma relationship?
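A quick numerical check of these numbers in R (rate 1/2, so the number of arrivals by time 3 is Poisson(1.5)):
pexp(3, rate = 0.5, lower.tail = FALSE)   # P(X1 > 3)            ~ 0.223
ppois(1, lambda = 1.5)                    # P(Z > 3) = P(Q < 2)  ~ 0.558
pexp(3, rate = 0.5, lower.tail = FALSE) / ppois(1, lambda = 1.5)   # ~ 0.4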
Z denotes the point in time when the second Poisson event occurred. Z>3 means that the second Poisson event occurred after time 3, and therefore it is equivalent to Q<2 (Q being the number of Poisson |
50,340 | Maximum Entropy: another name for Maximum Likelihood or a legit Bayes procedure? | I believe Ariel Caticha has given some interesting insights on the interpretation of Maximum Entropy and its relation to Bayesian Inference.
As he himself says, a good pedagogical review is his (unfinished) book, but one can check the papers coming out on arXiv as well.
I'll refer to some of the main ideas here in the hope that it helps answering the question (not sure about that, though, if the moderators think it's not going to the point I can delete it as well)
Cox, Jaynes, and many others have proved how probability is the fundamental theory for dealing with situations of incomplete information. If one assumes the proposed desiderata there can be no choice but to use (conditional) probabilities.
But even Jaynes used to say, as yourself has referred to, that updating probabilities through Bayes' rule or assigning probabilities using MaxEnt were entirely different things.
What Ariel did, building on the work of several other people (notably Skilling, Shore & Johnson; I'm probably missing others), was to prove that:
Maximum Entropy is a tool for updating probability distributions when discovering new information/data that constrains our knowledge about the inference we've been doing;
Maximum Entropy, as well as probabilities, also comes from a set of desiderata; therefore one cannot use another tool to update probabilities if one agrees with the impositions made in the beginning.
From that we can take 2 corollaries, which he also proves:
The process of assigning probabilities that Jaynes mentioned comes only from the choice of a uniform prior;
Maximum Entropy is the same as the Bayes' rule (therefore Bayesian inference, one could say) in the particular case that the new information comes in the form of data.
I guess this covers the MaxEnt $\leftrightarrow$ Bayesian link
I can't say much for the other one, MaxEnt $\leftrightarrow$ Maximum Likelihood, but I believe you have a point here that they connect somehow through Bayes' rule:
$$ p(x|\mathrm{data}) \propto p(\mathrm{data}|x) p(x) $$
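To spell out the link stated next (a standard observation, not something specific to the sources above): with a flat prior $p(x) \propto 1$ the MAP estimate reduces to the maximum likelihood estimate,
$$\hat{x}_{\mathrm{MAP}} = \arg\max_x \, p(\mathrm{data}\mid x)\, p(x) = \arg\max_x \, p(\mathrm{data}\mid x) = \hat{x}_{\mathrm{ML}}.$$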
If one makes a MAP (maximum a posteriori, usually considered a Bayesian method) estimate and takes a uniform prior $p(x)$, in fact what one is doing is maximizing the likelihood $p(\mathrm{data}|x)$. But I really don't have the experience to say more than that. | Maximum Entropy: another name for Maximum Likelihood or a legit Bayes procedure? | I believe Ariel Caticha has given some interesting insights on the interpretation of Maximum Entropy and its relation to Bayesian Inference.
As himself says, a good pedagogical review is his (unfinis | Maximum Entropy: another name for Maximum Likelihood or a legit Bayes procedure?
I believe Ariel Caticha has given some interesting insights on the interpretation of Maximum Entropy and its relation to Bayesian Inference.
As he himself says, a good pedagogical review is his (unfinished) book, but one can check the papers coming out on arXiv as well.
I'll refer to some of the main ideas here in the hope that it helps answering the question (not sure about that, though, if the moderators think it's not going to the point I can delete it as well)
Cox, Jaynes, and many others have proved how probability is the fundamental theory for dealing with situations of incomplete information. If one assumes the proposed desiderata there can be no choice but to use (conditional) probabilities.
But even Jaynes used to say, as yourself has referred to, that updating probabilities through Bayes' rule or assigning probabilities using MaxEnt were entirely different things.
What Ariel did, building on the work of several other people (notably Skilling, Shore & Johnson; I'm probably missing others), was to prove that:
Maximum Entropy is a tool for updating probability distributions when discovering new information/data that constrains our knowledge about the inference we've been doing;
Maximum Entropy, as well as probabilities, also comes from a set of desiderata; therefore one cannot use another tool to update probabilities if one agrees with the impositions made in the beginning.
From that we can take 2 corollaries, which he also proves:
The process of assigning probabilities that Jaynes mentioned comes only from the choice of a uniform prior;
Maximum Entropy is the same as the Bayes' rule (therefore Bayesian inference, one could say) in the particular case that the new information comes in the form of data.
I guess this covers the MaxEnt $\leftrightarrow$ Bayesian link
I can't say much for the other one, MaxEnt $\leftrightarrow$ Maximum Likelihood, but I believe you have a point here that they connect somehow through Bayes' rule:
$$ p(x|\mathrm{data}) \propto p(\mathrm{data}|x) p(x) $$
If one makes a MAP (maximum a posteriori, usually considered a Bayesian method) estimate and takes a uniform prior $p(x)$, in fact what one is doing is maximizing the likelihood $p(\mathrm{data}|x)$. But I really don't have the experience to say more than that. | Maximum Entropy: another name for Maximum Likelihood or a legit Bayes procedure?
I believe Ariel Caticha has given some interesting insights on the interpretation of Maximum Entropy and its relation to Bayesian Inference.
As himself says, a good pedagogical review is his (unfinis |
50,341 | Parameter optimization with Neural Networks | But beside the normal tricks - is there something fundamentally wrong with my problem.
Yes, I think there is something fundamentally wrong with your problem statement.
From your description of the training data and the loss function I infer that you train the network to predict $k$. However, at the same time you somehow expect that the network generates $\lambda$ as output. Obviously, the network cannot do it.
Also, note that according to your description the true $\lambda$ does not depend at all on $x$, thus no model in the world would be able to predict $\lambda$ observing only $x$.
On the other hand, the $k$'s depend on $\lambda$ (and vice versa), and thus if one extends the training data set and includes the $k$'s one could predict $\lambda$. However, in this situation an NN would be overkill, because estimating $\lambda$ given $x$ and the $k$'s is straightforward.
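For concreteness, a least-squares sketch in R (X and k are hypothetical objects standing in for the input matrix and the observed sums; assumes a single constant $\lambda$):
s <- rowSums(X^2)                     # sum of squared inputs per observation
lambda_hat <- sum(s * k) / sum(s^2)   # OLS slope through the origin for k = lambda * s
# equivalently: coef(lm(k ~ 0 + s))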
Can the neural network learn the average sum squared of the input?
Yes.
Update
Hence, my assumption is that $\lambda$
is the derivative value and a function of $x$.
Whatever this sentence means, the second derivative of a quadratic function is constant and doesn’t depend on $x$.
Also, I am not trying to predict $k$. I have a model for it which is $\bar{k}$
Your loss function suggests that you do. Also note that it’s straightforward to estimate $\lambda$ if you have $x$ and $\bar{k}$. | Parameter optimization with Neural Networks | But beside the normal tricks - is there something fundamentally wrong with my problem.
Yes, I think there is something fundamentally wrong with your problem statement.
From your description of the | Parameter optimization with Neural Networks
But beside the normal tricks - is there something fundamentally wrong with my problem.
Yes, I think there is something fundamentally wrong with your problem statement.
From your description of the training data and the loss function I infer that you train the network to predict $k$. However, at the same time you somehow expect that the network generates $\lambda$ as output. Obviously, the network cannot do it.
Also, note that according to your description the true $\lambda$ does not depend at all on $x$, thus no model in the world would be able to predict $\lambda$ observing only $x$.
On the other hand, the $k$'s depend on $\lambda$ (and vice versa), and thus if one extends the training data set and includes the $k$'s one could predict $\lambda$. However, in this situation an NN would be overkill, because estimating $\lambda$ given $x$ and the $k$'s is straightforward.
Can the neural network learn the average sum squared of the input?
Yes.
Update
Hence, my assumption is that $\lambda$
is the derivative value and a function of $x$.
Whatever this sentence means, the second derivative of a quadratic function is constant and doesn’t depend on $x$.
Also, I am not trying to predict $k$. I have a model for it which is $\bar{k}$
Your loss function suggests that you do. Also note that it’s straightforward to estimate $\lambda$ if you have $x$ and $\bar{k}$. | Parameter optimization with Neural Networks
But beside the normal tricks - is there something fundamentally wrong with my problem.
Yes, I think there is something fundamentally wrong with your problem statement.
From your description of the |
50,342 | Parameter optimization with Neural Networks | If you think that $\lambda$ depends on $x$, then you need to model that explicitly in your network. You will need to choose the form of the dependency of $\lambda$ on $x$, eg polynomial, linear, Gaussian Process, or whatever seems like a good idea to you.
You probably want to set aside some hold-out data, because you're likely to overfit horribly in your model exploration.
Nowadays, you can use toolkits such as Tensorflow or Pytorch to handle the low-level weight operations, and you can write code such as:
x2 = torch.pow(x, 2)                   # element-wise square of the inputs
lam = nn.Linear(x.shape[1], 1)         # make lambda a linear function of x ("lambda" itself is a reserved word)
pred_k = (x2 * lam(x)).sum()           # model for k
crit = nn.MSELoss()
loss = crit(pred_k, target_k)
loss.backward()
... etc ...
... and just write whatever model / mathematical formulae you want. (this is approximately written in Pytorch here, but Tensorflow lets you do the same kinds of things). | Parameter optimization with Neural Networks | If you think that $\lambda$ depends on $x$, then you need to model that explicitly in your network. You will need to choose the form of the dependency of $\lambda$ on $x$, eg polynomial, linear, Gauss | Parameter optimization with Neural Networks
If you think that $\lambda$ depends on $x$, then you need to model that explicitly in your network. You will need to choose the form of the dependency of $\lambda$ on $x$, eg polynomial, linear, Gaussian Process, or whatever seems like a good idea to you.
You probably want to set aside some hold-out data, because you're likely to overfit horribly in your model exploration.
Nowadays, you can use toolkits such as Tensorflow or Pytorch to handle the low-level weight operations, and you can write code such as:
x2 = torch.pow(x, 2)                   # element-wise square of the inputs
lam = nn.Linear(x.shape[1], 1)         # make lambda a linear function of x ("lambda" itself is a reserved word)
pred_k = (x2 * lam(x)).sum()           # model for k
crit = nn.MSELoss()
loss = crit(pred_k, target_k)
loss.backward()
... etc ...
... and just write whatever model / mathematical formulae you want. (this is approximately written in Pytorch here, but Tensorflow lets you do the same kinds of things). | Parameter optimization with Neural Networks
If you think that $\lambda$ depends on $x$, then you need to model that explicitly in your network. You will need to choose the form of the dependency of $\lambda$ on $x$, eg polynomial, linear, Gauss |
50,343 | $\sqrt{n}$-equivalence of M-estimator based on plug-in estimator | Background:
For the case $\eta_0$ known, we assume the existence of a function $S(\theta,\eta)$ such that
1) $\tilde{\theta} = \theta_0 + Op(n^{-1/2})$
2) $S(\theta,\eta)$ is differentiable in $\theta$ at $(\theta_0,\eta_0)$ with a derivative matrix $\Gamma$ of full rank
3) $ S(\tilde{\theta},\eta_0) - S(\theta_0,\eta_0) = S_n(\tilde{\theta},\eta_0) - S_n(\theta_0,\eta_0) + op\left(n^{-1/2}\right)$
From 2), we get a Taylor expansion about $\theta_0$,
$$S(\tilde{\theta},\eta_0) - S(\theta_0,\eta_0)
= \Gamma (\tilde{\theta} - \theta_0) + op(|\tilde{\theta} - \theta_0|)$$
Hence
$$ \tilde{\theta} - \theta_0 = \Gamma^{-1} \left( S(\tilde{\theta},\eta_0) - S(\theta_0,\eta_0) \right)
+ op(n^{-1/2})$$
From 3),
$$ \tilde{\theta} - \theta_0 = \Gamma^{-1} \left(S_n(\tilde{\theta},\eta_0) - S_n(\theta_0,\eta_0) \right)
+ op(n^{-1/2})$$
Note that assumption 3) is satisfied if assumption 4-6 and 7a found here are true.
To have an equivalent estimator when $\eta$ is unknown, we need to have an equivalent linearization.
Solution 1:
Assume that, in addition to 1-3,
A) $\hat{\theta} = \theta_0 + Op(n^{-1/2})$
B) $ S(\hat{\theta},\eta_0) = S(\tilde{\theta},\eta_0) + op\left(n^{-1/2}\right)$
Then we can write, from A),
$$ \hat{\theta} - \theta_0 = \Gamma^{-1} \left( S(\hat{\theta},\eta_0) - S(\theta_0,\eta_0) \right)
+ op(n^{-1/2})$$
From B),
$$ \hat{\theta} - \theta_0 = \Gamma^{-1} \left( S(\tilde{\theta},\eta_0) - S(\theta_0,\eta_0) \right)
+ op(n^{-1/2})$$
Solution 2:
If we assume 1-3, A) and
C) $\hat{\eta} = \eta_0 + Op(n^{-1/2})$
D) $S(\theta,\eta)$ is differentiable in $\eta$ at $(\theta_0,\eta_0)$ with a derivative matrix equals to zero
E) $S(\hat{\theta},\hat{\eta}) = S(\tilde{\theta},\eta_0) + op(n^{-1/2})$
Then we can perform the following Taylor expansion about $(\theta_0, \eta_0)$,
$$S(\hat{\theta},\hat{\eta}) - S(\theta_0,\eta_0)
= \Gamma (\hat{\theta} - \theta_0) + op(|\hat{\theta} - \theta_0| + |\hat{\eta} - \eta_0|)$$
and thus
$$\begin{align} \hat{\theta} - \theta_0
&= \Gamma^{-1} \left( S(\hat{\theta},\hat{\eta}) - S(\theta_0,\eta_0)\right) + op(n^{-1/2}) \\
&= \Gamma^{-1} \left( S(\tilde{\theta},\eta_0) - S(\theta_0,\eta_0)\right) + op(n^{-1/2})
\end{align}$$
A sufficient condition for E) to hold is that 3) be true and
$\begin{align} S(\hat{\theta},\hat{\eta}) - S(\theta_0,\eta_0) &= S_n(\hat{\theta},\hat{\eta}) - S_n(\theta_0,\eta_0) + op(n^{-1/2}) \\
S_n(\hat{\theta},\hat{\eta}) - S_n(\tilde{\theta},\eta_0) &= op(n^{-1/2}) \end{align}$
Solution 3
If we assume 1-3, A) and
F) $\hat{\eta} = \eta_0 + op(1) $
G) $S(\theta,\eta)$ is uniformly differentiable in $\theta$ at $\theta_0$ on a neighborhood of $\eta_0$ with a derivative matrix $\Gamma(\eta)$
H) $\Gamma(\eta)$ is continuous and full rank at $\eta_0$, with $\Gamma = \Gamma(\eta_0)$
I) $S(\hat{\theta},\hat{\eta}) - S(\theta_0,\hat{\eta}) = S(\tilde{\theta},\eta_0) - S(\theta_0,\eta_0) + op(n^{-1/2})$
Then from G) we can perform the following Taylor expansion about $\theta_0$, which is valid with probability tending to one,
$$\begin{align}S(\hat{\theta},\hat{\eta}) - S(\theta_0,\hat{\eta})
&= \Gamma(\hat{\eta}) (\hat{\theta} - \theta_0) + op(|\hat{\theta} - \theta_0|) \\
&= \Gamma(\hat{\theta} - \theta_0) + op(|\hat{\theta} - \theta_0|)
\end{align}\\$$
with the second line true because of F) and H).
Hence, with I)
$$\begin{align}\hat{\theta} - \theta_0 &= \Gamma^{-1}\left(S(\hat{\theta},\hat{\eta}) - S(\theta_0,\hat{\eta}) \right)
+ op(n^{-1/2}) \\
&= \Gamma^{-1}\left(S(\tilde{\theta},\eta_0) - S(\theta_0,\eta_0) \right)
+ op(n^{-1/2})
\end{align}\\$$
Note that a sufficient condition for $I$ to be true is that both E) be true and
I') $S(\theta_0,\hat{\eta}) = S(\theta_0,\eta_0) + op(n^{-1/2})$
Both conditions D) and I') are asymptotic orthogonality assumptions. | $\sqrt{n}$-equivalence of M-estimator based on plug-in estimator | Background:
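For intuition (a standard reading, not taken from the original post): condition D) is exactly a Neyman-orthogonality condition on the limiting estimating function,
$$\frac{\partial}{\partial \eta} S(\theta_0,\eta)\Big|_{\eta=\eta_0} = 0,$$
so first-order estimation error in $\hat{\eta}$ does not propagate into the expansion for $\hat{\theta}$.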
For the case $\eta_0$ known, we assume the existence of a function $S(\theta,\eta)$ such that
1) $\tilde{\theta} = \theta_0 + Op(n^{-1/2})$
2) $S(\theta,\eta)$ is differentiable in $\thet | $\sqrt{n}$-equivalence of M-estimator based on plug-in estimator
Background:
For the case $\eta_0$ known, we assume the existence of a function $S(\theta,\eta)$ such that
1) $\tilde{\theta} = \theta_0 + Op(n^{-1/2})$
2) $S(\theta,\eta)$ is differentiable in $\theta$ at $(\theta_0,\eta_0)$ with a derivative matrix $\Gamma$ of full rank
3) $ S(\tilde{\theta},\eta_0) - S(\theta_0,\eta_0) = S_n(\tilde{\theta},\eta_0) - S_n(\theta_0,\eta_0) + op\left(n^{-1/2}\right)$
From 2), we get a Taylor expansion about $\theta_0$,
$$S(\tilde{\theta},\eta_0) - S(\theta_0,\eta_0)
= \Gamma (\tilde{\theta} - \theta_0) + op(|\tilde{\theta} - \theta_0|)$$
Hence
$$ \tilde{\theta} - \theta_0 = \Gamma^{-1} \left( S(\tilde{\theta},\eta_0) - S(\theta_0,\eta_0) \right)
+ op(n^{-1/2})$$
From 3),
$$ \tilde{\theta} - \theta_0 = \Gamma^{-1} \left(S_n(\tilde{\theta},\eta_0) - S_n(\theta_0,\eta_0) \right)
+ op(n^{-1/2})$$
Note that assumption 3) is satisfied if assumption 4-6 and 7a found here are true.
To have an equivalent estimator when $\eta$ is unknown, we need to have an equivalent linearization.
Solution 1:
Assume that, in addition to 1-3,
A) $\hat{\theta} = \theta_0 + Op(n^{-1/2})$
B) $ S(\hat{\theta},\eta_0) = S(\tilde{\theta},\eta_0) + op\left(n^{-1/2}\right)$
Then we can write, from A),
$$ \hat{\theta} - \theta_0 = \Gamma^{-1} \left( S(\hat{\theta},\eta_0) - S(\theta_0,\eta_0) \right)
+ op(n^{-1/2})$$
From B),
$$ \hat{\theta} - \theta_0 = \Gamma^{-1} \left( S(\tilde{\theta},\eta_0) - S(\theta_0,\eta_0) \right)
+ op(n^{-1/2})$$
Solution 2:
If we assume 1-3, A) and
C) $\hat{\eta} = \eta_0 + Op(n^{-1/2})$
D) $S(\theta,\eta)$ is differentiable in $\eta$ at $(\theta_0,\eta_0)$ with a derivative matrix equals to zero
E) $S(\hat{\theta},\hat{\eta}) = S(\tilde{\theta},\eta_0) + op(n^{-1/2})$
Then we can perform the following Taylor expansion about $(\theta_0, \eta_0)$,
$$S(\hat{\theta},\hat{\eta}) - S(\theta_0,\eta_0)
= \Gamma (\hat{\theta} - \theta_0) + op(|\hat{\theta} - \theta_0| + |\hat{\eta} - \eta_0|)$$
and thus
$$\begin{align} \hat{\theta} - \theta_0
&= \Gamma^{-1} \left( S(\hat{\theta},\hat{\eta}) - S(\theta_0,\eta_0)\right) + op(n^{-1/2}) \\
&= \Gamma^{-1} \left( S(\tilde{\theta},\eta_0) - S(\theta_0,\eta_0)\right) + op(n^{-1/2})
\end{align}$$
A sufficient condition for E) to hold is that 3) be true and
$\begin{align} S(\hat{\theta},\hat{\eta}) - S(\theta_0,\eta_0) &= S_n(\hat{\theta},\hat{\eta}) - S_n(\theta_0,\eta_0) + op(n^{-1/2}) \\
S_n(\hat{\theta},\hat{\eta}) - S_n(\tilde{\theta},\eta_0) &= op(n^{-1/2}) \end{align}$
Solution 3
If we assume 1-3, A) and
F) $\hat{\eta} = \eta_0 + op(1) $
G) $S(\theta,\eta)$ is uniformly differentiable in $\theta$ at $\theta_0$ on a neighborhood of $\eta_0$ with a derivative matrix $\Gamma(\eta)$
H) $\Gamma(\eta)$ is continuous and full rank at $\eta_0$, with $\Gamma = \Gamma(\eta_0)$
I) $S(\hat{\theta},\hat{\eta}) - S(\theta_0,\hat{\eta}) = S(\tilde{\theta},\eta_0) - S(\theta_0,\eta_0) + op(n^{-1/2})$
Then from G) we can perform the following Taylor expansion about $\theta_0$, which is valid with probability tending to one,
$$\begin{align}S(\hat{\theta},\hat{\eta}) - S(\theta_0,\hat{\eta})
&= \Gamma(\hat{\eta}) (\hat{\theta} - \theta_0) + op(|\hat{\theta} - \theta_0|) \\
&= \Gamma(\hat{\theta} - \theta_0) + op(|\hat{\theta} - \theta_0|)
\end{align}\\$$
with the second line true because of F) and H).
Hence, with I)
$$\begin{align}\hat{\theta} - \theta_0 &= \Gamma^{-1}\left(S(\hat{\theta},\hat{\eta}) - S(\theta_0,\hat{\eta}) \right)
+ op(n^{-1/2}) \\
&= \Gamma^{-1}\left(S(\tilde{\theta},\eta_0) - S(\theta_0,\eta_0) \right)
+ op(n^{-1/2})
\end{align}\\$$
Note that a sufficient condition for $I$ to be true is that both E) be true and
I') $S(\theta_0,\hat{\eta}) = S(\theta_0,\eta_0) + op(n^{-1/2})$
Both conditions D) and I') are asymptotic orthogonality assumptions. | $\sqrt{n}$-equivalence of M-estimator based on plug-in estimator
Background:
For the case $\eta_0$ known, we assume the existence of a function $S(\theta,\eta)$ such that
1) $\tilde{\theta} = \theta_0 + Op(n^{-1/2})$
2) $S(\theta,\eta)$ is differentiable in $\thet |
50,344 | $\sqrt{n}$-equivalence of M-estimator based on plug-in estimator | The other answer doesn't assume that $S_n(\hat{\theta}, \eta_0)$ is differentiable. If we assume $S_n(\hat{\theta}, \eta_0)$ differentiable, our work is simplified somewhat.
Background: Assume
1) $\tilde{\theta} = \theta_0 + op(1); S_n(\tilde{\theta},\eta_0) = op(n^{-1/2}); S_n(\theta_0,\eta_0) = Op(n^{-1/2})$
2) $S_n(\theta,\eta)$ is equidifferentiable (in probability) in $\theta$ at $(\theta_0,\eta_0)$ with a derivative matrix $\Gamma_n$
3) $\Gamma_n = \Gamma + op(1)$, with $\Gamma$ invertible
With probability tending to one, we can do a Taylor expansion about $\theta_0$,
$$\begin{align}
S_n(\tilde{\theta},\eta_0) &= S_n(\theta_0,\eta_0) + \Gamma_n(\tilde{\theta} - \theta_0) + op(\tilde{\theta} - \theta_0) \\
&= S_n(\theta_0,\eta_0) + \Gamma(\tilde{\theta} - \theta_0) + op(\tilde{\theta} - \theta_0)
\end{align}$$
Hence
$$\begin{align}
\tilde{\theta} - \theta_0 &=
-\Gamma^{-1}\left( S_n(\theta_0,\eta_0) \right)
+ op(n^{-1/2} + |\tilde{\theta} - \theta_0|) \\
&= -\Gamma^{-1}\left( S_n(\theta_0,\eta_0) \right)
+ op(n^{-1/2})
\end{align}$$
For the generalization to $\hat{\theta}$, we additionally assume
4) $(\hat{\theta},\hat{\eta}) = (\theta_0,\eta_0) + op(1); S_n(\hat{\theta},\hat{\eta}) = op(n^{-1/2})$
Solution:
If we additionally assume either
5) $S_n(\hat{\theta},\eta_0) = op(n^{-1/2} + |\hat{\theta} - \theta_0|)$
6) $S_n(\theta_0,\eta_0) = -\Gamma(\hat{\theta} - \theta_0) + op(n^{-1/2} + |\hat{\theta} - \theta_0|)$
Then we can perform the same Taylor expansion as in the background and thus get asymptotic equivalence of the two estimators.
I propose the following conditions to satisfy either 5) or 6):
Condition 1:
If we assume
A) There is a $\Gamma$ invertible such that, for every sequence of ball $U_n$ that shrinks to $\eta_0$,
$$\sup_{\eta \in U_n}\left(- S_n(\hat{\theta},\eta) + S_n(\theta_0,\eta) + \Gamma(\hat{\theta} - \theta_0)\right) = op(n^{-1/2} + |\hat{\theta} - \theta_0|)$$
B) $S_n(\theta_0,\hat{\eta}) = S_n(\theta_0,\eta_0) + op(n^{-1/2} + |\hat{\theta} - \theta_0|) $
From A) and B)
$$ \begin{align} S_n(\hat{\theta},\hat{\eta}) &= S_n(\theta_0,\hat{\eta}) + \Gamma(\hat{\theta} - \theta_0) + op(n^{-1/2} + |\hat{\theta} - \theta_0|) \\
&= S_n(\theta_0,\eta_0) + \Gamma(\hat{\theta} - \theta_0) + op(n^{-1/2} + |\hat{\theta} - \theta_0|)
\end{align}$$
Therefore,
$$ S_n(\theta_0,\eta_0) = -\Gamma(\hat{\theta} - \theta_0) + op(n^{-1/2}
+ |\hat{\theta} - \theta_0|)$$
which is the result.
Note 1: Assumptions that each individually implies A) are
A') $S_n(\theta,\eta)$ is uniformly equidifferentiable (in probability) in $\theta$ at $\theta_0$ on a neighborhood of $\eta_0$ with a derivative matrix $\Gamma_n(\eta)$ stochastically equicontinuous at $\eta_0$, with $\Gamma_n(\eta_0) = \Gamma + op(1)$, with $\Gamma$ invertible
A'') $S_n(\theta,\eta)$ is differentiable (in probability) in $\theta$ in a neighborhood of $(\theta_0,\eta_0)$, with derivative $\Gamma_n(\theta,\eta)$ equicontinuous at $(\theta_0,\eta_0)$ and with $\Gamma_n(\theta_0,\eta_0) = \Gamma + op(1)$, with $\Gamma$ invertible
Condition 2:
Assume,
A) $\hat{\eta} = \eta_0 + Op(n^{-1/2}) $
B) $S_n(\theta,\eta)$ is equidifferentiable (in probability) at $(\theta_0,\eta_0)$ with derivative matrix $[\Gamma_n, \Psi_n] $
C) $[\Gamma_n,\Psi_n] = [\Gamma, {\bf 0}] + op(1)$, with $\Gamma$ invertible
Then, performing a Taylor expansion about $(\theta_0, \eta_0)$,
$$\begin{align}
S_n(\hat{\theta},\hat{\eta}) &= S_n(\theta_0,\eta_0) + \Gamma_n(\hat{\theta} - \theta_0) + \Psi_n(\hat{\eta} - \eta_0) + op(|\hat{\theta} - \theta_0| +|\hat{\eta} - \eta_0| ) \\
&= S_n(\theta_0,\eta_0) + \Gamma(\hat{\theta} - \theta_0) + op(n^{-1/2} + |\hat{\theta} - \theta_0 |) \end{align}
$$
Hence,
$$ S_n(\theta_0,\eta_0) = -\Gamma(\hat{\theta} - \theta_0) + op(n^{-1/2}
+ |\hat{\theta} - \theta_0|)$$ | $\sqrt{n}$-equivalence of M-estimator based on plug-in estimator | The other answer doesn't assume that $S_n(\hat{\theta}, \eta_0)$ is differentiable. If we assume $S_n(\hat{\theta}, \eta_0)$ differentiable, our work is simplified somewhat.
Background: Assume
1) $\ | $\sqrt{n}$-equivalence of M-estimator based on plug-in estimator
The other answer doesn't assume that $S_n(\hat{\theta}, \eta_0)$ is differentiable. If we assume $S_n(\hat{\theta}, \eta_0)$ differentiable, our work is simplified somewhat.
Background: Assume
1) $\tilde{\theta} = \theta_0 + op(1); S_n(\tilde{\theta},\eta_0) = op(n^{-1/2}); S_n(\theta_0,\eta_0) = Op(n^{-1/2})$
2) $S_n(\theta,\eta)$ is equidifferentiable (in probability) in $\theta$ at $(\theta_0,\eta_0)$ with a derivative matrix $\Gamma_n$
3) $\Gamma_n = \Gamma + op(1)$, with $\Gamma$ invertible
With probability tending to one, we can do a Taylor expansion about $\theta_0$,
$$\begin{align}
S_n(\tilde{\theta},\eta_0) &= S_n(\theta_0,\eta_0) + \Gamma_n(\tilde{\theta} - \theta_0) + op(\tilde{\theta} - \theta_0) \\
&= S_n(\theta_0,\eta_0) + \Gamma(\tilde{\theta} - \theta_0) + op(\tilde{\theta} - \theta_0)
\end{align}$$
Hence
$$\begin{align}
\tilde{\theta} - \theta_0 &=
-\Gamma^{-1}\left( S_n(\theta_0,\eta_0) \right)
+ op(n^{-1/2} + |\tilde{\theta} - \theta_0|) \\
&= -\Gamma^{-1}\left( S_n(\theta_0,\eta_0) \right)
+ op(n^{-1/2})
\end{align}$$
For the generalization to $\hat{\theta}$, we additionally assume
4) $(\hat{\theta},\hat{\eta}) = (\theta_0,\eta_0) + op(1); S_n(\hat{\theta},\hat{\eta}) = op(n^{-1/2})$
Solution:
If we additionally assume either
5) $S_n(\hat{\theta},\eta_0) = op(n^{-1/2} + |\hat{\theta} - \theta_0|)$
6) $S_n(\theta_0,\eta_0) = -\Gamma(\hat{\theta} - \theta_0) + op(n^{-1/2} + |\hat{\theta} - \theta_0|)$
Then we can perform the same Taylor expansion as in the background and thus get asymptotic equivalence of the two estimators.
I propose the following conditions to satisfy either 5) or 6):
Condition 1:
If we assume
A) There is a $\Gamma$ invertible such that, for every sequence of ball $U_n$ that shrinks to $\eta_0$,
$$\sup_{\eta \in U_n}\left(- S_n(\hat{\theta},\eta) + S_n(\theta_0,\eta) + \Gamma(\hat{\theta} - \theta_0)\right) = op(n^{-1/2} + |\hat{\theta} - \theta_0|)$$
B) $S_n(\theta_0,\hat{\eta}) = S_n(\theta_0,\eta_0) + op(n^{-1/2} + |\hat{\theta} - \theta_0|) $
From A) and B)
$$ \begin{align} S_n(\hat{\theta},\hat{\eta}) &= S_n(\theta_0,\hat{\eta}) + \Gamma(\hat{\theta} - \theta_0) + op(n^{-1/2} + |\hat{\theta} - \theta_0|) \\
&= S_n(\theta_0,\eta_0) + \Gamma(\hat{\theta} - \theta_0) + op(n^{-1/2} + |\hat{\theta} - \theta_0|)
\end{align}$$
Therefore,
$$ S_n(\theta_0,\eta_0) = -\Gamma(\hat{\theta} - \theta_0) + op(n^{-1/2}
+ |\hat{\theta} - \theta_0|)$$
which is the result.
Note 1: Assumptions that each individually implies A) are
A') $S_n(\theta,\eta)$ is uniformly equidifferentiable (in probability) in $\theta$ at $\theta_0$ on a neighborhood of $\eta_0$ with a derivative matrix $\Gamma_n(\eta)$ stochastically equicontinuous at $\eta_0$, with $\Gamma_n(\eta_0) = \Gamma + op(1)$, with $\Gamma$ invertible
A'') $S_n(\theta,\eta)$ is differentiable (in probability) in $\theta$ in a neighborhood of $(\theta_0,\eta_0)$, with derivative $\Gamma_n(\theta,\eta)$ equicontinuous at $(\theta_0,\eta_0)$ and with $\Gamma_n(\theta_0,\eta_0) = \Gamma + op(1)$, with $\Gamma$ invertible
Condition 2:
Assume,
A) $\hat{\eta} = \eta_0 + Op(n^{-1/2}) $
B) $S_n(\theta,\eta)$ is equidifferentiable (in probability) at $(\theta_0,\eta_0)$ with derivative matrix $[\Gamma_n, \Psi_n] $
C) $[\Gamma_n,\Psi_n] = [\Gamma, {\bf 0}] + op(1)$, with $\Gamma$ invertible
Then, performing a Taylor expansion about $(\theta_0, \eta_0)$,
$$\begin{align}
S_n(\hat{\theta},\hat{\eta}) &= S_n(\theta_0,\eta_0) + \Gamma_n(\hat{\theta} - \theta_0) + \Psi_n(\hat{\eta} - \eta_0) + op(|\hat{\theta} - \theta_0| +|\hat{\eta} - \eta_0| ) \\
&= S_n(\theta_0,\eta_0) + \Gamma(\hat{\theta} - \theta_0) + op(n^{-1/2} + |\hat{\theta} - \theta_0 |) \end{align}
$$
Hence,
$$ S_n(\theta_0,\eta_0) = -\Gamma(\hat{\theta} - \theta_0) + op(n^{-1/2}
+ |\hat{\theta} - \theta_0|)$$ | $\sqrt{n}$-equivalence of M-estimator based on plug-in estimator
The other answer doesn't assume that $S_n(\hat{\theta}, \eta_0)$ is differentiable. If we assume $S_n(\hat{\theta}, \eta_0)$ differentiable, our work is simplified somewhat.
Background: Assume
1) $\ |
50,345 | When does Simpson's Paradox "end"? | Yes, you are right, we can create situations where the conditional association of one variable with another will change for each additional covariate you control for. For a simple simulation, I suggest you look at Dagitty's Simpson's Machine, based on Pearl's paper.
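A tiny self-contained R illustration of such a change (not the Simpson's Machine itself, just a toy example in which conditioning on z flips the sign of the x-y association):
set.seed(1)
n <- 1e5
z <- rbinom(n, 1, 0.5)
x <- 2 * z + rnorm(n)
y <- 4 * z - 0.5 * x + rnorm(n)
coef(lm(y ~ x))["x"]       # marginal slope: positive (about +0.5)
coef(lm(y ~ x + z))["x"]   # conditional on z: negative (-0.5)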
However, the question you should ask yourself is the following: why are you worried that the marginal association is different from the conditional association? That's perfectly normal.
So when you ask
when is it ever possible to consider any result safe to use for future
calculations?
It seems you are not looking for associations only, but for stable, structural relationships. The short answer for your question is that data by itself, no matter how big, cannot help you---you need structural knowledge. Regarding more about Simpson's paradox, this answer might help. | When does Simpson's Paradox "end"? | Yes, you are right, we can create situations where the conditional association of one variable with another will change for each additional covariate you control for. For a simple simulation, I sugges | When does Simpson's Paradox "end"?
Yes, you are right, we can create situations where the conditional association of one variable with another will change for each additional covariate you control for. For a simple simulation, I suggest you look at Dagitty's Simpson's Machine, based on Pearl's paper.
However, the question you should ask yourself is the following: why are you worried that the marginal association is different from the conditional association? That's perfectly normal.
So when you ask
when is it ever possible to consider any result safe to use for future
calculations?
It seems you are not looking for associations only, but for stable, structural relationships. The short answer for your question is that data by itself, no matter how big, cannot help you---you need structural knowledge. Regarding more about Simpson's paradox, this answer might help. | When does Simpson's Paradox "end"?
Yes, you are right, we can create situations where the conditional association of one variable with another will change for each additional covariate you control for. For a simple simulation, I sugges |
50,346 | When does Simpson's Paradox "end"? | Yes, I think there can always be some unexplored factor that --- had we evaluated that factor --- would have changed our interpretation of the results. That's just a reality of imperfect knowledge. And particularly problematic in observational studies like the one described where the observations are not balanced. (That is, where there are unequal numbers of each sex in each class).
But we should take some solace in the fact that we have some opportunities to assess our data to the best of our abilities.
For this example, the odds ratio for the first table is 1.007, suggesting the difference in survival rate between the two classes is so tiny that we likely would not have considered it interesting. That is, the survival rate for each class is essentially 24%.
The upshot here is that I think this example is less an example of a paradox where the trend reverses, than an example of seeing nothing interesting in the first table, but finding something interesting when more information is added in the second table.
It's only when we have the information in the second table that we get some sense of the factors affecting survival.
Because the underlying question is about what we can conclude about the effect of Class on survival rate, I'll use logistic regression to answer this question.
##### Table 2 #####
Data = read.table(header=T, text="
Class Sex Survive NotSurvive
Third M 75 387
Third F 76 89
Crew M 192 670
Crew F 20 3
")
Trials = cbind(Data$Survive, Data$NotSurvive)
model = glm(Trials ~ Class + Sex + Class:Sex,
data = Data,
family = binomial(link="logit"))
library(car)
Anova(model)
### Analysis of Deviance Table (Type II tests)
###
### Response: Trials
### LR Chisq Df Pr(>Chisq)
### Class 13.510 1 0.0002373 ***
### Sex 88.568 1 < 2.2e-16 ***
### Class:Sex 8.502 1 0.0035472 **
Note that the interaction of Class and Sex is significant, suggesting that this is the effect that we should be paying attention to.
In the results below, prob is the probability calculated in the table in the question.
library(emmeans)
emmeans(model, ~ Class:Sex, type="response")
### Class Sex prob SE df asymp.LCL asymp.UCL
### Crew F 0.8695652 0.07022340 Inf 0.6645495 0.9573281
### Third F 0.4606061 0.03880395 Inf 0.3860325 0.5369860
### Crew M 0.2227378 0.01417187 Inf 0.1961989 0.2517422
### Third M 0.1623377 0.01715628 Inf 0.1314483 0.198824
We can also use estimated marginal means to estimate what the survival rate for each of the classes would be, had the sexes been balanced in each class. Below, we see that in fact, the survival in Crew is meaningfully and statistically higher.
This is a different conclusion than we would have come to from using the information in the first table only.
emmeans(model, ~ Class, type="response")
### Class prob SE df asymp.LCL asymp.UCL
### Crew 0.5802181 0.07605615 Inf 0.4284069 0.7182285
### Third 0.2891697 0.02063485 Inf 0.2504569 0.3312222
The addition of the information on sex has improved our understanding, but, still, there could always be some other important factor we have failed to measure that would have changed our interpretation. | When does Simpson's Paradox "end"? | Yes, I think there can always be some unexplored factor that --- had we evaluated that factor --- would have changed our interpretation of the results. That's just a reality of imperfect knowledge. | When does Simpson's Paradox "end"?
Yes, I think there can always be some unexplored factor that --- had we evaluated that factor --- would have changed our interpretation of the results. That's just a reality of imperfect knowledge. And particularly problematic in observational studies like the one described where the observations are not balanced. (That is, where there are unequal numbers of each sex in each class).
But we should take some solace in the fact that we have some opportunities to assess our data to the best of our abilities.
For this example, the odds ratio for the first table is 1.007, suggesting the difference in survival rate between the two classes is so tiny that we likely would not have considered it interesting. That is, the survival rate for each class is essentially 24%.
The upshot here is that I think this example is less an example of a paradox where the trend reverses, than an example of seeing nothing interesting in the first table, but finding something interesting when more information is added in the second table.
It's only when we have the information in the second table that we get some sense of the factors affecting survival.
Because the underlying question is about what we can conclude about the effect of Class on survival rate, I'll use logistic regression to answer this question.
##### Table 2 #####
Data = read.table(header=T, text="
Class Sex Survive NotSurvive
Third M 75 387
Third F 76 89
Crew M 192 670
Crew F 20 3
")
Trials = cbind(Data$Survive, Data$NotSurvive)
model = glm(Trials ~ Class + Sex + Class:Sex,
data = Data,
family = binomial(link="logit"))
library(car)
Anova(model)
### Analysis of Deviance Table (Type II tests)
###
### Response: Trials
### LR Chisq Df Pr(>Chisq)
### Class 13.510 1 0.0002373 ***
### Sex 88.568 1 < 2.2e-16 ***
### Class:Sex 8.502 1 0.0035472 **
Note that the interaction of Class and Sex is significant, suggesting that this is the effect that we should be paying attention to.
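As a side note (an addition for illustration, not part of the original answer), the fitted effects can also be inspected directly on the odds scale:
exp(cbind(OR = coef(model), confint(model)))   # odds ratios with profile confidence intervals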
In the results below, prob is the probability calculated in the table in the question.
library(emmeans)
emmeans(model, ~ Class:Sex, type="response")
### Class Sex prob SE df asymp.LCL asymp.UCL
### Crew F 0.8695652 0.07022340 Inf 0.6645495 0.9573281
### Third F 0.4606061 0.03880395 Inf 0.3860325 0.5369860
### Crew M 0.2227378 0.01417187 Inf 0.1961989 0.2517422
### Third M 0.1623377 0.01715628 Inf 0.1314483 0.198824
We can also use estimated marginal means to estimate what the survival rate for each of the classes would be, had the sexes been balanced in each class. Below, we see that in fact, the survival in Crew is meaningfully and statistically higher.
This is a different conclusion than we would have come to from using the information in the first table only.
emmeans(model, ~ Class, type="response")
### Class prob SE df asymp.LCL asymp.UCL
### Crew 0.5802181 0.07605615 Inf 0.4284069 0.7182285
### Third 0.2891697 0.02063485 Inf 0.2504569 0.3312222
The addition of the information on sex has improved our understanding, but, still, there could always be some other important factor we have failed to measure that would have changed our interpretation. | When does Simpson's Paradox "end"?
Yes, I think there can always be some unexplored factor that --- had we evaluated that factor --- would have changed our interpretation of the results. That's just a reality of imperfect knowledge. |
50,347 | Unbounded likelihoods for unpenalized mixed effects | Taking the simple model from amoeba
$$y_{ij} \sim N(\mu_i,\sigma_f^2) \qquad \text{with} \qquad \mu_i \sim N(0,\sigma_r^2)$$
The probability density to observe a sample of $\mathbf{ y_{ij} }$ is:
$$f_{\mathbf{ Y_{ij} }}(\mathbf{ y_{ij} }) = \det\left((2\pi)^k\Sigma\right)^{-\frac{1}{2}} e^{-\frac{1}{2}\mathbf{ y_{ij}}^T\Sigma^{-1}\mathbf{ y_{ij}}}$$
With $\Sigma$ having a block structure like
$$\Sigma = \begin{bmatrix}
J_1 & 0 & \dots &0 \\
0 & J_2 & \dots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \dots & J_n \\
\end{bmatrix}$$
and the blocks are like
$$J_i = \begin{bmatrix}
\sigma_f^2+\sigma_r^2 & \sigma_r^2 & \dots & \sigma_r^2 & \sigma_r^2 \\
\sigma_r^2 & \sigma_f^2+\sigma_r^2 & \dots & \sigma_r^2 & \sigma_r^2 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
\sigma_r^2 & \sigma_r^2 & \dots & \sigma_f^2+\sigma_r^2 & \sigma_r^2 \\
\sigma_r^2 & \sigma_r^2 & \dots & \sigma_r^2 & \sigma_f^2+\sigma_r^2
\end{bmatrix} $$
Nothing goes wrong when $\sigma_r \to 0$
...except $f_{\mu_i}(0) \to \infty$ becomes a degenerate distribution, which is however not relevant for the calculation/expression of the distribution $f_{\mathbf{Y_{ij} }}$.
If you consider the space of points $Y_{ij},\mu_i$ then you can see all the probability concentrating on a hyper-surface with $\mu_i=0$ and the density $f_{\mu_i}$ (along with $f_{Y_{ij},\mu_i}$) goes to infinity on this surface. But instead of $f_{Y_{ij},\mu_i}$ you wish to calculate $f_{Y_{ij}}$ $$f_{Y_{ij}}(y_{ij}) = \int f_{Y_{ij},\mu_i}(y_{ij},\mu_i) d\mu_i = \int f_{Y_{ij}|\mu_i}(y_{ij},\mu_i) f_{\mu_i}(\mu_i) d\mu_i$$ This density distribution $f_{Y_{ij}|\mu_i}$ for the distribution of $Y_{ij}$ on the hyper-surfaces with coordinates $\mu_i$ does not go to infinity. Or from another viewpoint, you integrate $f_{\mu_i}$ over an infinitely thin surface.
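A small numerical illustration of this point (toy data for a single group; assumes the mvtnorm package is available): the marginal density of the data stays bounded as $\sigma_r \to 0$.
library(mvtnorm)
y <- c(0.1, -0.3, 0.5)                       # toy y_ij for one group
marg_loglik <- function(sigma_r, sigma_f = 1) {
  S <- diag(sigma_f^2, 3) + sigma_r^2        # the block J_i above
  dmvnorm(y, sigma = S, log = TRUE)
}
sapply(c(1, 0.1, 1e-3, 1e-8), marg_loglik)   # converges to the sigma_r = 0 value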
The case might be that you use the following probability density / likelihood:
$$f_\mathbf{y_{ij}}(\mathbf{y_{ij}}\vert \mathbf{\mu_i}, \sigma_f, \sigma_r) = \frac{1}{\left( \sqrt{2 \pi \sigma_f^2} \right)^{n_j}} e^ { -\frac{\sum_{j=1}^{n_{j}} (y_{ij}-\mu_i)^2}{2 \sigma_f^2} } \cdot \frac{1}{\left( \sqrt{2 \pi \sigma_r^2} \right)^{n_i}} e^{ -\frac{\sum_{i=1}^{n_i} (\mu_i)^2}{2 \sigma_r^2} } $$
but I would say that this is badly defined (the math looks ok, but the interpretation is not). This is not a density that only needs to be integrated over $d y_{ij}$, but also over $d \mathbf{\mu_i}$. You should not turn this into a likelihood function like $\mathcal{L}(\mathbf{\mu_i}, \sigma_f, \sigma_r \vert \mathbf{y_{ij}})$ but instead $\mathcal{L}( \sigma_f, \sigma_r \vert \mathbf{y_{ij}}, \mathbf{\mu_i})$ (yet you do not observe $\mathbf{\mu_i}$).
It is incorrect to impose a relationship between the parameters in the likelihood function and add a corresponding density term to the expression of the likelihood function (that not surprisingly will blow up to infinity, in this way every unobserved variable may be added and becomes an infinite density somewhere, you could also add unobserved unicorns if you like). | Unbounded likelihoods for unpenalized mixed effects | Taking the simple model from amoeba
$$y_{ij} \sim N(\mu_i,\sigma_f^2) \qquad \text{with} \qquad \mu_i \sim N(0,\sigma_r^2)$$
The probability density to observe a sample of $\mathbf{ y_{ij} }$ is:
$$f_ | Unbounded likelihoods for unpenalized mixed effects
Taking the simple model from amoeba
$$y_{ij} \sim N(\mu_i,\sigma_f^2) \qquad \text{with} \qquad \mu_i \sim N(0,\sigma_r^2)$$
The probability density to observe a sample of $\mathbf{ y_{ij} }$ is:
$$f_{\mathbf{ Y_{ij} }}(\mathbf{ y_{ij} }) = \det\left((2\pi)^k\Sigma\right)^{-\frac{1}{2}} e^{-\frac{1}{2}\mathbf{ y_{ij}}^T\Sigma^{-1}\mathbf{ y_{ij}}}$$
With $\Sigma$ having a block structure like
$$\Sigma = \begin{bmatrix}
J_1 & 0 & \dots &0 \\
0 & J_2 & \dots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \dots & J_n \\
\end{bmatrix}$$
and the blocks are like
$$J_i = \begin{bmatrix}
\sigma_f^2+\sigma_r^2 & \sigma_r^2 & \dots & \sigma_r^2 & \sigma_r^2 \\
\sigma_r^2 & \sigma_f^2+\sigma_r^2 & \dots & \sigma_r^2 & \sigma_r^2 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
\sigma_r^2 & \sigma_r^2 & \dots & \sigma_f^2+\sigma_r^2 & \sigma_r^2 \\
\sigma_r^2 & \sigma_r^2 & \dots & \sigma_r^2 & \sigma_f^2+\sigma_r^2
\end{bmatrix} $$
Nothing goes wrong when $\sigma_r \to 0$
...except $f_{\mu_i}(0) \to \infty$ becomes a degenerate distribution, which is however not relevant for the calculation/expression of the distribution $f_{\mathbf{Y_{ij} }}$.
If you consider the space of points $Y_{ij},\mu_i$ then you can see all the probability concentrating on a hyper-surface with $\mu_i=0$ and the density $f_{\mu_i}$ (along with $f_{Y_{ij},\mu_i}$) goes to infinity on this surface. But instead of $f_{Y_{ij},\mu_i}$ you wish to calculate $f_{Y_{ij}}$ $$f_{Y_{ij}}(y_{ij}) = \int f_{Y_{ij},\mu_i}(y_{ij},\mu_i) d\mu_i = \int f_{Y_{ij}|\mu_i}(y_{ij},\mu_i) f_{\mu_i}(\mu_i) d\mu_i$$ This density distribution $f_{Y_{ij}|\mu_i}$ for the distribution of $Y_{ij}$ on the hyper-surfaces with coordinates $\mu_i$ does not go to infinity. Or from another viewpoint, you integrate $f_{\mu_i}$ over an infinitely thin surface.
The case might be that you use the following probability density / likelihood:
$$f_\mathbf{y_{ij}}(\mathbf{y_{ij}}\vert \mathbf{\mu_i}, \sigma_f, \sigma_r) = \frac{1}{\left( \sqrt{2 \pi \sigma_f^2} \right)^{n_j}} e^ { -\frac{\sum_{j=1}^{n_{j}} (y_{ij}-\mu_i)^2}{2 \sigma_f^2} } \cdot \frac{1}{\left( \sqrt{2 \pi \sigma_r^2} \right)^{n_i}} e^{ -\frac{\sum_{i=1}^{n_i} (\mu_i)^2}{2 \sigma_r^2} } $$
but I would say that this is badly defined (the math looks ok, but the interpretation is not). This is not a density that only needs to be integrated over $d y_{ij}$, but also over $d \mathbf{\mu_i}$. You should not turn this into a likelihood function like $\mathcal{L}(\mathbf{\mu_i}, \sigma_f, \sigma_r \vert \mathbf{y_{ij}})$ but instead $\mathcal{L}( \sigma_f, \sigma_r \vert \mathbf{y_{ij}}, \mathbf{\mu_i})$ (yet you do not observe $\mathbf{\mu_i}$).
It is incorrect to impose a relationship between the parameters in the likelihood function and add a corresponding density term to the expression of the likelihood function (that not surprisingly will blow up to infinity, in this way every unobserved variable may be added and becomes an infinite density somewhere, you could also add unobserved unicorns if you like). | Unbounded likelihoods for unpenalized mixed effects
Taking the simple model from amoeba
$$y_{ij} \sim N(\mu_i,\sigma_f^2) \qquad \text{with} \qquad \mu_i \sim N(0,\sigma_r^2)$$
The probability density to observe a sample of $\mathbf{ y_{ij} }$ is:
$$f_ |
50,348 | Position of knots in piecewise linear regression as 'random effects' | You can do this in the R package mcp. Although your actual full model may be outside the scope of mcp, this is a way to do "random effects" change points.
The mcp package contains a demo dataset called ex_varying:
> library(mcp)
> head(ex_varying)
id x id_numeric y
1 John 1 5 30.792018
2 John 5 5 1.027091
3 John 9 5 58.793870
4 John 13 5 40.300737
5 John 17 5 57.566408
6 John 21 5 80.876520
Model two joined slopes with the change point location varying by id. You will recognize this syntax from lme4:
model = list(
y ~ 1 + x, # intercept + slope
1 + (1|id) ~ 0 + x # joined slope, varying by id
)
fit = mcp(model, ex_varying)
plot(fit, facet_by = "id", cp_dens = FALSE)
You can visualize the change point posteriors using plot_pars(fit, "varying") and summarise them using ranef(fit). Read more in the mcp article on random effects (called "varying effects" in mcp cf. the terminology from the brms package). | Position of knots in piecewise linear regression as 'random effects' | You can do this in the R package mcp. Although your actual full model may be outside the scope of mcp, this is a way to do "random effects" change points.
The mcp package contains a demo dataset call | Position of knots in piecewise linear regression as 'random effects'
You can do this in the R package mcp. Although your actual full model may be outside the scope of mcp, this is a way to do "random effects" change points.
The mcp package contains a demo dataset called ex_varying:
> library(mcp)
> head(ex_varying)
id x id_numeric y
1 John 1 5 30.792018
2 John 5 5 1.027091
3 John 9 5 58.793870
4 John 13 5 40.300737
5 John 17 5 57.566408
6 John 21 5 80.876520
Model two joined slopes with the change point location varying by id. You will recognize this syntax from lme4:
model = list(
y ~ 1 + x, # intercept + slope
1 + (1|id) ~ 0 + x # joined slope, varying by id
)
fit = mcp(model, ex_varying)
plot(fit, facet_by = "id", cp_dens = FALSE)
You can visualize the change point posteriors using plot_pars(fit, "varying") and summarise them using ranef(fit). Read more in the mcp article on random effects (called "varying effects" in mcp cf. the terminology from the brms package). | Position of knots in piecewise linear regression as 'random effects'
You can do this in the R package mcp. Although your actual full model may be outside the scope of mcp, this is a way to do "random effects" change points.
The mcp package contains a demo dataset call |
50,349 | Why do orthogonal designs have the advantage of greater efficiency and interpretability? | In a multi-way ANOVA (e.g., a two-way ANOVA), unbalanced designs have the disadvantage that the main effects are not independent (orthogonal) to the interactions of which they are apart. As such, you get different estimates for the test of the main effects depending on whether you fit a Type I, II, or III sums-of-squares model. In a balanced design, this is not an issue. This is the justification for balanced factorial designs when doing experimental work where you have control over such things. This is not an issue in a one-way ANOVA because there is only one factor (i.e., no interaction). However, balanced sample sizes across groups in a one-way ANOVA will maximize statistical power, assuming a fixed total N and all other things being equal. This balance also helps the model be robust to violations of the equal variances assumption. That said, increasing the sample size in just one group does increase statistical power. This increase will be less than you would achieve by dividing this increase equally across groups. For example, a design with sample sizes of 30-30-30 (N=90) across three groups will have more power than a design with 20-30-40 (N=90). However, a design with 30-30-60 (N=120) will have more power than a design with 30-30-30 (N=90), but less than a design with 40-40-40 (N=120). | Why do orthogonal designs have the advantage of greater efficiency and interpretability? | In a multi-way ANOVA (e.g., a two-way ANOVA), unbalanced designs have the disadvantage that the main effects are not independent (orthogonal) to the interactions of which they are apart. As such, you | Why do orthogonal designs have the advantage of greater efficiency and interpretability?
In a multi-way ANOVA (e.g., a two-way ANOVA), unbalanced designs have the disadvantage that the main effects are not independent (orthogonal) to the interactions of which they are apart. As such, you get different estimates for the test of the main effects depending on whether you fit a Type I, II, or III sums-of-squares model. In a balanced design, this is not an issue. This is the justification for balanced factorial designs when doing experimental work where you have control over such things. This is not an issue in a one-way ANOVA because there is only one factor (i.e., no interaction). However, balanced sample sizes across groups in a one-way ANOVA will maximize statistical power, assuming a fixed total N and all other things being equal. This balance also helps the model be robust to violations of the equal variances assumption. That said, increasing the sample size in just one group does increase statistical power. This increase will be less than you would achieve by dividing this increase equally across groups. For example, a design with sample sizes of 30-30-30 (N=90) across three groups will have more power than a design with 20-30-40 (N=90). However, a design with 30-30-60 (N=120) will have more power than a design with 30-30-30 (N=90), but less than a design with 40-40-40 (N=120). | Why do orthogonal designs have the advantage of greater efficiency and interpretability?
In a multi-way ANOVA (e.g., a two-way ANOVA), unbalanced designs have the disadvantage that the main effects are not independent (orthogonal) to the interactions of which they are apart. As such, you |
50,350 | Drawing conclusions of several inferences with the same data in one study | First, in my understanding and in principle, testing a set of different predefined hypotheses on a given data set is a valid procedure.
However, it seems that your problem relates to a set of non-predefined hypotheses, and in my understanding the very nature of your question is about what you mean by "draw conclusions". As you mentioned in the comment, your hypotheses were not planned (or at least some of them were not). Consequently, your analysis will be at best purely exploratory, and drawing definitive conclusions is out of scope. I suggest this question and its associated answers, which discuss why this is the case. A brief summary could be: there are too many degrees of freedom in a data set to draw conclusions from hypotheses generated after having seen the data.
Nevertheless,documenting and discussing the effect sizes of side-observations is relevant and useful. Just be aware and make your readers aware that these are observations needing to be tested properly (but that still may served a reasoned discussion). | Drawing conclusions of several inferences with the same data in one study | First, in my understanding and in principle, testing a set of different predefined hypothesis on a given data set is a valid procedure.
However, it seems that your problematic is related to a set of | Drawing conclusions of several inferences with the same data in one study
First, in my understanding and in principle, testing a set of different predefined hypothesis on a given data set is a valid procedure.
However, it seems that your problematic is related to a set of none-predefined hypothesis and in my understanding, the very nature of your question is about what do you mean by "draw conclusions". As you mentioned in the comment, your hypothesis were not planned (or at least a part of them). Consequently, your analysis will be at best purely explanatory and drawing definitive conclusions is out of your scope. I suggest you this question and associated answers discussing about why this is the case. A brief summary could be: there is too much degree of freedom in a data set to draw conclusion from hypothesis generated after having see the data.
Nevertheless,documenting and discussing the effect sizes of side-observations is relevant and useful. Just be aware and make your readers aware that these are observations needing to be tested properly (but that still may served a reasoned discussion). | Drawing conclusions of several inferences with the same data in one study
First, in my understanding and in principle, testing a set of different predefined hypothesis on a given data set is a valid procedure.
However, it seems that your problematic is related to a set of |
50,351 | Drawing conclusions of several inferences with the same data in one study | The answer is yet tentative; I'll add to it -- or remove it -- later.
In principle you can extract as many different conclusions from your data as you want. This includes hypotheses and also inferences. You will notice, however, that these conclusions might overlap or even contradict each other. You could argue that this is especially the case if the statistical power is insufficient to draw a certain conclusion conclusively.
But it would be a severe error if you were using the same data to train, test and/or validate some extraction or refining method. But this might or might not be the case here. You have a notion that there might be some feature present and you test for this feature. This test can be implemented in a lot of ways. The questions (i) "feature A is present" and (ii) "feature A is not present" are not the same; if you find that you have data to support (i), you still might not be able to reject (ii).
Barnard's test statistic, including its refinement by Boschloo, is among the best ways to do this testing, afaik. | Drawing conclusions of several inferences with the same data in one study | The answer is yet tentative; I'll add to it -- or remove it -- later.
In principle you can extract as many different conclusions from your data as you want. This includes hypotheses and also inference | Drawing conclusions of several inferences with the same data in one study
The answer is yet tentative; I'll add to it -- or remove it -- later.
In principle you can extract as many different conclusions from your data as you want. This includes hypotheses and also inferences. You will notice however, that these conclusions might overlap or even contradict each other. You could argue that is especially then the case, if the statistical power is insufficient to draw a certain conclusion conclusively.
But it would be a severe error, if you were using the same data to train, test and/or validate some extraction or refining method. But this might or might not be the case here. You have a notion that there might be some feature present and you test for this feature. This test can be implemented in a lot of ways. The questions (i) "feature A is present" and (ii) "feature A is not present" are not the same; if you find that you have data to support (i), you still might not be able to reject (ii).
Barnard's test statistic, including its refinement by Boschloo, is among the best ways to do this testing, afaik.
The answer is yet tentative; I'll add to it -- or remove it -- later.
In principle you can extract as many different conclusions from your data as you want. This includes hypotheses and also inference |
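Since the answer closes by recommending Barnard's and Boschloo's tests, here is a minimal Python sketch of running them on a 2x2 table; it assumes a reasonably recent SciPy (these functions were added around SciPy 1.7/1.8), and the counts are made up purely for illustration.
import scipy.stats as st

table = [[7, 17],    # e.g., feature present: successes / failures
         [2, 22]]    # feature absent: successes / failures

print(st.barnard_exact(table, alternative="two-sided").pvalue)
print(st.boschloo_exact(table, alternative="two-sided").pvalue)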
50,352 | Pooling homogenous studies vs. using meta-analysis/bayesian | I will address frequentist meta-analysis, for which the answer is: The two approaches will give asymptotically equivalent point estimates if (1) you are using an effect size measure satisfying a property I will give below; and (2) the samples are homogenous not only in true effect size, but also in within-study variance.
Let $\mathbf{X}$ be your entire sample of size 300, and let $\mathbf{X}_1$ and $\mathbf{X}_2$ be the first and second parts of this sample (of sizes 100 and 200).
Let $\widehat{y}_i$ with $i \in {1,2}$ be point estimates from each sample $\mathbf{X}_1$ and $\mathbf{X}_2$. They have within-study variances $\sigma_i^2$, assumed fixed and known (the usual assumption in meta-analysis), and assumed equal by homogeneity. Let $\widehat{y}_R$ be the pooled point estimate from a random-effects meta-analysis, and let $\tau^2$ be the estimated heterogeneity (the variance of the true effects). Suppose you're meta-analyzing a statistic $g(\cdot)$.
Let's check whether the meta-analytically pooled point estimate matches the simple estimate pooling all the data (call it the aggregate-data estimate).
$$\eqalign{
\widehat{y}_R &:= \frac{ \sum_{i=1}^2 \frac{1}{\tau^2 +\sigma^2_i }\widehat{y}_i }{\sum_{i=1}^2\frac{1}{\tau^2 +\sigma^2_i }} \\
&\to \frac{ \sum_{i=1}^2 \frac{1}{\sigma^2_i }\widehat{y}_i }{\sum_{i=1}^2\frac{1}{\sigma^2_i }} \tag{$\tau^2 \to 0$, the truth}\\
&= \frac{ \frac{1}{\sigma^2}\widehat{y}_1 + \frac{1}{\sigma^2}\widehat{y}_2}{\frac{2}{\sigma^2}} \\
&= \frac{ \widehat{y}_1 + \widehat{y}_2}{2}
}$$
(The penultimate line comes from assuming homogeneous within-study variances.)
Now, this last expression is a simple average of the two samples' point estimates. So that means that if you choose a test statistic such that:
$$\frac{ g(\mathbf{X}_1) + g(\mathbf{X}_2) }{2} = g(\mathbf{X}) \tag{*}$$
then the meta-analytic estimate will be asymptotically equivalent to your aggregate-data estimate. $(*)$ holds, for instance, if $g(\cdot)$ is the sample mean, but not if $g(\cdot)$ is the odds ratio. (However, note that you wouldn't want to meta-analyze untransformed odds ratios anyway since they do not fulfill the often-used normality assumption.)
Here is a code example to illustrate the equivalence when the statistic of interest is the sample mean. The inference also appears to be quite similar.
n1 = 100
n2 = 300 - n1
theta = 2 # true mean
sigw = 1 # common within-study variance
# generate whole dataset
X = rnorm( n1 + n2, mean = theta, sd = sigw )
# split into 2 subsamples
X1 = X[1:n1]
X2 = X[(n1+1):(n1+n2)]
# get point estimates and SEs for each subsample
ests = c( mean(X1), mean(X2) )
ses = c( sd(X1) / sqrt(n1), sd(X2) / sqrt(n2) )
# meta-analyze them
library(metafor)
ES = escalc( measure = "MD", yi = ests, sei = ses )
m = rma.uni(ES, method = "REML")
# compare point estimate to aggregate analysis
m$b; mean(X)
# compare inference to aggregate analysis
sqrt(m$vb); sd(X) / sqrt(n1 + n2) | Pooling homogenous studies vs. using meta-analysis/bayesian | I will address frequentist meta-analysis, for which the answer is: The two approaches will give asymptotically equivalent point estimates if (1) you are using an effect size measure satisfying a prope | Pooling homogenous studies vs. using meta-analysis/bayesian
I will address frequentist meta-analysis, for which the answer is: The two approaches will give asymptotically equivalent point estimates if (1) you are using an effect size measure satisfying a property I will give below; and (2) the samples are homogenous not only in true effect size, but also in within-study variance.
Let $\mathbf{X}$ be your entire sample of size 300, and let $\mathbf{X}_1$ and $\mathbf{X}_2$ be the first and second parts of this sample (of sizes 100 and 200).
Let $\widehat{y}_i$ with $i \in {1,2}$ be point estimates from each sample $\mathbf{X}_1$ and $\mathbf{X}_2$. They have within-study variances $\sigma_i^2$, assumed fixed and known (the usual assumption in meta-analysis), and assumed equal by homogeneity. Let $\widehat{y}_R$ be the pooled point estimate from a random-effects meta-analysis, and let $\tau^2$ be the estimated heterogeneity (the variance of the true effects). Suppose you're meta-analyzing a statistic $g(\cdot)$.
Let's check whether the meta-analytically pooled point estimate matches the simple estimate pooling all the data (call it the aggregate-data estimate).
$$\eqalign{
\widehat{y}_R &:= \frac{ \sum_{i=1}^2 \frac{1}{\tau^2 +\sigma^2_i }\widehat{y}_i }{\sum_{i=1}^2\frac{1}{\tau^2 +\sigma^2_i }} \\
&\to \frac{ \sum_{i=1}^2 \frac{1}{\sigma^2_i }\widehat{y}_i }{\sum_{i=1}^2\frac{1}{\sigma^2_i }} \tag{$\tau^2 \to 0$, the truth}\\
&= \frac{ \frac{1}{\sigma^2}\widehat{y}_1 + \frac{1}{\sigma^2}\widehat{y}_2}{\frac{2}{\sigma^2}} \\
&= \frac{ \widehat{y}_1 + \widehat{y}_2}{2}
}$$
(The penultimate line comes from assuming homogeneous within-study variances.)
Now, this last expression is a simple average of the two samples' point estimates. So that means that if you choose a test statistic such that:
$$\frac{ g(\mathbf{X}_1) + g(\mathbf{X}_2) }{2} = g(\mathbf{X}) \tag{*}$$
then the meta-analytic estimate will be asymptotically equivalent to your aggregrate-data estimate. $(*)$ holds, for instance, if $g(\cdot)$ is the sample mean, but not if $g(\cdot)$ is the odds ratio. (However, note that you wouldn't want to meta-analyze untransformed odds ratios anyway since they do not fulfill the often-used normality assumption.)
Here is a code example to illustrate the equivalence when the statistic of interest is the sample mean. The inference also appears to be quite similar.
n1 = 100
n2 = 300 - n1
theta = 2 # true mean
sigw = 1 # common within-study variance
# generate whole dataset
X = rnorm( n1 + n2, mean = theta, sd = sigw )
# split into 2 subsamples
X1 = X[1:n1]
X2 = X[(n1+1):(n1+n2)]
# get point estimates and SEs for each subsample
ests = c( mean(X1), mean(X2) )
ses = c( sd(X1) / sqrt(n1), sd(X2) / sqrt(n2) )
# meta-analyze them
library(metafor)
ES = escalc( measure = "MD", yi = ests, sei = ses )
m = rma.uni(ES, method = "REML")
# compare point estimate to aggregate analysis
m$b; mean(X)
# compare inference to aggregate analysis
sqrt(m$vb); sd(X) / sqrt(n1 + n2) | Pooling homogenous studies vs. using meta-analysis/bayesian
I will address frequentist meta-analysis, for which the answer is: The two approaches will give asymptotically equivalent point estimates if (1) you are using an effect size measure satisfying a prope |
50,353 | A question about the inversion method | This is an interesting question, somewhat related with copulas. In the first proposal, when defining
$$\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_d\end{pmatrix} = \begin{pmatrix} f_{1}(z_1) \\ f_{2}(z_2) \\ \vdots \\ f_{d}(z_d)\end{pmatrix}$$
the transforms are over the marginals. Therefore, $X_1$ has the correct marginal distribution $P_1$, $X_2$ has the correct marginal distribution $P_2$, &tc. But the transform of the vector ${\bf Z}=(Z_1,\ldots,Z_d)$ carries the correlation structure of this vector into a correlation structure for the vector ${\bf X}=(X_1,\ldots,X_d)$ that is not the original correlation structure (except for rare cases, as when the components are independent for both $\bf X$ and $\bf Z$). This transform fails to reproduce the joint distribution of $\bf X$.
In the second case, the joint distribution of $\bf X$ is correctly preserved: when$$X_1=F_1^{-1}(G_1(Z_1)) \qquad X_2=F_2^{-1}(G_2(Z_2|Z_1)|X_1)$$equivalent to
$$X_1=F_1^{-1}(U_1) \qquad X_2=F_2^{-1}(U_2|X_1)$$with $U_1$ and $U_2$ independent ${\cal U}(0,1)$, they satisfy$$X_1\sim F_1(x_1)\qquad X_2|X_1=x_1\sim F_{2|1}(x_2|x_1)$$and hence$$(X_1,X_2)\sim F_{1,2}(x_1,x_2)$$ the correct joint distribution.
Comparing with the first proposal, $$X_1=F_1^{-1}(G_1(Z_1)) \qquad X_2=F_2^{-1}(G_2(Z_2))$$is equivalent to $$X_1=F_1^{-1}(U_1) \qquad X_2=F_2^{-1}(U_2)$$with $U_1$ and $U_2$ dependent ${\cal U}(0,1)$. | A question about the inversion method | This is an interesting question, somewhat related with copulas. In the first proposal, when defining
$$\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_d\end{pmatrix} = \begin{pmatrix} f_{1}(z_1) \\ f_{2}(z_ | A question about the inversion method
This is an interesting question, somewhat related with copulas. In the first proposal, when defining
$$\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_d\end{pmatrix} = \begin{pmatrix} f_{1}(z_1) \\ f_{2}(z_2) \\ \vdots \\ f_{d}(z_d)\end{pmatrix}$$
the transforms are over the marginals. Therefore, $X_1$ has the correct marginal distribution $P_1$, $X_2$ has the correct marginal distribution $P_2$, &tc. But the transform of the vector ${\bf Z}=(Z_1,\ldots,Z_n)$ carries the correlation structure of this vector into a correlation structure for the vector ${\bf X}=(X_1,\ldots,X_n)$ that is not the original correlation structure (except for rare cases, as when the components are independent for both $\bf X$ and $\bf Z$). This transform fails to reproduce the joint distribution of $\bf X$.
In the second case, the joint distribution of $\bf X$ is correctly preserved: when$$X_1=F_1^{-1}(G_1(Z_1)) \qquad X_2=F_2^{-1}(G_2(Z_2|Z_1)|X_1)$$equivalent to
$$X_1=F_1^{-1}(U_1) \qquad X_2=F_2^{-1}(U_2|X_1)$$with $U_1$ and $U_2$ independent ${\cal U}(01,1)$, they satisfy$$X_1\sim F_1(x_2)\qquad X_2|X_1=x_1\sim F_{2|1}(x_2|x_1)$$and hence$$(X_1,X_2)\sim F_{1,2}(x_1,_2)$$ the correct joint distribution.
Comparing with the first proposal, $$X_1=F_1^{-1}(G_1(Z_1)) \qquad X_2=F_2^{-1}(G_2(Z_2))$$is equivalent to $$X_1=F_1^{-1}(U_1) \qquad X_2=F_2^{-1}(U_2)$$with $U_1$ and $U_2$ dependent ${\cal U}(01,1)$. | A question about the inversion method
This is an interesting question, somewhat related with copulas. In the first proposal, when defining
$$\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_d\end{pmatrix} = \begin{pmatrix} f_{1}(z_1) \\ f_{2}(z_ |
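A small Python sketch (my own illustration, not from the answer) makes the contrast concrete for a case where everything is Gaussian: Z has correlation 0.9, while the target X should have standard normal margins with correlation 0.2; all the numbers are arbitrary choices.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, rho_z, rho_x = 100_000, 0.9, 0.2

# draw Z with correlation rho_z
z1 = rng.standard_normal(n)
z2 = rho_z * z1 + np.sqrt(1 - rho_z**2) * rng.standard_normal(n)

# first proposal: transform each coordinate through its own marginal only;
# margins come out right, but Z's dependence is carried over unchanged
x1_m = norm.ppf(norm.cdf(z1))
x2_m = norm.ppf(norm.cdf(z2))
print(np.corrcoef(x1_m, x2_m)[0, 1])   # about 0.9, not the target 0.2

# second proposal: sequential (conditional) inversion
u1 = norm.cdf(z1)                                          # G1(Z1)
u2 = norm.cdf((z2 - rho_z * z1) / np.sqrt(1 - rho_z**2))   # G2(Z2 | Z1)
x1 = norm.ppf(u1)                                          # F1^{-1}(U1)
x2 = rho_x * x1 + np.sqrt(1 - rho_x**2) * norm.ppf(u2)     # F2^{-1}(U2 | X1)
print(np.corrcoef(x1, x2)[0, 1])        # about 0.2, the target correlation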
50,354 | LSTM NN produces "shifted" forecast (low quality result) | So, after trying many input and parameter tweaks, I came to the conclusion that an LSTM cannot capture long dependencies until it gets a long enough vector of past time series values. In my experiments a so-so quality of forecast could be obtained after feeding the net with 64 lags, which span over the seasonalities in the model.
Another thing is that minibatches are a bad idea if they are sampled randomly. In the neural network implementation I played with, I made it work by passing 100% of the examples in each iteration. That way I ensured that all examples come in time-wise sequences.
Also it is worth mentioning that the LSTM result compared poorly against a linear benchmarking model.
If you think I am wrong, give me good counter arguments. | LSTM NN produces "shifted" forecast (low quality result) | So, after trying many input and parameter tweaks, I came to a conclusion that LSTM cannot long dependencies until it gets long enough vector of past time series values. In my experiments a so-so good | LSTM NN produces "shifted" forecast (low quality result)
So, after trying many input and parameter tweaks, I came to a conclusion that LSTM cannot long dependencies until it gets long enough vector of past time series values. In my experiments a so-so good quality of forecast could be obtained after feeding the net with 64 lags, which span over the seasonalities in the model.
Another thing is that minibatches are a bad idea if they were sampled randomly. In the realization of neural networks I played with I made it work with 100% of examples passed in iteration. That way I ensured that all examples come in time-wise sequences.
Also it is worth mentioning that the LSTM result compared poorly against a linear benchmarking model.
If you think I am wrong, give me good counter arguments. | LSTM NN produces "shifted" forecast (low quality result)
So, after trying many input and parameter tweaks, I came to a conclusion that LSTM cannot long dependencies until it gets long enough vector of past time series values. In my experiments a so-so good |
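For readers who want to reproduce the setup described above, here is a minimal Python sketch (my own illustration; the synthetic series, the 64-lag window, and the 80/20 split are assumptions) showing how to build lagged examples and keep them in time order rather than shuffling them into random mini-batches.
import numpy as np

n_lags = 64
t = np.arange(2000)
rng = np.random.default_rng(0)
series = np.sin(2 * np.pi * t / 24) + 0.1 * rng.standard_normal(t.size)  # toy seasonal series

# each row of X holds the previous n_lags values; y is the next value
X = np.stack([series[i:i + n_lags] for i in range(series.size - n_lags)])
y = series[n_lags:]

# chronological split instead of random shuffling
split = int(0.8 * len(X))
X_train, y_train = X[:split], y[:split]
X_test, y_test = X[split:], y[split:]
print(X_train.shape, X_test.shape)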
50,355 | Is stratified sampling and oversampling contradictory in imbalanced datasets? | To a large extent, the desire to apply artificial balancing comes from using improper scoring rules, chiefly accuracy.
In particular, it seems that people realized that a model fit to strongly imbalanced data could achieve an impressive-looking $98\%$ accuracy yet still underperform a model that always predicts the majority category, such as when the majority category represents $99\%$ of all observations. Consequently, people seem to have changed the class ratio to allow such a high accuracy to be more reflective of strong performance: if you balance the classes, then predicting one class every time results in $50\%$ accuracy (or worse, if there are three or more classes), so scoring $98\%$ would be quite an improvement.
I see this as a major drawback of accuracy, and a simple remedy, comparison of error rates, makes it more comparable to $R^2$ in regression and might be a more useful measure of performance. I show in the link what happens when you have an accuracy score that looks high but underperforms predicting the majority class every time, and this statistic indeed flags that as poor performance.
However, accuracy, comparison of error rates, and some classics like sensitivity, specificity, and $F_1$ score, all have the downside of being based on hard classifications. Most "classification" models, such as logistic regressions and neural networks, do not output predicted class labels. Instead, they output values on a continuum that often can be interpreted as a probability, and then you can make decisions based on those probabilities. Importantly, those decisions can depend on factors other than the probabilities (such as features in the model: one might be more willing to make a certain decision for the usual people than for a high-roller) and can be more numerous than the categories.
There are exceptions to the upcoming statement, such as in data collection or perhaps for computational reasons when it comes to numerical optimization of neural networks, but the apparent problems when it comes to class imbalance largely do not manifest when models are evaluated on the continuous predictions.
You are correct to point out that representative samples and oversampling contradict each other, but oversampling is largely a solution to a non-problem. For the most part, I am with you that it makes sense to develop models on representative data. If a category is rare, then we should be skeptical that an observation belongs to it by assigning a low prior probability and making the features have to shine through to prove that there is a strong chance that the observation indeed belongs to that category.
This link is already elsewhere in this answer, but many of the claimed issues with class imbalance are debunked here by our Stephan Kolassa. | Is stratified sampling and oversampling contradictory in imbalanced datasets? | To a large extent, the desire to apply artificial balancing comes from using improper scoring rules, chiefly accuracy.
In particular, it seems that people realized that a model with strong imbalance c | Is stratified sampling and oversampling contradictory in imbalanced datasets?
To a large extent, the desire to apply artificial balancing comes from using improper scoring rules, chiefly accuracy.
In particular, it seems that people realized that a model with strong imbalance could achieve an impressive-looking $98\%$ accuracy by predicting yet underperform a model that always predicts the majority category, such as if the majority category represents $99\%$ of all observations. Consequently, people seem to have changed the class ratio to allow for such a high accuracy to be more reflective of strong performance: if you balance the classes, then predicting one class every time results in $50\%$ accuracy (or worse, if there or three or more classes), so scoring $98\%$ would be quite an improvement.
I see this as a major drawback of accuracy, and a simple remedy, comparison of error rates, makes it more comparable to $R^2$ in regression and might be a more useful measure of performance. I show in the link what happens when you have an accuracy score than looks high but underperforms predicting the majority class every time, and this statistic indeed flags that as poor performance.
However, accuracy, comparison of error rates, and some classics like sensitivity, specificity, and $F_1$ score, all have the downside of being based on hard classifications. Most "classification" models, such a logistic regressions and neural networks, do not output predicted class labels. Instead, they output values on a continuum that often can be interpreted as a probability, and then you can make decisions based on those probabilities. Importantly, those decisions can depend on factors other than the probabilities (such a features in the model...might be more willing to make a certain decision for the usual people than with a high-roller) and can be more numerous than the categories.
There are exceptions to the upcoming statement, such as in data collection or perhaps for computational reasons when it comes to numerical optimization of neural networks, but the apparent problems when it comes to class imbalance largely do not manifest when models are evaluated on the continuous predictions.
You are correct to point out that representative samples and oversampling contradict each other, but oversampling is largely a solution to a non-problem. For the most part, I am with you that it makes sense to develop models on representative data. If a category is rare, then we should be skeptical that an observation belongs to it by assigning a low prior probability and making the features have to shine through to prove that there is a strong chance that the observation indeed belongs to that category.
This link is already elsewhere in this answer, but many of the claimed issues with class imbalance are debunked here by our Stephan Kolassa. | Is stratified sampling and oversampling contradictory in imbalanced datasets?
To a large extent, the desire to apply artificial balancing comes from using improper scoring rules, chiefly accuracy.
In particular, it seems that people realized that a model with strong imbalance c |
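To illustrate the point about accuracy on imbalanced data, here is a small Python sketch (my own illustration, not part of the original answer); the simulated data, the logistic model, and the use of the Brier score as the proper scoring rule are all assumptions chosen for the demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, brier_score_loss

rng = np.random.default_rng(0)
n = 20_000
x = rng.standard_normal((n, 1))
p = 1 / (1 + np.exp(-(-5.5 + 2.0 * x[:, 0])))   # rare positives (~1%)
y = rng.binomial(1, p)

model = LogisticRegression().fit(x, y)
prob = model.predict_proba(x)[:, 1]

print("always-majority accuracy:", accuracy_score(y, np.zeros_like(y)))
print("model accuracy (0.5 cut):", accuracy_score(y, (prob > 0.5).astype(int)))
print("base-rate Brier score:   ", brier_score_loss(y, np.full(n, y.mean())))
print("model Brier score:       ", brier_score_loss(y, prob))
Accuracy barely distinguishes the model from the always-majority rule, while the proper scoring rule applied to the predicted probabilities does.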
50,356 | Run MAP estimates before MCMC in most cases? | MAP is the mode of the posterior distribution and, as noticed in the comments, it does not have to be a reasonable estimate to consider. Likely, you can see MAP in the tutorials because they want to show different possible functionalities of their software, rather than the most methodologically sound solution. In some cases it may be reasonable to use MAP as a starting point for sampling, since this enables the sampler to start at a reasonable point, which should give you reasonable samples faster than if it started from a completely random initialization. Notice however that this does not have to work; for example, the PyMC3 documentation discourages using MAP as an initialization for the NUTS sampler and uses a different form of initialization as a default. So definitely this is not a one-size-fits-all solution. | Run MAP estimates before MCMC in most cases? | MAP is mode of the posterior distribution, as noticed in the comments, it does not have to be a reasonable estimate to consider. Likely, you can see MAP in the tutorials, because they want to show dif | Run MAP estimates before MCMC in most cases?
MAP is the mode of the posterior distribution and, as noticed in the comments, it does not have to be a reasonable estimate to consider. Likely, you can see MAP in the tutorials because they want to show different possible functionalities of their software, rather than the most methodologically sound solution. In some cases it may be reasonable to use MAP as a starting point for sampling, since this enables the sampler to start at a reasonable point, which should give you reasonable samples faster than if it started from a completely random initialization. Notice however that this does not have to work; for example, the PyMC3 documentation discourages using MAP as an initialization for the NUTS sampler and uses a different form of initialization as a default. So definitely this is not a one-size-fits-all solution.
MAP is mode of the posterior distribution, as noticed in the comments, it does not have to be a reasonable estimate to consider. Likely, you can see MAP in the tutorials, because they want to show dif |
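For concreteness, here is a hedged sketch assuming a PyMC3-style API (function and argument names differ across PyMC versions, so treat this as an illustrative sketch rather than a definitive recipe); it shows MAP as an optional starting point while deferring to the library's default initialization, as the answer recommends.
import numpy as np
import pymc3 as pm

data = np.random.default_rng(0).normal(1.0, 1.0, size=200)

with pm.Model():
    mu = pm.Normal("mu", mu=0.0, sigma=10.0)
    pm.Normal("obs", mu=mu, sigma=1.0, observed=data)

    map_estimate = pm.find_MAP()         # posterior mode (MAP)
    trace = pm.sample(1000, tune=1000)   # default initialization, usually preferable for NUTS
    # trace = pm.sample(1000, start=map_estimate)  # possible, but discouraged as discussed above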
50,357 | How to compute $g_i$ and $h_i$, i.e. the first and second derivative of the loss function in XGBoost? | I think that part of the misunderstanding stems from using the symbol $h$ in two different places for two different meanings.
The code portion of the question seems to have little to do with the mathematics of XGBoost, since the code snippets are not part of the XGBoost software.
Denote the binary cross-entropy loss for a single sample
$$
L(y_i, \hat{y}_i) = -\left[ y_i \log(\hat{y}_i) + (1- y_i) \log(1 - \hat{y}_i) \right].
$$
The loss for the model is $\sum_i L(y_i, \hat{y}_i)$. This is a quantity that we want to minimize.
The authors provide that $g_i = \partial_{\hat{y}_i^{(t-1)}} L\left(y_i, \hat{y}_i^{(t-1)}\right)$, with the notation $\text{something}^{(t-1)}$ denoting that this is the prediction for trees up to and including tree number $t-1$. We can write, dropping indices on $y$ because life is short,
$$
\begin{align}
g_i &= \frac{\partial}{\partial \hat{y}} L(y, \hat{y}) \\
&= -\left[ \frac{y}{\hat{y}} - \frac{1 - y}{1 - \hat{y}} \right] \\
&= -\frac{y(1 - \hat{y}) - \hat{y}(1-y)}{\hat{y}(1 - \hat{y})} \\
&= -\frac{ y - y\hat{y} - \hat{y}+y\hat{y} }{\hat{y}(1 - \hat{y})} \\
&= \frac{\hat{y} - y}{\hat{y}(1 - \hat{y})}
\end{align}
$$
For $h_i$, we can follow the same procedure.
$$
\begin{align}
h_i &= \partial^2_{\hat{y}_i^{(t-1)}} L\left(y_i, \hat{y}_i^{(t-1)}\right) \\
&= \frac{\partial}{\partial \hat{y}} g_i \\
&= \frac{\partial}{\partial \hat{y}} \left[\frac{\hat{y} - y}{\hat{y}(1 - \hat{y})}\right] \\
&= \frac{y}{\hat{y}^2} - \frac{ y - 1}{(\hat{y} -1)^2}
\end{align}
$$
but remember that for compactness/ease of reading I dropped all of the super- and sub-scripts. | How to compute $g_i$ and $h_i$, i.e. the first and second derivative of the loss function in XGBoost | I think that part of the misunderstanding stems from using the symbol $h$ in two different places for two different meanings.
The code portion of the question seems to have little to do with the math | How to compute $g_i$ and $h_i$, i.e. the first and second derivative of the loss function in XGBoost?
I think that part of the misunderstanding stems from using the symbol $h$ in two different places for two different meanings.
The code portion of the question seems to have little to do with the mathematics of XGBoost, since the code snippets are not part of the XGBoost software.
Denote the binary cross-entropy loss for a single sample
$$
L(y_i, \hat{y}_i) = -\left[ y_i \log(\hat{y}_i) + (1- y_i) \log(1 - \hat{y}_i) \right].
$$
The loss for the model is $\sum_i L(y_i, \hat{y}_i)$. This is a quantity that we want to minimize.
The authors provide that $g_i = \partial_{\hat{y}_i^{(t-1)}} L\left(y_i, \hat{y}_i^{(t-1)}\right)$, with the notation $\text{something}^{(t-1)}$ denoting that this is the prediction for trees up to and including tree number $t-1$. We can write, dropping indices on $y$ because life is short,
$$
\begin{align}
g_i &= \frac{\partial}{\partial \hat{y}} L(y, \hat{y}) \\
&= -\left[ \frac{y}{\hat{y}} - \frac{1 - y}{1 - \hat{y}} \right] \\
&= -\frac{y(1 - \hat{y}) - \hat{y}(1-y)}{\hat{y}(1 - \hat{y})} \\
&= -\frac{ y - y\hat{y} - \hat{y}+y\hat{y} }{\hat{y}(1 - \hat{y})} \\
&= \frac{\hat{y} - y}{\hat{y}(1 - \hat{y})}
\end{align}
$$
For $h_i$, we can follow the same procedure.
$$
\begin{align}
h_i &= \partial^2_{\hat{y}_i^{(t-1)}} L\left(y_i, \hat{y}_i^{(t-1)}\right) \\
&= \frac{\partial}{\partial \hat{y}} g_i \\
&= \frac{\partial}{\partial \hat{y}} \left[\frac{\hat{y} - y}{\hat{y}(1 - \hat{y})}\right] \\
&= \frac{y}{\hat{y}^2} - \frac{ y - 1}{(\hat{y} -1)^2}
\end{align}
$$
but remember that for compactness/ease of reading I dropped all of the super- and sub-scripts. | How to compute $g_i$ and $h_i$, i.e. the first and second derivative of the loss function in XGBoost
I think that part of the misunderstanding stems from using the symbol $h$ in two different places for two different meanings.
The code portion of the question seems to have little to do with the math |
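As a quick numerical sanity check of the derivatives above (my own addition), the closed forms with respect to the predicted probability can be compared against central finite differences:
import numpy as np

def loss(y, p):
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=5).astype(float)
p = rng.uniform(0.05, 0.95, size=5)
eps = 1e-5

g_formula = (p - y) / (p * (1 - p))
g_numeric = (loss(y, p + eps) - loss(y, p - eps)) / (2 * eps)

h_formula = y / p**2 + (1 - y) / (1 - p)**2
h_numeric = (loss(y, p + eps) - 2 * loss(y, p) + loss(y, p - eps)) / eps**2

print(np.max(np.abs(g_formula - g_numeric)))   # tiny
print(np.max(np.abs(h_formula - h_numeric)))   # tiny, up to finite-difference noise
Note that h_formula here, y/ŷ² + (1−y)/(1−ŷ)², is the same quantity as the last line of the h_i derivation.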
50,358 | How to compute $g_i$ and $h_i$, i.e. the first and second derivative of the loss function in XGBoost? | The discrepancy is due to the interpretation of $y_i^{t-1}$. In your derivation, you're assuming it is the probability $h_i$, whereas the code author has defined it as the log odds $logit(h_i) = log(\frac{h_i}{1-h_i})$. Re-express the loss as a function of log odds instead of probability (define $O_i = logit(h_i)$):
$$
L_i = -y_i O_i + log(1 + exp(O_i))
$$
And find the derivative with respect to the log odds:
$$
g_i = \frac{d L_i}{d O_i} = h_i - y_i
$$
(Side note: As stated by @Sycorax, you're overloading the term $h_i$ because the xgboost paper authors define it as the 2nd order gradient statistic) | How to compute $g_i$ and $h_i$, i.e. the first and second derivative of the loss function in XGBoost | The discrepancy is due to the interpretation of $y_i^{t-1}$. In your derivation, you're assuming it is the probability $h_i$, whereas the code author has defined it as the log odds $logit(h_i) = log(\ | How to compute $g_i$ and $h_i$, i.e. the first and second derivative of the loss function in XGBoost?
The discrepancy is due to the interpretation of $y_i^{t-1}$. In your derivation, you're assuming it is the probability $h_i$, whereas the code author has defined it as the log odds $logit(h_i) = log(\frac{h_i}{1-h_i})$. Re-express the loss as a function of log odds instead of probability (define $O_i = logit(h_i)$):
$$
L_i = -y_i O_i + log(1 + exp(O_i))
$$
And find the derivative with respect to the log odds:
$$
g_i = \frac{d L_i}{d O_i} = h_i - y_i
$$
(Side note: As stated by @Sycorax, you're overloading the term $h_i$ because the xgboost paper authors define it as the 2nd order gradient statistic) | How to compute $g_i$ and $h_i$, i.e. the first and second derivative of the loss function in XGBoost
The discrepancy is due to the interpretation of $y_i^{t-1}$. In your derivation, you're assuming it is the probability $h_i$, whereas the code author has defined it as the log odds $logit(h_i) = log(\ |
50,359 | How to compute $g_i$ and $h_i$, i.e. the first and second derivative of the loss function in XGBoost? | As mentioned by @StayLearning, in slide 4, the author defines the logistic loss $L = \sum_{i=1}^n l(y_i,\hat y_i)$ where
$$ l(y_i,\hat y_i) = y_i\log(1+\exp(-\hat y_i)) + (1-y_i)\log(1+\exp(\hat y_i)) $$
then grad =
\begin{align}
\frac{\partial l}{\partial \hat y_i} &=
- y_i\times\frac{1}{1+\exp(\hat y_i)} + (1-y_i)\times\frac{\exp(\hat y_i)}{1+\exp(\hat y_i)}\\ &= \frac{\exp(\hat y_i)}{1+\exp(\hat y_i)} - y_i
\end{align}
and hess =
$$
\frac{\partial^2 l}{(\partial \hat y_i)^2} = \frac{\exp(\hat y_i)}{1+\exp(\hat y_i)} \times \frac{1}{1+\exp(\hat y_i)},
$$
where preds = 1.0 / (1.0 + np.exp(-yhat_i)) and label = y_i.
See more here. | How to compute $g_i$ and $h_i$, i.e. the first and second derivative of the loss function in XGBoost | As mentioned by @StayLearning, in slide 4, the author defines the logistic loss $L = \sum_{i=1}^n l(y_i,\hat y_i)$ where
$$ l(y_i,\hat y_i) = y_i\log(1+\exp(-\hat y_i)) + (1-y_i)\log(1+\exp(\hat y_i)) | How to compute $g_i$ and $h_i$, i.e. the first and second derivative of the loss function in XGBoost?
As mentioned by @StayLearning, in slide 4, the author defines the logistic loss $L = \sum_{i=1}^n l(y_i,\hat y_i)$ where
$$ l(y_i,\hat y_i) = y_i\log(1+\exp(-\hat y_i)) + (1-y_i)\log(1+\exp(\hat y_i)) $$
then grad =
\begin{align}
\frac{\partial l}{\partial \hat y_i} &=
- y_i\times\frac{1}{1+\exp(\hat y_i)} + (1-y_i)\times\frac{\exp(\hat y_i)}{1+\exp(\hat y_i)}\\ &= \frac{\exp(\hat y_i)}{1+\exp(\hat y_i)} - y_i
\end{align}
and hess =
$$
\frac{\partial^2 l}{(\partial \hat y_i)^2} = \frac{\exp(\hat y_i)}{1+\exp(\hat y_i)} \times \frac{1}{1+\exp(\hat y_i)},
$$
where preds = 1.0 / (1.0 + np.exp(-yhat_i)) and label = y_i.
See more here. | How to compute $g_i$ and $h_i$, i.e. the first and second derivative of the loss function in XGBoost
As mentioned by @StayLearning, in slide 4, the author defines the logistic loss $L = \sum_{i=1}^n l(y_i,\hat y_i)$ where
$$ l(y_i,\hat y_i) = y_i\log(1+\exp(-\hat y_i)) + (1-y_i)\log(1+\exp(\hat y_i)) |
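A small numeric illustration (my addition) of the grad and hess formulas above, written in terms of the raw score yhat (the log odds), which is the scale on which XGBoost applies them; a function with this (predictions, labels) -> (grad, hess) shape is also what a custom binary-logistic objective would return.
import numpy as np

def grad_hess(yhat, y):
    p = 1.0 / (1.0 + np.exp(-yhat))   # "preds" in the notation above
    grad = p - y                      # d l / d yhat
    hess = p * (1.0 - p)              # d^2 l / d yhat^2
    return grad, hess

yhat = np.array([-2.0, 0.0, 1.5])
y = np.array([0.0, 1.0, 1.0])
print(grad_hess(yhat, y))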
50,360 | sampling from an unnormalised distribution | The question is too general, as @whuber points out, to be answered. For instance, you could think of $\{x_1,x_2,...\}$ to be an enumeration of the rational numbers (this can be done, of course), and $\{\omega_1,\omega_2,...\}$ to be an infinite sequence of weights with $\sum\omega_j=1$ and $\omega_j>0$ for all $j$ (perhaps computationally difficult to calculate). This is, clearly, a terribly difficult scenario as the rationals are dense in ${\mathbb R}$.
Perhaps focusing on the finite scenario may help to narrow down your question, or on ordered sequences, or so ... | sampling from an unnormalised distribution | The question is too general, as @whuber points out, to be answered. For instance, you could think of $\{x_1,x_2,...\}$ to be an enumeration of the rational numbers (this can be done, of course), and $ | sampling from an unnormalised distribution
The question is too general, as @whuber points out, to be answered. For instance, you could think of $\{x_1,x_2,...\}$ to be an enumeration of the rational numbers (this can be done, of course), and $\{\omega_1,\omega_2,...\}$ to be an infinite sequence of weights with $\sum\omega_j=1$ and $\omega_j>0$ for all $j$ (perhaps computationally difficult to calculate). This is, clearly, a terribly difficult scenario as the rationals are dense in ${\mathbb R}$.
Perhaps focusing on the finite scenario may help to narrow down your question, or on ordered sequences, or so ... | sampling from an unnormalised distribution
The question is too general, as @whuber points out, to be answered. For instance, you could think of $\{x_1,x_2,...\}$ to be an enumeration of the rational numbers (this can be done, of course), and $ |
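Taking up the suggestion to focus on the finite scenario, here is a minimal Python sketch (my own illustration; the support points and weights are made up): with finitely many support points and unnormalised weights, normalising and sampling from the cumulative sums is all that is needed.
import numpy as np

rng = np.random.default_rng(0)
x = np.array([0.1, 0.5, 2.0, 7.3])   # support points
w = np.array([0.2, 3.0, 1.5, 0.3])   # unnormalised weights

p = w / w.sum()                      # normalise
samples = rng.choice(x, size=10_000, p=p)
# equivalent inverse-CDF form: x[np.searchsorted(np.cumsum(p), rng.random(10_000))]
print(np.unique(samples, return_counts=True))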
50,361 | Can I use tanh activation function in the output layer for binary classification? | The line dotted = Dot(axes=1,normalize=True)([x1, x2]) computes the cosine of the angle $\theta$ between x1 and x2. If it's always true that $\cos(\theta)>0$, that implies $0 < \tanh(\cos(\theta)) < 1$. Under these conditions, this resolves the riddle of how you're getting proper probabilities using $\tanh$. But remember that you're applying a linear transformation, rather than $\tanh(\cos(\theta))$ directly, so you further require that even after applying the linear transformation, the bounds are still respected.
As for why performance for $\tanh$ is better than $\text{sigmoid}$ in this case, it could be the usual reason that NN researcher suggest: $\tanh$ has steeper gradients, so backprop is more effective. | Can I use tanh activation function in the output layer for binary classification? | The line dotted = Dot(axes=1,normalize=True)([x1, x2]) computes the cosine of the angle $\theta$ between x1 and x2. If it's always true that $\cos(\theta)>0$, that implies $0 < \tanh(\cos(\theta)) < 1 | Can I use tanh activation function in the output layer for binary classification?
The line dotted = Dot(axes=1,normalize=True)([x1, x2]) computes the cosine of the angle $\theta$ between x1 and x2. If it's always true that $\cos(\theta)>0$, that implies $0 < \tanh(\cos(\theta)) < 1$. Under these conditions, this resolves the riddle of how you're getting proper probabilities using $\tanh$. But remember that you're applying a linear transformation, rather than $\tanh(\cos(\theta))$ directly, so you further require that even after applying the linear transformation, the bounds are still respected.
As for why performance for $\tanh$ is better than $\text{sigmoid}$ in this case, it could be the usual reason that NN researcher suggest: $\tanh$ has steeper gradients, so backprop is more effective. | Can I use tanh activation function in the output layer for binary classification?
The line dotted = Dot(axes=1,normalize=True)([x1, x2]) computes the cosine of the angle $\theta$ between x1 and x2. If it's always true that $\cos(\theta)>0$, that implies $0 < \tanh(\cos(\theta)) < 1 |
50,362 | Why KL-divergence is not used as a measure to compare clusterings? | KL divergence assumes that you know which cluster is which label. But what if the number of clusters and classes is not the same? A good clustering may need to split a class into two parts, if the data has such a structure. Plus, KL is asymmetric.
NMI is closely related, but as it compares every cluster to every label, you don't have the problem of mapping clusters to classes. | Why KL-divergence is not used as a measure to compare clusterings? | KL divergence assumes that you know which cluster is which label. But what if the number of clusters and classes is not the same? A good clustering may need to split a class into two parts, if the dat | Why KL-divergence is not used as a measure to compare clusterings?
KL divergence assumes that you know which cluster is which label. But what if the number of clusters and classes is not the same? A good clustering may need to split a class into two parts, if the data has such a structure. Plus, KL is asymmetric.
NMI is closely related, but as it compares every cluster to every label, you don't have the problem of mapping clusters to classes. | Why KL-divergence is not used as a measure to compare clusterings?
KL divergence assumes that you know which cluster is which label. But what if the number of clusters and classes is not the same? A good clustering may need to split a class into two parts, if the dat |
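A tiny Python sketch (my addition, using scikit-learn) shows why NMI sidesteps the mapping problem: it is unchanged by relabelling the clusters and remains well defined when the number of clusters differs from the number of classes; the label vectors are made up.
from sklearn.metrics import normalized_mutual_info_score

labels = [0, 0, 0, 1, 1, 1, 2, 2]
clusters = [1, 1, 1, 0, 0, 0, 2, 2]          # same partition, different cluster names
clusters_split = [0, 0, 3, 1, 1, 1, 2, 2]    # one class split into two clusters

print(normalized_mutual_info_score(labels, clusters))        # 1.0
print(normalized_mutual_info_score(labels, clusters_split))  # lower, but well defined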
50,363 | Why KL-divergence is not used as a measure to compare clusterings? | The OP has phrased their question in terms of 'popularity.' This may not be the right way to think about the use of KL divergence wrt clustering. In point of fact, KL metrics are used in information-theoretic and complexity based cluster algorithms but evaluating the 'popularity' of such routines would be difficult.
Permutation distribution clustering is one such routine. PDC is described in several papers. Here is a link to the PDC R module which contains a description of the use of KL divergence ... https://cran.r-project.org/web/packages/pdc/pdc.pdf
Then there's Eamonn Keogh's SAX and iSAX routines which are similar to PDC but may well be more 'popular' ... http://www.cs.ucr.edu/~eamonn/SAX.htm | Why KL-divergence is not used as a measure to compare clusterings? | The OP has phrased their question in terms of 'popularity.' This may not be the right way to think about the use of KL divergence wrt clustering. In point of fact, KL metrics are used in information-t | Why KL-divergence is not used as a measure to compare clusterings?
The OP has phrased their question in terms of 'popularity.' This may not be the right way to think about the use of KL divergence wrt clustering. In point of fact, KL metrics are used in information-theoretic and complexity based cluster algorithms but evaluating the 'popularity' of such routines would be difficult.
Permutation distribution clustering is one such routine. PDC is described in several papers. Here is a link to the PDC R module which contains a description of the use of KL divergence ... https://cran.r-project.org/web/packages/pdc/pdc.pdf
Then there's Eamonn Keogh's SAX and iSAX routines which are similar to PDC but may well be more 'popular' ... http://www.cs.ucr.edu/~eamonn/SAX.htm | Why KL-divergence is not used as a measure to compare clusterings?
The OP has phrased their question in terms of 'popularity.' This may not be the right way to think about the use of KL divergence wrt clustering. In point of fact, KL metrics are used in information-t |
50,364 | Conditional distribution of $Z = (X-Y) / 3$, given $Y = y$ | Perhaps a solution based on understanding what geometric distributions mean would be of more interest than a purely algebraic one.
Preliminaries: notation; Geometric distributions
Recall that a Geometric distribution with parameter $\theta$ describes the chances of observing a sequence of $x\in\{0,1,2,\ldots\}$ failures before the first success in a series of independent Bernoulli$(\theta)$ trials, whose values I will write $U_1,U_2, \ldots, U_n,\ldots,$ encoding (as usual) $U_i=1$ to represent success. Writing $p_\theta(x)$ for those quantities, independence implies
$$p_\theta(x+1) = p_\theta(x)\Pr(U_{n+1}=0) = p_\theta(x)(1-\theta).$$
Conversely, this relation completely determines the distribution from the facts that (1) all probabilities must sum to unity and (2) the nonzero probabilities are those for $x\in\{0,1,2,\ldots\}$.
Solution
Let's interpret the $Y$ and $Z$ of the problem. It is an obvious number-theoretic fact that any possible value $x$ of $X$ can be written in the form $$x=3z+y$$ where $y\in\{0,1,2\}$ and $z\in\{0,1,2,\ldots\}.$ When we condition on $Y=y$, we're saying there initially are $y$ failures and then there are $z$ groups of three failures each before success is observed. The independence of the Bernoulli trials in each group of three implies any sequence of three failures has a chance
$$\rho = (1-\theta)^3.$$
Consequently, the independence of each (nonoverlapping) group of three failures implies
$$\Pr(Z=z+1) = \Pr(Z=z) (1-\theta)^3= \Pr(Z=z) \rho$$
for any $z=0,1,2,\ldots.$ Thus, conditional on $Y=y$, $Z$ has a Geometric distribution with parameter $1-\rho = 1-(1-\theta)^3$ (each block of three failures plays the role of a single failure with chance $\rho$).
Among the salient implications of this observation--ones that immediately solve the problem--are
The distribution of $Z$ is independent of $Y$.
The distribution of $Z$ is Geometric with parameter $1-\rho=1-(1-\theta)^3.$
You can write the probabilities down immediately using the usual formulas for the geometric distribution with parameter $1-\rho$: $\Pr(Z=z\mid Y=y) = (1-\rho)\rho^z$ for $z=0,1,2,\ldots.$
For those who prefer pure algebra, an amusing (and perhaps surprising) solution method is provided by the technique of decimation described at https://stats.stackexchange.com/a/35138/919. This directly gives the distributions of $(Y,Z)$, from which the conditional distribution of $Z$ is found by dividing by the chance that $Y=y$. | Conditional distribution of $Z = (X-Y) / 3$, given $Y = y$ | Perhaps a solution based on understanding what geometric distributions mean would be of more interest than a purely algebraic one.
Preliminaries: notation; Geometric distributions
Recall that a Geomet | Conditional distribution of $Z = (X-Y) / 3$, given $Y = y$
Perhaps a solution based on understanding what geometric distributions mean would be of more interest than a purely algebraic one.
Preliminaries: notation; Geometric distributions
Recall that a Geometric distribution with parameter $\theta$ describes the chances of observing a sequence of $x\in\{0,1,2,\ldots\}$ failures before the first success in a series of independent Bernoulli$(\theta)$ trials, whose values I will write $U_1,U_2, \ldots, U_n,\ldots,$ encoding (as usual) $U_i=1$ to represent success. Writing $p_\theta(x)$ for those quantities, independence implies
$$p_\theta(x+1) = p_\theta(x)\Pr(U_{n+1}=0) = p_\theta(x)(1-\theta).$$
Conversely, this relation completely determines the distribution from the facts that (1) all probabilities must sum to unity and (2) the nonzero probabilities are those for $x\in\{0,1,2,\ldots\}$.
Solution
Let's interpret the $Y$ and $Z$ of the problem. It is an obvious number-theoretic fact that any possible value $x$ of $X$ can be written in the form $$x=3z+y$$ where $y\in\{0,1,2\}$ and $z\in\{0,1,2,\ldots\}.$ When we condition on $Y=y$, we're saying there initially are $y$ failures and then there are $z$ groups of three failures each before success is observed. The independence of the Bernoulli trials in each group of three implies any sequence of three failures has a chance
$$\rho = (1-\theta)^3.$$
Consequently, the independence of each (nonoverlapping) group of three failures implies
$$\Pr(Z=z+1) = \Pr(Z=z) (1-\theta)^3= \Pr(Z=z) \rho$$
for any $z=0,1,2,\ldots.$ Thus, conditional on $Y=y$, $Z$ has a Geometric distribution with parameter $1-\rho = 1-(1-\theta)^3$ (each block of three failures plays the role of a single failure with chance $\rho$).
Among the salient implications of this observation--ones that immediately solve the problem--are
The distribution of $Z$ is independent of $Y$.
The distribution of $Z$ is Geometric with parameter $1-\rho=1-(1-\theta)^3.$
You can write the probabilities down immediately using the usual formulas for the geometric distribution with parameter $1-\rho$: $\Pr(Z=z\mid Y=y) = (1-\rho)\rho^z$ for $z=0,1,2,\ldots.$
For those who prefer pure algebra, an amusing (and perhaps surprising) solution method is provided by the technique of decimation described at https://stats.stackexchange.com/a/35138/919. This directly gives the distributions of $(Y,Z)$, from which the conditional distribution of $Z$ is found by dividing by the chance that $Y=y$. | Conditional distribution of $Z = (X-Y) / 3$, given $Y = y$
Perhaps a solution based on understanding what geometric distributions mean would be of more interest than a purely algebraic one.
Preliminaries: notation; Geometric distributions
Recall that a Geomet |
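A quick simulation check of the result above (my own addition): with X counting failures before the first success at success probability theta, the conditional distribution of Z given Y=y should be Geometric with parameter 1-(1-theta)^3, whatever the value of y.
import numpy as np

rng = np.random.default_rng(0)
theta = 0.3
x = rng.geometric(theta, size=200_000) - 1   # numpy counts trials, so subtract 1 to count failures
y, z = x % 3, x // 3

rho = (1 - theta) ** 3
for yy in range(3):
    zz = z[y == yy]
    # empirical P(Z = 0 | Y = yy) should be close to 1 - rho for every yy
    print(yy, round(np.mean(zz == 0), 4), round(1 - rho, 4))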
50,365 | When is a likelihood a likelihood? | In a way, you are correct: all our models are wrong, hence even "exact" likelihoods are but convenient pseudolikelihoods to the likelihood function for the true underlying data process (assuming it could even be parameterized).
However, to understand likelihood, you need to move away from Dr. Box's adage of 'all models are wrong..." and live in a world where we pretend our model is correct. This means moving from applied to mathematical statistics.
In this more constrained context, the explanation of what is a likelihood is given by the definition of quasi and pseudo likelihood themselves:
A function $L$ is a likelihood iff it is developed using the true underlying distribution of the data.
Pseudolikelihood breaks with this definition by approximating $L$ using a different, but asymptotically correct, probability model. Quasilikelihood functions $Q$ represent an even further break from the definition of likelihood because they cannot be generated by any valid probability distribution. For example, if your data are iid, then:
$$\neg \exists P \in \mathcal{P}: Q(\theta;x) = \prod_{i=1}^n P(x_i;\theta)$$ | When is a likelihood a likelihood? | In a way, you are correct: all our models are wrong, hence even "exact" likelihoods are but convenient pseudolikelihoods to the likelihood function for the true underlying data process (assuming it co
In a way, you are correct: all our models are wrong, hence even "exact" likelihoods are but convenient pseudolikelihoods to the likelihood function for the true underlying data process (assuming it could even be parameterized).
However, to understand likelihood, you need to move away from Dr. Box's adage of 'all models are wrong..." and live in a world where we pretend our model is correct. This means moving from applied to mathematical statistics.
In this more constrained context, the explanation of what is a likelihood is given by the definition of quasi and pseudo likelihood themselves:
A function $L$ is a likelihood iff it is developed using the true underlying distribution of the data.
Pseudolikelihood breaks with this definition by approximating $L$ using a different, but asymptotically correct, probability model. Quasilikelihood functions $Q$ represent an even further break from the definition of likelihood because they cannot be generated by any valid probability distribution. For example, if your data are iid, then:
$$\neg \exists P \in \mathcal{P}: Q(\theta;x) = \prod_{i=1}^n P(x_i;\theta)$$
In a way, you are correct: all our models are wrong, hence even "exact" likelihoods are but convenient pseudolikelihoods to the likelihood function for the true underlying data process (assuming it co |
50,366 | Regression: zeros in heavy-tailed independent variable from quantization | There are no distributional assumptions made for variables you condition on such as predictors. Having zero frequency of a categorical variable cell for a city should not be a problem. If you believe there is a slope discontinuity at zero for all cities, you could model that variable (let's say it's coded as a fraction) using at least two variables: an indicator variable to denote non-zero and the actual value to allow for a post-zero linear effect. Nonlinear effects can also be added. | Regression: zeros in heavy-tailed independent variable from quantization | There are no distributional assumptions made for variables you condition on such as predictors. Having zero frequency of a categorical variable cell for a city should not be a problem. If you believ | Regression: zeros in heavy-tailed independent variable from quantization
There are no distributional assumptions made for variables you condition on such as predictors. Having zero frequency of a categorical variable cell for a city should not be a problem. If you believe there is a slope discontinuity at zero for all cities, you could model that variable (let's say it's coded as a fraction) using at least two variables: an indicator variable to denote non-zero and the actual value to allow for a post-zero linear effect. Nonlinear effects can also be added. | Regression: zeros in heavy-tailed independent variable from quantization
There are no distributional assumptions made for variables you condition on such as predictors. Having zero frequency of a categorical variable cell for a city should not be a problem. If you believ |
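Here is a minimal Python sketch of that coding (my own illustration; the simulated fractions, coefficients, and use of statsmodels OLS are assumptions): the design carries an indicator for "non-zero" plus the value itself, so there can be a jump at zero and a separate linear effect beyond it.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
frac = np.where(rng.random(500) < 0.4, 0.0, rng.beta(2, 5, 500))   # many exact zeros
y = 1.0 + 0.8 * (frac > 0) + 2.0 * frac + rng.normal(0, 0.5, 500)

X = sm.add_constant(np.column_stack([(frac > 0).astype(float), frac]))
print(sm.OLS(y, X).fit().params)   # intercept, jump at zero, post-zero slope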
50,367 | Regression: zeros in heavy-tailed independent variable from quantization | You are on the right track. You could do a log constant transformation, where you add a constant to each observation and then log transform it. Determining the constant should be defensible, but one suggestion is given by Rob Hyndman on his blog (https://robjhyndman.com/hyndsight/transformations/) as half of the smallest non-zero value. Make sure to account for this constant when interpreting the coefficients. | Regression: zeros in heavy-tailed independent variable from quantization | You are on the right track. You could do a log constant transformation, where you add a constant to each observation and then log transform it. Determining the constant should be defensible, but one s | Regression: zeros in heavy-tailed independent variable from quantization
You are on the right track. You could do a log constant transformation, where you add a constant to each observation and then log transform it. Determining the constant should be defensible, but one suggestion is given by Rob Hyndman on his blog (https://robjhyndman.com/hyndsight/transformations/) as half of the smallest non-zero value. Make sure to account for this constant when interpreting the coefficients.
You are on the right track. You could do a log constant transformation, where you add a constant to each observation and then log transform it. Determining the constant should be defensible, but one s |
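A one-line Python version of the transformation described above (my own illustration; the values are made up): add half of the smallest non-zero value before taking logs, and keep the constant around for interpretation.
import numpy as np

x = np.array([0.0, 0.0, 0.4, 1.2, 0.1, 5.0, 0.0])
c = x[x > 0].min() / 2          # half the smallest non-zero value
x_log = np.log(x + c)
print(c, np.round(x_log, 3))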
50,368 | Conditional Probability vs Conditional Probability Distribution | A conditional probability "distribution" is essentially just a bunch of conditional probabilities, sufficient to fully characterise the conditional behaviour of one random variable given another event or random variable. A probability "distribution" can be characterised in various different ways (e.g., by a probability measure, mass/density function, CDF, generating function, etc.) and while there is no single mathematical object that is the distribution, we may refer to them as such as a shorthand.
In general, there are two main classes of mathematical objects which would characterise a "conditional probability distribution" and which we might refer to as such:
Conditional distribution at a given conditioning point: This is characterised by any function that fully characterises the probabilistic behaviour of $X$ conditional on a specific event $Y=y$.
Conditional distribution for any conditioning point: This is characterised by any function that fully characterises the probabilistic behaviour of $X$ conditional on any value of another random variable $Y$.
Let me illustrate this by example. Suppose we have two random variables $X$ and $Y$ and suppose we define the conditional cumulative distribution function (CDF):
$$F(x|y) \equiv \mathbb{P}(X \leqslant x | Y=y) \quad \quad \quad \text{for all } x \in \mathscr{X} \text{ and } y \in \mathscr{Y}.$$
The function $F( \cdot |y)$ for a fixed value of $y$ fully characterises the distribution of $X$ given the conditioning point $Y=y$, so we would consider this to be a "conditional distribution" in the shorthand sense previously described. The function $F( \cdot | \cdot)$ fully characterises the distribution of $X$ given any conditioning point for $Y$, so we would also consider this to be a "conditional distribution", again, in the shorthand sense. (The latter object is much more general, and it actually gives a whole bunch of conditional distributions, corresponding to each of the possible values for $Y=y$.) | Conditional Probability vs Conditional Probability Distribution | A conditional probability "distribution" is essentially just a bunch of conditional probabilities, sufficient to fully characterise the conditional behaviour of one random variable given another event | Conditional Probability vs Conditional Probability Distribution
A conditional probability "distribution" is essentially just a bunch of conditional probabilities, sufficient to fully characterise the conditional behaviour of one random variable given another event or random variable. A probability "distribution" can be characterised in various different ways (e.g., by a probability measure, mass/density function, CDF, generating function, etc.) and while there is no single mathematical object that is the distribution, we may refer to them as such as a shorthand.
In general, there are two main classes of mathematical objects which would characterise a "conditional probability distribution" and which we might refer to as such:
Conditional distribution at a given conditioning point: This is characterised by any function that fully characterises the probabilistic behaviour of $X$ conditional on a specific event $Y=y$.
Conditional distribution for any conditioning point: This is characterised by any function that fully characterises the probabilistic behaviour of $X$ conditional on any value of another random variable $Y$.
Let me illustrate this by example. Suppose we have two random variables $X$ and $Y$ and suppose we define the conditional cumulative distribution function (CDF):
$$F(x|y) \equiv \mathbb{P}(X \leqslant x | Y=y) \quad \quad \quad \text{for all } x \in \mathscr{X} \text{ and } y \in \mathscr{Y}.$$
The function $F( \cdot |y)$ for a fixed value of $y$ fully characterises the distribution of $X$ given the conditioning point $Y=y$, so we would consider this to be a "conditional distribution" in the shorthand sense previously described. The function $F( \cdot | \cdot)$ fully characterises the distribution of $X$ given any conditioning point for $Y$, so we would also consider this to be a "conditional distribution", again, in the shorthand sense. (The latter object is much more general, and it actually gives a whole bunch of conditional distributions, corresponding to each of the possible values for $Y=y$.) | Conditional Probability vs Conditional Probability Distribution
A conditional probability "distribution" is essentially just a bunch of conditional probabilities, sufficient to fully characterise the conditional behaviour of one random variable given another event |
50,369 | Conditional Probability vs Conditional Probability Distribution | It seems to me we can use what we already know, provided we have heard of distribution functions and conditional probabilities. Thus, the following remarks offer nothing new, but I hope that in making them the basic simplicity and familiarity of the situation will become apparent.
When you have any real-valued random variable $X$ and an event $\mathcal E$ (defined on the same probability space, of course), then you can extend the definition of a (cumulative) distribution function in the most natural way possible: namely, for any number $x,$ define
$$F_X(x;\mathcal E) = \Pr(X\le x\mid \mathcal E).$$
When $\mathcal E$ has positive probability you can even avoid all technicalities and apply the elementary formula for conditional probability,
$$\Pr(X\le x\mid \mathcal E) = \frac{\Pr(X\le x\,\cap\,\mathcal E)}{\Pr(\mathcal E)}.$$
The numerator, which might look strange to the mathematically sophisticated reader, is the probability of the intersection of two events. The conventional shorthand "$X\le x$" stands for the set of outcomes where $X$ does not exceed $x:$ $\{\omega\in\Omega\mid X(\omega)\le x\}.$
This extends the usual distribution function very nicely in the sense that when $\Omega$ is the universal event (that is, the underlying set of all outcomes in the probability space), then since $(X\le x)\subseteq \Omega$ and $\Pr(\Omega)=1,$
$$F_X(x)= \Pr(X\le x) = \frac{\Pr(X\le x\,\cap\,\Omega)}{1} = \frac{\Pr(X\le x\,\cap\,\Omega)}{\Pr(\Omega)}= F_X(x;\Omega) .$$
Comments
Note that only one random variable $X$ is needed, showing that the concept of conditional distribution does not depend on a joint distribution. As a simple example, the right-truncated Normal distribution studied at Expected value of x in a normal distribution, GIVEN that it is below a certain value is determined by a Normally-distributed random variable $X$ and the event $X\le T$ (for the fixed truncation limit $T$).
Another example, just to make these distinctions very clear, models a population of people where we are interested in their sex and age (at a specified time, because both these properties can change!). By agreeing on a unit of measure of age (seconds, say), and (for simplicity) focusing on those people with a definite sex, we may take the sample space to be
$$\Omega = \{\text{male}, \text{female}\}\times [0,\infty).$$
Elements of $\Omega$ represent people. A sample from $\Omega$ could be represented by rows in a two-column table: one for sex, the other for age. That's what the Cartesian product $\times$ in the definition of $\Omega$ means.
The probabilities of interest will attach to intervals of ages for each sex separately (or combined). Thus, relevant events will be composed out of age intervals of the form $\{\text{male}\}\times (a,b]$ (for lower and upper ages $a$ and $b$ of males) and $\{\text{female}\}\times (a,b]$ (an interval of ages for females).
As a shorthand, "$\{\text{male}\}$" is the event $\{\text{male}\}\times [0,\infty) = \{(\text{male},x)\mid x \ge 0\},$ and similarly for "$\{\text{female}\}.$" By definition, these are both events -- or "subpopulations" if you like.
Let $X$ be the random variable giving the age of a person rounded to the nearest year. Then (for instance) we might be interested in $F_X$ (the distribution of all ages), of $F_X(\ \mid \{\text{male}\})$ (the distribution of male ages), or of $F_X(\ \mid \{\text{female}\}).$
This nice example shows that the conditioning event $\mathcal E$ (the sex) needn't have anything to do with $X$ (the age).
Clearly, this formulation of conditional distributions does not require us to define a random variable to condition on a characteristic like sex in the example. We could have done it that way, and there are some analytical and computational advantages to doing so, but conceptually such a construct would be artificial and superfluous.
When there are multiple random variables $(X,Y)$ (and $Y$ can be vector-valued), nothing new emerges because conditioning on $Y$ means conditioning on the events it defines. | Conditional Probability vs Conditional Probability Distribution | It seems to me we can use what we already know, provided we have heard of distribution functions and conditional probabilities. Thus, the following remarks offer nothing new, but I hope that in makin | Conditional Probability vs Conditional Probability Distribution
It seems to me we can use what we already know, provided we have heard of distribution functions and conditional probabilities. Thus, the following remarks offer nothing new, but I hope that in making them the basic simplicity and familiarity of the situation will become apparent.
When you have any real-valued random variable $X$ and an event $\mathcal E$ (defined on the same probability space, of course), then you can extend the definition of a (cumulative) distribution function in the most natural way possible: namely, for any number $x,$ define
$$F_X(x;\mathcal E) = \Pr(X\le x\mid \mathcal E).$$
When $\mathcal E$ has positive probability you can even avoid all technicalities and apply the elementary formula for conditional probability,
$$\Pr(X\le x\mid \mathcal E) = \frac{\Pr(X\le x\,\cap\,\mathcal E)}{\Pr(\mathcal E)}.$$
The numerator, which might look strange to the mathematically sophisticated reader, is the probability of the intersection of two events. The conventional shorthand "$X\le x$" stands for the set of outcomes where $X$ does not exceed $x:$ $\{\omega\in\Omega\mid X(\omega)\le x\}.$
This extends the usual distribution function very nicely in the sense that when $\Omega$ is the universal event (that is, the underlying set of all outcomes in the probability space), then since $(X\le x)\subseteq \Omega$ and $\Pr(\Omega)=1,$
$$F_X(x)= \Pr(X\le x) = \frac{\Pr(X\le x\,\cap\,\Omega)}{1} = \frac{\Pr(X\le x\,\cap\,\Omega)}{\Pr(\Omega)}= F_X(x;\Omega) .$$
Comments
Note that only one random variable $X$ is needed, showing that the concept of conditional distribution does not depend on a joint distribution. As a simple example, the right-truncated Normal distribution studied at Expected value of x in a normal distribution, GIVEN that it is below a certain value is determined by a Normally-distributed random variable $X$ and the event $X\le T$ (for the fixed truncation limit $T$).
Another example, just to make these distinctions very clear, models a population of people where we are interested in their sex and age (at a specified time, because both these properties can change!). By agreeing on a unit of measure of age (seconds, say), and (for simplicity) focusing on those people with a definite sex, we may take the sample space to be
$$\Omega = \{\text{male}, \text{female}\}\times [0,\infty).$$
Elements of $\Omega$ represent people. A sample from $\Omega$ could be represented by rows in a two-column table: one for sex, the other for age. That's what the Cartesian product $\times$ in the definition of $\Omega$ means.
The probabilities of interest will attach to intervals of ages for each sex separately (or combined). Thus, relevant events will be composed out of age intervals of the form $\{\text{male}\}\times (a,b]$ (for lower and upper ages $a$ and $b$ of males) and $\{\text{female}\}\times (a,b]$ (an interval of ages for females).
As a shorthand, "$\{\text{male}\}$" is the event $\{\text{male}\}\times [0,\infty) = \{(\text{male},x)\mid x \ge 0\},$ and similarly for "$\{\text{female}\}.$" By definition, these are both events -- or "subpopulations" if you like.
Let $X$ be the random variable giving the age of a person rounded to the nearest year. Then (for instance) we might be interested in $F_X$ (the distribution of all ages), of $F_X(\ \mid \{\text{male}\})$ (the distribution of male ages), or of $F_X(\ \mid \{\text{female}\}).$
This nice example shows that the conditioning event $\mathcal E$ (the sex) needn't have anything to do with $X$ (the age).
Clearly, this formulation of conditional distributions does not require us to define a random variable to condition on a characteristic like sex in the example. We could have done it that way, and there are some analytical and computational advantages to doing so, but conceptually such a construct would be artificial and superfluous.
When there are multiple random variables $(X,Y)$ (and $Y$ can be vector-valued), nothing new emerges because conditioning on $Y$ means conditioning on the events it defines. | Conditional Probability vs Conditional Probability Distribution
It seems to me we can use what we already know, provided we have heard of distribution functions and conditional probabilities. Thus, the following remarks offer nothing new, but I hope that in makin |
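A brief R sketch of the age-by-sex example above, on simulated data (the numbers are made up and not from the post): the conditional distribution function given an event is just the empirical CDF within that subpopulation.
set.seed(1)
people <- data.frame(sex = sample(c("male", "female"), 200, replace = TRUE),
                     age = round(runif(200, 0, 90)))
F_age      <- ecdf(people$age)                         # F_X, all ages
F_age_male <- ecdf(people$age[people$sex == "male"])   # F_X( . | male)
c(F_age(40), F_age_male(40))                           # P(age <= 40), overall and among males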
50,370 | Conditional Probability vs Conditional Probability Distribution | Well I think the term "conditional probability" or "conditional probability distribution" can both extend to two or more variables (correct me if I am wrong). For example, let us suppose we have $X,Y,Z$ are i.i.d random variables uniformly distributed on (0,1) for simplicity. Then we are required to find the conditional probability of $P\left(X \ge YZ|Y>\frac{1}{2}\right)$. This should be a valid question about finding conditional probability.
Then,
\begin{align}P\left(X \ge YZ|Y>\frac{1}{2}\right) &= \frac{P\left(X \ge YZ,Y>\frac{1}{2}\right)}{P(Y>\frac{1}{2})} \\&= \frac{\int_0^1\int_\frac{1}{2}^1\int_{yz}^1 1 dxdydz}{\frac{1}{2}}\\&= 2 \int_0^1\int_\frac{1}{2}^1(1-yz)dydz \\&= 2\int_0^1 \left(\frac{1}{2}-\frac{3z}{8}\right)dz\\&=2 \left( \frac{1}{2}-\frac{3}{16}\right)\\&=\frac{5}{8}\end{align}
Similarly, $$P\left(X \ge YZ|Y \le\frac{1}{2}\right)= \frac{\int_0^1\int_0^\frac{1}{2}\int_{yz}^1 1 dxdydz}{\frac{1}{2}} = \frac{7}{8},$$
as you can verify.
In a nutshell, the concept of conditional probability is not only valid on single variable case. | Conditional Probability vs Conditional Probability Distribution | Well I think the term "conditional probability" or "conditional probability distribution" can both extend to two or more variables (correct me if I am wrong). For example, let us suppose we have $X,Y, | Conditional Probability vs Conditional Probability Distribution
Well I think the term "conditional probability" or "conditional probability distribution" can both extend to two or more variables (correct me if I am wrong). For example, let us suppose we have $X,Y,Z$ are i.i.d random variables uniformly distributed on (0,1) for simplicity. Then we are required to find the conditional probability of $P\left(X \ge YZ|Y>\frac{1}{2}\right)$. This should be a valid question about finding conditional probability.
Then,
\begin{align}P\left(X \ge YZ|Y>\frac{1}{2}\right) &= \frac{P\left(X \ge YZ,Y>\frac{1}{2}\right)}{P(Y>\frac{1}{2})} \\&= \frac{\int_0^1\int_\frac{1}{2}^1\int_{yz}^1 1 dxdydz}{\frac{1}{2}}\\&= 2 \int_0^1\int_\frac{1}{2}^1(1-yz)dydz \\&= 2\int_0^1 \left(\frac{1}{2}-\frac{3z}{8}\right)dz\\&=2 \left( \frac{1}{2}-\frac{3}{16}\right)\\&=\frac{5}{8}\end{align}
Similarly, $$P\left(X \ge YZ|Y \le\frac{1}{2}\right)= \frac{\int_0^1\int_0^\frac{1}{2}\int_{yz}^1 1 dxdydz}{\frac{1}{2}} = \frac{7}{8},$$
as you can verify.
In a nutshell, the concept of conditional probability is not only valid on single variable case. | Conditional Probability vs Conditional Probability Distribution
Well I think the term "conditional probability" or "conditional probability distribution" can both extend to two or more variables (correct me if I am wrong). For example, let us suppose we have $X,Y, |
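A quick Monte Carlo check of the two integrals worked out above (simulation sketch, not part of the original answer):
set.seed(42)
n <- 1e6
x <- runif(n); y <- runif(n); z <- runif(n)
mean(x >= y * z & y > 0.5) / mean(y > 0.5)    # should be close to 5/8 = 0.625
mean(x >= y * z & y <= 0.5) / mean(y <= 0.5)  # should be close to 7/8 = 0.875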
50,371 | Fit a Weibull distribution to...right-censored data? | Fitting is implemented in R's classic survival package (this function), or flexsurv, which has more flexibility, but a different parametrization for Weibull (flexsurvreg function here). Both also provide a few other distributions to try out.
Although I am not entirely sure about the method, it looks like both packages use likelihood maximization (from inspection of source functions, e.g. body(flexsurv::flexsurvreg)).
For more detailed R instructions and plotting examples, see this question on SO. The idea there is to take various survival fractions $S(t)$, use predict.survreg as $S^{-1}(t)$ to get the corresponding times, and plot those on your own. In the no-covariate case that you have the call can be just predict(weibull_fit, type="quantile", p=x)[1], with x - some quantile.
Note that these are just the predictions given estimated parameters - if you want to incorporate the uncertainty about the estimates as well, you will need to switch to log scale, add se.fit=T to the predict call and use that to calculate the confidence intervals for the predicted times. Something like:
pr = predict(weibull_fit, type="uquantile", p=0.5, se.fit=T)
lims = c(pr$fit[1] + 1.96*pr$se.fit[1], pr$fit[1] - 1.96*pr$se.fit[1])
lims = exp(lims)
should give 95 % CI for predicted median survival time. | Fit a Weibull distribution to...right-censored data? | Fitting is implemented in R's classic survival package (this function), or flexsurv, which has more flexibility, but a different parametrization for Weibull (flexsurvreg function here). Both also prov | Fit a Weibull distribution to...right-censored data?
Fitting is implemented in R's classic survival package (this function), or flexsurv, which has more flexibility, but a different parametrization for Weibull (flexsurvreg function here). Both also provide a few other distributions to try out.
Although I am not entirely sure about the method, it looks like both packages use likelihood maximization (from inspection of source functions, e.g. body(flexsurv::flexsurvreg)).
For more detailed R instructions and plotting examples, see this question on SO. The idea there is to take various survival fractions $S(t)$, use predict.survreg as $S^{-1}(t)$ to get the corresponding times, and plot those on your own. In the no-covariate case that you have the call can be just predict(weibull_fit, type="quantile", p=x)[1], with x - some quantile.
Note that these are just the predictions given estimated parameters - if you want to incorporate the uncertainty about the estimates as well, you will need to switch to log scale, add se.fit=T to the predict call and use that to calculate the confidence intervals for the predicted times. Something like:
pr = predict(weibull_fit, type="uquantile", p=0.5, se.fit=T)
lims = c(pr$fit[1] + 1.96*pr$se.fit[1], pr$fit[1] - 1.96*pr$se.fit[1])
lims = exp(lims)
should give 95 % CI for predicted median survival time. | Fit a Weibull distribution to...right-censored data?
Fitting is implemented in R's classic survival package (this function), or flexsurv, which has more flexibility, but a different parametrization for Weibull (flexsurvreg function here). Both also prov |
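A minimal end-to-end sketch of the survreg route described above, fitted to simulated right-censored data (the data and object names here are hypothetical):
library(survival)
set.seed(1)
t_true <- rweibull(100, shape = 1.5, scale = 10)   # true event times
c_time <- runif(100, 0, 20)                        # censoring times
time   <- pmin(t_true, c_time)
status <- as.numeric(t_true <= c_time)             # 1 = event observed, 0 = right-censored
weibull_fit <- survreg(Surv(time, status) ~ 1, dist = "weibull")
predict(weibull_fit, type = "quantile", p = 0.5)[1]   # predicted median survival time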
50,372 | How to interpret concordance index in Cox models? | From the documentation on predict.coxph, the choices for type are
" the linear predictor ("lp"), the risk score exp(lp) ("risk"), the
expected number of events given the covariates and follow-up time
("expected"), and the terms of the linear predictor ("terms"). The
survival probability for a subject is equal to exp(-expected)"
These are not all numerically the same things. As far as I know, the concordance index for survival analysis is designed to run only on the predicted risk, which is the default output of the coxph function.
For example, the documentation on concordance.index function in the survcomp package says that the input x must be a predicted risk. | How to interpret concordance index in Cox models? | From the documentation on predict.coxph, the choices for type are
" the linear predictor ("lp"), the risk score exp(lp) ("risk"), the
expected number of events given the covariates and follow-up ti | How to interpret concordance index in Cox models?
From the documentation on predict.coxph, the choices for type are
" the linear predictor ("lp"), the risk score exp(lp) ("risk"), the
expected number of events given the covariates and follow-up time
("expected"), and the terms of the linear predictor ("terms"). The
survival probability for a subject is equal to exp(-expected)"
These are not all numerically the same things. As far as I know, the concordance index for survival analysis is designed to run only on the predicted risk, which is the default output of the coxph function.
For example, the documentation on concordance.index function in the survcomp package says that the input x must be a predicted risk. | How to interpret concordance index in Cox models?
From the documentation on predict.coxph, the choices for type are
" the linear predictor ("lp"), the risk score exp(lp) ("risk"), the
expected number of events given the covariates and follow-up ti |
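A usage sketch (the lung data shipped with the survival package is my choice, not part of the original answer): the concordance reported by coxph is computed from the risk ordering, i.e. from the default linear-predictor/risk type of prediction.
library(survival)
fit <- coxph(Surv(time, status) ~ age + sex, data = lung)
summary(fit)$concordance   # c-index and its standard error
concordance(fit)           # same statistic computed via the concordance() function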
50,373 | Under which assumptions does weak stationarity imply strong stationarity | Hint: consider what happens when you make more assumptions about the specific distribution of the errors. Then you can write down exact conditional densities. After multiplying a few together, you will have the joint density of all the time observations, and strong stationarity deals with this joint distribution.
For your model:
$$
p(y_1, y_2, \ldots , y_n) = \prod_{t=3}^n p(y_t \mid y_{t-1}, y_{t-2} ) p(y_1, y_2)\tag{1}.
$$
If you assumed that the errors were Normally distributed then
$$
p(y_t \mid y_{t-1}, y_{t-2} ) = N(.8 y_{t-1} +.1 y_{t-2}, \sigma^2).
$$
Another hint:
If this Normal distribution does lead to strong stationarity, then the
joint distribution of all the observations $\{y_t\}$ should have the right means, and the right variances and (auto-)covariances. Arrange all of those autocovariances and variances into a matrix $\Gamma$. Then your joint density should be
$$
(2\pi)^{-n/2}(\det\Gamma)^{-1/2}\exp\left[-\frac{1}{2}\mathbf{y}_t'\Gamma^{-1}\mathbf{y}_t \right].
$$ | Under which assumptions does weak stationarity imply strong stationarity | Hint: consider what happens when you make more assumptions about the specific distribution of the errors. Then you can write down exact conditional densities. After multiplying a few together, you wil | Under which assumptions does weak stationarity imply strong stationarity
Hint: consider what happens when you make more assumptions about the specific distribution of the errors. Then you can write down exact conditional densities. After multiplying a few together, you will have the joint density of all the time observations, and strong stationarity deals with this joint distribution.
For your model:
$$
p(y_1, y_2, \ldots , y_n) = \prod_{t=3}^n p(y_t \mid y_{t-1}, y_{t-2} ) p(y_1, y_2)\tag{1}.
$$
If you assumed that the errors were Normally distributed then
$$
p(y_t \mid y_{t-1}, y_{t-2} ) = N(.8 y_{t-1} +.1 y_{t-2}, \sigma^2).
$$
Another hint:
If this Normal distribution does lead to strong stationarity, then the
joint distribution of all the observations $\{y_t\}$ should have the right means, and the right variances and (auto-)covariances. Arrange all of those autocovariances and variances into a matrix $\Gamma$. Then your joint density should be
$$
(2\pi)^{-n/2}(\det\Gamma)^{-1/2}\exp\left[-\frac{1}{2}\mathbf{y}_t'\Gamma^{-1}\mathbf{y}_t \right].
$$ | Under which assumptions does weak stationarity imply strong stationarity
Hint: consider what happens when you make more assumptions about the specific distribution of the errors. Then you can write down exact conditional densities. After multiplying a few together, you wil |
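A small simulation of the AR(2) in the hint (sketch only): with Gaussian errors the process is Gaussian, so the mean and the autocovariances below pin down every finite-dimensional joint distribution.
set.seed(1)
y <- as.numeric(arima.sim(model = list(ar = c(0.8, 0.1)), n = 5000))
mean(y)                              # sample mean, close to the common mean 0
acf(y, lag.max = 5, plot = FALSE)    # autocorrelations; with the variance these give the entries of Gamma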
50,374 | What's the relation between Matrix Factorization (MF) and Latent Dirichlet Allocation (LDA)? | This paper suggests an answer:
Faleiros, Thiago de Paulo, and Alneu de Andrade Lopes. "On the
equivalence between algorithms for non-negative matrix factorization
and latent Dirichlet allocation." European Symposium on Artificial
Neural Networks, Computational Intelligence and Machine Learning,
XXIV. European Neural Network Society-ENNS, 2016. (PDF link) | What's the relation between Matrix Factorization (MF) and Latent Dirichlet Allocation (LDA)? | Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
| What's the relation between Matrix Factorization (MF) and Latent Dirichlet Allocation (LDA)?
This paper suggests an answer:
Faleiros, Thiago de Paulo, and Alneu de Andrade Lopes. "On the
equivalence between algorithms for non-negative matrix factorization
and latent Dirichlet allocation." European Symposium on Artificial
Neural Networks, Computational Intelligence and Machine Learning,
XXIV. European Neural Network Society-ENNS, 2016. (PDF link) | What's the relation between Matrix Factorization (MF) and Latent Dirichlet Allocation (LDA)?
This paper suggests an answer:
|
50,375 | Goodness of fit for logistic regression in r | I suggest to use the Hosmer-Lemeshow goodness of fit test for logistic regression which is implemented in the ResourceSelection library with the hoslem.test function. See: thestatsgeek.com/2014/02/16/ - Marco Sandri
But as @kjetilbhalvorsen points out below, Frank Harrell disagrees:
The Hosmer-Lemeshow test is to some extent obsolete because it
requires arbitrary binning of predicted probabilities and does not
possess excellent power to detect lack of calibration. It also does
not fully penalize for extreme overfitting of the model. Better
methods are available such as
Hosmer, D. W.; Hosmer, T.; le Cessie, S.
& Lemeshow, S. A comparison of goodness-of-fit tests for the logistic
regression model. Statistics in Medicine, 1997, 16, 965-980
Their new measure is implemented in the R rms package. | Goodness of fit for logistic regression in r | I suggest to use the Hosmer-Lemeshow goodness of fit test for logistic regression which is implemented in the ResourceSelection library with the hoslem.test function. See: thestatsgeek.com/2014/02/16/ | Goodness of fit for logistic regression in r
I suggest to use the Hosmer-Lemeshow goodness of fit test for logistic regression which is implemented in the ResourceSelection library with the hoslem.test function. See: thestatsgeek.com/2014/02/16/ - Marco Sandri
But as @kjetilbhalvorsen points out below, Frank Harrell disagrees:
The Hosmer-Lemeshow test is to some extent obsolete because it
requires arbitrary binning of predicted probabilities and does not
possess excellent power to detect lack of calibration. It also does
not fully penalize for extreme overfitting of the model. Better
methods are available such as
Hosmer, D. W.; Hosmer, T.; le Cessie, S.
& Lemeshow, S. A comparison of goodness-of-fit tests for the logistic
regression model. Statistics in Medicine, 1997, 16, 965-980
Their new measure is implemented in the R rms package. | Goodness of fit for logistic regression in r
I suggest to use the Hosmer-Lemeshow goodness of fit test for logistic regression which is implemented in the ResourceSelection library with the hoslem.test function. See: thestatsgeek.com/2014/02/16/ |
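A usage sketch of both suggestions above, on a small logistic model fitted to mtcars (the model itself is only for illustration and is not from the original post):
library(ResourceSelection)
fit <- glm(am ~ mpg, data = mtcars, family = binomial)
hoslem.test(fit$y, fitted(fit), g = 10)   # Hosmer-Lemeshow test

library(rms)                              # Harrell-style alternative: look at calibration instead
fit2 <- lrm(am ~ mpg, data = mtcars, x = TRUE, y = TRUE)
plot(calibrate(fit2, B = 200))            # bootstrap overfitting-corrected calibration curve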
50,376 | Differences between p-value, level of significance and size of a test | level and size
Wikipedia has the following:
A test is said to have significance level $\alpha$ if its size is less than or equal to $\alpha$.
I agree with this. It also says:
the size of a test is [...] the probability of making a Type I error.
this is not quite always true. (It corrects it lower down in the article.)
In the case of a composite null hypothesis, the size is the supremum of the rejection rates over all the possibilities under the null.
Loosely, it's the largest rejection rate under the null.
Note that in the general case (ponder a potentially composite null and possibly discrete test statistic) we may not be able to actually attain a rejection rate of some pre-specified $\alpha$
e.g. consider a two-tailed sign test with n=18 -- you can get a rejection rate under the null of 3.1% or 9.6% but you can't actually get 5% unless you resort to devices like randomized tests, or
consider that the actual type I error rate may depend on where in the null we happen to be situated. For example, with a one-sided t-test where $H_0: \mu\leq 0$, if the true $\mu=-0.5$ the type $I$ error rate will generally be lower than it would be if $\mu=-0.03$.
So now consider I want a significance level of 5% with one tailed sign test with $n=18$ under the composite null $H_0: \tilde{\mu}\leq 0$ vs $H_1:\tilde{\mu}> 0$. Now if $\tilde{\mu}$ is actually $0$ then my type I error rate is just over 4.8%. On the other hand if $\tilde{\mu}$ is $<0$ then my type I error rate will be something smaller than 4.8%; lets say we are in a particular situation under the null (depending on the specifics of the distribution) and our type I error rate there is 3.2%. We'd have a test with a 5% significance level, a size of 4.81% and an actual type I error rate of 3.2% (though in practice we couldn't figure this last one out because we wouldn't know either the population shape or its median).
Note in particular that both size and level don't relate to the sample -- if we draw another random sample of the same size (and other relevant characteristics), we should not expect size or level to change.
p value
The p-value is the probability of obtaining a test statistic at least as extreme as the one we observed from the sample, if the null hypothesis were true.
So by contrast with the other two things, the p-value is a function of the sample. New sample, new p-value.
It may be less than or greater than the type I error rate, the size or the significance level. | Differences between p-value, level of significance and size of a test | level and size
Wikipedia has the following:
A test is said to have significance level $\alpha$ if its size is less than or equal to $\alpha$.
I agree with this. It also says:
the size of a test is | Differences between p-value, level of significance and size of a test
level and size
Wikipedia has the following:
A test is said to have significance level $\alpha$ if its size is less than or equal to $\alpha$.
I agree with this. It also says:
the size of a test is [...] the probability of making a Type I error.
this is not quite always true. (It corrects it lower down in the article.)
In the case of a composite null hypothesis, the size is the supremum of the rejection rates over all the possibilities under the null.
Loosely, it's the largest rejection rate under the null.
Note that in the general case (ponder a potentially composite null and possibly discrete test statistic) we may not be able to actually attain a rejection rate of some pre-specified $\alpha$
e.g. consider a two-tailed sign test with n=18 -- you can get a rejection rate under the null of 3.1% or 9.6% but you can't actually get 5% unless you resort to devices like randomized tests, or
consider that the actual type I error rate may depend on where in the null we happen to be situated. For example, with a one-sided t-test where $H_0: \mu\leq 0$, if the true $\mu=-0.5$ the type $I$ error rate will generally be lower than it would be if $\mu=-0.03$.
So now consider I want a significance level of 5% with one tailed sign test with $n=18$ under the composite null $H_0: \tilde{\mu}\leq 0$ vs $H_1:\tilde{\mu}> 0$. Now if $\tilde{\mu}$ is actually $0$ then my type I error rate is just over 4.8%. On the other hand if $\tilde{\mu}$ is $<0$ then my type I error rate will be something smaller than 4.8%; lets say we are in a particular situation under the null (depending on the specifics of the distribution) and our type I error rate there is 3.2%. We'd have a test with a 5% significance level, a size of 4.81% and an actual type I error rate of 3.2% (though in practice we couldn't figure this last one out because we wouldn't know either the population shape or its median).
Note in particular that both size and level don't relate to the sample -- if we draw another random sample of the same size (and other relevant characteristics), we should not expect size or level to change.
p value
The p-value is the probability of obtaining a test statistic at least as extreme as the one we observed from the sample, if the null hypothesis were true.
So by contrast with the other two things, the p-value is a function of the sample. New sample, new p-value.
It may be less than or greater than the type I error rate, the size or the significance level. | Differences between p-value, level of significance and size of a test
level and size
Wikipedia has the following:
A test is said to have significance level $\alpha$ if its size is less than or equal to $\alpha$.
I agree with this. It also says:
the size of a test is |
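The sign-test numbers quoted in the example above can be reproduced directly (sketch):
n <- 18
2 * pbinom(4, n, 0.5)   # ~0.031: two-tailed rejection rate for the <=4 / >=14 rule
2 * pbinom(5, n, 0.5)   # ~0.096: next attainable two-tailed rejection rate
pbinom(5, n, 0.5)       # ~0.048: the one-tailed size used in the example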
50,377 | Gaussian Mixture for detecting outliers | There is a smart way to do this that is implemented by JMP software. In the GMM fitting, there is an option for "outlier cluster" that can be checked. The description of this is below:
The outlier cluster option assumes a uniform distribution and is less
sensitive to outliers than the standard Normal Mixtures method. This
fits a cluster to catch outliers that do not fall into any of the
normal clusters. The distribution of observations that fall in the outlier
cluster is assumed to be uniform over the hypercube that encompasses
the observations.
So what does this mean? Well, it's just an additional latent factor (distribution) with a prior (same as the other mixture components) that is updated during the expectation step. Naturally the data points that don't fall near a legitimate Gaussian cluster end up with a higher probability of being part of the [sparse] uniform distribution.
It works well and is something akin to finding outliers via DBSCAN clustering except with less tuning and investigation up front to set hyperparameters....but frankly it's not really that much more magical than just fitting a GMM without it and taking something like the lowest 0.5% quantile of points or similar (the quantile % then becomes a hyperparameter). The only difference here is that the output of the algorithm chooses them as a result of the fitting. Note however the group membership results will change with the number of latent units (which is a hyperparameter in the case of a GMM), so you either pay Peter or Paul...there's nothing out there that will tell you what an outlier is without making some kind of assumption or setting a hyper-parameter up front. | Gaussian Mixture for detecting outliers | There is a smart way to do this that is implemented by JMP software. In the GMM fitting, there is an option for "outlier cluster" that can be checked. The description of this is below:
The outlier | Gaussian Mixture for detecting outliers
There is a smart way to do this that is implemented by JMP software. In the GMM fitting, there is an option for "outlier cluster" that can be checked. The description of this is below:
The outlier cluster option assumes a uniform distribution and is less
sensitive to outliers than the standard Normal Mixtures method. This
fits a cluster to catch outliers that do not fall into any of the
normal clusters. The distribution of observations that fall in the outlier
cluster is assumed to be uniform over the hypercube that encompasses
the observations.
So what does this mean? Well, it's just an additional latent factor (distribution) with a prior (same as the other mixture components) that is updated during the expectation step. Naturally the data points that don't fall near a legitimate Gaussian cluster end up with a higher probability of being part of the [sparse] uniform distribution.
It works well and is something akin to finding outliers via DBSCAN clustering except with less tuning and investigation up front to set hyperparameters....but frankly it's not really that much more magical than just fitting a GMM without it and taking something like the lowest 0.5% quantile of points or similar (the quantile % then becomes a hyperparameter). The only difference here is that the output of the algorithm chooses them as a result of the fitting. Note however the group membership results will change with the number of latent units (which is a hyperparameter in the case of a GMM), so you either pay Peter or Paul...there's nothing out there that will tell you what an outlier is without making some kind of assumption or setting a hyper-parameter up front. | Gaussian Mixture for detecting outliers
There is a smart way to do this that is implemented by JMP software. In the GMM fitting, there is an option for "outlier cluster" that can be checked. The description of this is below:
The outlier |
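The original answer is about JMP, but the same two ideas can be sketched in R with the mclust package (my assumption; mclust is not mentioned in the answer): fit a Gaussian mixture and flag the lowest-density points, or add a uniform "noise" component analogous to JMP's outlier cluster.
library(mclust)
set.seed(1)
x <- rbind(matrix(rnorm(400), ncol = 2),
           matrix(rnorm(400, mean = 5), ncol = 2),
           matrix(runif(20, -10, 15), ncol = 2))      # a few scattered points
fit <- densityMclust(x, G = 2)
which(fit$density < quantile(fit$density, 0.01))      # lowest-density 1% as outlier candidates
fit_noise <- Mclust(x, G = 2, initialization = list(noise = sample(nrow(x), 20)))
which(fit_noise$classification == 0)                  # 0 = the uniform "outlier" component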
50,378 | Gaussian Mixture for detecting outliers | Just an idea, not using Gaussian processes:
If your dataset is not too big, you could use hierarchical clustering with a linkage method that creates unbalanced trees. The thinner branches of the tree would then represent the outliers.
I happen to know that the single linkage method of R's hclust() function tends to produce unbalanced trees. You would call it as hclust(dist(mydata), method = "single") | Gaussian Mixture for detecting outliers | Just an idea, not using Gaussian processes:
If your dataset is not too big, you could use hierarchical clustering with a linkage method that creates unbalanced trees. The thinner branches of the tree | Gaussian Mixture for detecting outliers
Just an idea, not using Gaussian processes:
If your dataset is not too big, you could use hierarchical clustering with a linkage method that creates unbalanced trees. The thinner branches of the tree would then represent the outliers.
I happen to know that the single linkage method of R's hclust() function tends to produce unbalanced trees. You would call it as hclust(dist(mydata), method = "single") | Gaussian Mixture for detecting outliers
Just an idea, not using Gaussian processes:
If your dataset is not too big, you could use hierarchical clustering with a linkage method that creates unbalanced trees. The thinner branches of the tree |
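A slightly fuller sketch of the single-linkage idea on simulated data (illustrative only): after cutting the tree, very small clusters are the outlier candidates.
set.seed(1)
x  <- rbind(matrix(rnorm(200), ncol = 2), matrix(rnorm(10, mean = 8), ncol = 2))
hc <- hclust(dist(x), method = "single")
cl <- cutree(hc, k = 3)
table(cl)   # thin branches show up as tiny clusters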
50,379 | How to interpret OOBerror while doing data imputation with missForest | Unless I'm mistaken, the units of the Mean Squared Errors of your imputations are expressed in your variables units-squared, not in percentage. Therefore, I believe it is yours to interpret whether a Root Mean Square Error of 0.02, 0.007 or 0.017 is acceptable or not with regards to the units of your variables (as I'm not a statistician, I would be glad if someone would tell me if I understood this right).
Regarding the rest of your question, I do not know how Stekhoven & Buehlmann actually coded missForest() but, according to Wikipedia, Normalized RMSE is usually computed by dividing the RMSE by the observations' mean or range. Consequently, the global NRMSE returned by missForest() is probably an aggregation (perhaps the average) of the NRMSE computed for each individual variable. | How to interpret OOBerror while doing data imputation with missForest | Unless I'm mistaken, the units of the Mean Squared Errors of your imputations are expressed in your variables units-squared, not in percentage. Therefore, I believe it is yours to interpret whether a
Unless I'm mistaken, the units of the Mean Squared Errors of your imputations are expressed in your variables units-squared, not in percentage. Therefore, I believe it is yours to interpret whether a Root Mean Square Error of 0.02, 0.007 or 0.017 is acceptable or not with regards to the units of your variables (as I'm not a statistician, I would be glad if someone would tell me if I understood this right).
Regarding the rest of your question, I do not know how Stekhoven & Buehlmann actually coded missForest() but, according to Wikipedia, Normalized RMSE is usually computed by dividing the RMSE by the observations' mean or range. Consequently, the global NRMSE returned by missForest() is probably an aggregation (perhaps the average) of the NRMSE computed for each individual variable. | How to interpret OOBerror while doing data imputation with missForest
Unless I'm mistaken, the units of the Mean Squared Errors of your imputations are expressed in your variables units-squared, not in percentage. Therefore, I believe it is yours to interpret whether a |
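A hedged usage sketch (the prodNA() helper and the xtrue argument of missForest() are my additions, not discussed in the answer): when the complete data are known, missForest can report the true imputation error next to the OOB estimate, which helps calibrate how to read OOBerror.
library(missForest)
set.seed(1)
iris_mis <- prodNA(iris, noNA = 0.1)        # poke 10% holes into the data
imp <- missForest(iris_mis, xtrue = iris)
imp$OOBerror   # NRMSE for continuous variables, PFC for factors
imp$error      # true imputation error, available only because xtrue was supplied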
50,380 | When is logistic regression Bayes-optimal? | The key question lies in modelling versus knowing the true law.
Assume your data obeys an unknown perfect law $P(y=1|x)=f(x)$. Then the Bayes optimal classifier is "classify y=1 when $f(x)>0.5$". This is true for any law and not related to anything algebraic. In practice you don't know $f$ and you can't, so that the Bayes-optimal classifier is only a theoretical object.
Now, imagine you don't know $f$ but you know that $f(x)=logit^{-1}(\beta x)$ and simply do not know $\beta$. This happens only in simulations where you control the underlying true law and hide $\beta$. You estimate it as $\hat\beta$ and you say "classify y=1 when $logit^{-1}(\hat\beta x)>0.5$". This is not Bayes optimal since you don't have the exact $\beta$. It is asymptotically Bayes optimal since with infinite training data $\hat\beta=\beta$.
But in a real situation, logistic regression is only a guess for the unknown law and it's always false. Not only do you not know the parameter, you also don't know how good an approximation logistic regression is for the true unknown law. Then the logistic regression predictor is not Bayes optimal. Not even asymptotically. Worse: you can't know how far it is from optimality.
There is a case where you can measure this: simulate data with an $f$ that is not logistic and see how good the logistic approximation is. This is not a real situation though. | When is logistic regression Bayes-optimal? | The key question lies in modelling versus knowing the true law.
Assume your data obeys an unknown perfect law $P(y=1|x)=f(x)$. Then the Bayes optimal classifier is "classify y=1 when $f(x)>0.5$". Thi | When is logistic regression Bayes-optimal?
The key question lies in modelling versus knowing the true law.
Assume your data obeys an unknown perfect law $P(y=1|x)=f(x)$. Then the Bayes optimal classifier is "classify y=1 when $f(x)>0.5$". This is true for any law and not related to anything algebraic. In practice you don't know $f$ and you can't, so that the Bayes-optimal classifier is only a theoretical object.
Now, imagine you don't know $f$ but you know that $f(x)=logit^{-1}(\beta x)$ and simply do not know $\beta$. This happens only in simulations where you control the underlying true law and hide $\beta$. You estimate it as $\hat\beta$ and you say "classify y=1 when $logit^{-1}(\hat\beta x)>0.5$". This is not Bayes optimal since you don't have the exact $\beta$. It is asymptotically Bayes optimal since with infinite training data $\hat\beta=\beta$.
But in a real situation, logistic regression is only a guess for the unknown law and it's always false. Not only do you not know the parameter, you also don't know how good an approximation logistic regression is for the true unknown law. Then the logistic regression predictor is not Bayes optimal. Not even asymptotically. Worse: you can't know how far it is from optimality.
There is a case where you can measure this: simulate data with an $f$ that is not logistic and see how good the logistic approximation is. This is not a real situation though. | When is logistic regression Bayes-optimal?
The key question lies in modelling versus knowing the true law.
Assume your data obeys an unknown perfect law $P(y=1|x)=f(x)$. Then the Bayes optimal classifier is "classify y=1 when $f(x)>0.5$". Thi
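A simulation sketch of the last paragraph's idea, in the easy case where the true law is itself logistic, so the Bayes rule is known and the fitted rule can be compared to it (all names and numbers here are hypothetical):
set.seed(1)
n <- 5000
x <- rnorm(n); y <- rbinom(n, 1, plogis(2 * x))        # true law: logit^{-1}(2x)
fit <- glm(y ~ x, family = binomial)
x_new <- rnorm(1e5); y_new <- rbinom(1e5, 1, plogis(2 * x_new))
bayes_pred <- as.numeric(plogis(2 * x_new) > 0.5)      # Bayes rule, using the known beta
fit_pred   <- as.numeric(predict(fit, data.frame(x = x_new), type = "response") > 0.5)
c(bayes = mean(bayes_pred != y_new), fitted = mean(fit_pred != y_new))   # nearly identical error rates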
50,381 | When is logistic regression Bayes-optimal? | I think one could construct an example when the logistic regression is asymptotically Bayes-optimal (i.e., it minimises the expected 0/1 loss).
One way to do this would be to consider a domain with two balanced (i.e., with equal marginal probabilities) normally distributed classes with the same covariance matrix. In this case, the logistic regression would learn the same classifier as LDA (linear discriminant analysis), which is asymptotically Bayes-optimal in this domain (this follows from Theorem 22.7 in L. Wasserman, All of Statistics). | When is logistic regression Bayes-optimal? | I think one could construct an example when the logistic regression is asymptotically Bayes-optimal (i.e., it minimises the expected 0/1 loss).
One way to do this would be to consider a domain with tw | When is logistic regression Bayes-optimal?
I think one could construct an example when the logistic regression is asymptotically Bayes-optimal (i.e., it minimises the expected 0/1 loss).
One way to do this would be to consider a domain with two balanced (i.e., with equal marginal probabilities) normally distributed classes with the same covariance matrix. In this case, the logistic regression would learn the same classifier as LDA (linear discriminant analysis), which is asymptotically Bayes-optimal in this domain (this follows from Theorem 22.7 in L. Wasserman, All of Statistics). | When is logistic regression Bayes-optimal?
I think one could construct an example when the logistic regression is asymptotically Bayes-optimal (i.e., it minimises the expected 0/1 loss).
One way to do this would be to consider a domain with tw |
50,382 | Convert predicted probabilities after downsampling to actual probabilities in classification | The two formulas are equivalent (the first is rather more elegant, IMO).
Let $\alpha$ denote the "original fraction" from the second link, the fraction of the positive class in the population, and let $\alpha'$ denote the (re/over/under)sampled fraction. Keeping $p_s$ as the model's output "probability" score and $p$ the calibrated score as in the first link, the second formula is given in symbols as
$$ p = \frac{1}{1+\frac{\left(\frac{1}{\alpha}-1\right)}{\left(\frac{1}{\alpha'}-1\right)} \cdot \left(\frac{1}{p_s}-1\right)}.$$
That's a terrible mess, but it does have the advantage that each variable appears only once (maybe that's why the post gives it that way?).
The first formula can be rewritten similarly, by dividing numerator and denominator by $\beta p_s$:
$$p = \frac{\beta p_s}{(\beta-1)p_s+1} = \frac{1}{\left(1-\frac{1}{\beta}\right) + \frac{1}{\beta p_s}} = \frac{1}{1+\frac{1}{\beta}\left(-1 + \frac{1}{p_s}\right)}.$$
So now it's clear that these two are equivalent, provided that
$$\beta = \left(\frac{1}{\alpha'}-1\right) / \left(\frac{1}{\alpha}-1\right),$$
which it might be worth pointing out is just the ratio (resampled data to population) of the odds of selecting a positive sample. And indeed, the two formulas for adjusting probabilities have a simpler explanation in terms of the odds: the adjusted odds are $\beta$ times the raw model "odds."
Now, the context of the first link is that we just undersample the negative majority class, and the definition of $\beta$ is the probability that a negative sample is selected. That does use the oversampled prevalence, just not as explicitly.
See also https://datascience.stackexchange.com/q/58631/55122 | Convert predicted probabilities after downsampling to actual probabilities in classification | The two formulas are equivalent (the first is rather more elegant, IMO).
Let $\alpha$ denote the "original fraction" from the second link, the fraction of the positive class in the population, and let | Convert predicted probabilities after downsampling to actual probabilities in classification
The two formulas are equivalent (the first is rather more elegant, IMO).
Let $\alpha$ denote the "original fraction" from the second link, the fraction of the positive class in the population, and let $\alpha'$ denote the (re/over/under)sampled fraction. Keeping $p_s$ as the model's output "probability" score and $p$ the calibrated score as in the first link, the second formula is given in symbols as
$$ p = \frac{1}{1+\frac{\left(\frac{1}{\alpha}-1\right)}{\left(\frac{1}{\alpha'}-1\right)} \cdot \left(\frac{1}{p_s}-1\right)}.$$
That's a terrible mess, but it does have the advantage that each variable appears only once (maybe that's why the post gives it that way?).
The first formula can be rewritten similarly, by dividing numerator and denominator by $\beta p_s$:
$$p = \frac{\beta p_s}{(\beta-1)p_s+1} = \frac{1}{\left(1-\frac{1}{\beta}\right) + \frac{1}{\beta p_s}} = \frac{1}{1+\frac{1}{\beta}\left(-1 + \frac{1}{p_s}\right)}.$$
So now it's clear that these two are equivalent, provided that
$$\beta = \left(\frac{1}{\alpha'}-1\right) / \left(\frac{1}{\alpha}-1\right),$$
which it might be worth pointing out is just the ratio (resampled data to population) of the odds of selecting a positive sample. And indeed, the two formulas for adjusting probabilities have a simpler explanation in terms of the odds: the adjusted odds are $\beta$ times the raw model "odds."
Now, the context of the first link is that we just undersample the negative majority class, and the definition of $\beta$ is the probability that a negative sample is selected. That does use the oversampled prevalence, just not as explicitly.
See also https://datascience.stackexchange.com/q/58631/55122 | Convert predicted probabilities after downsampling to actual probabilities in classification
The two formulas are equivalent (the first is rather more elegant, IMO).
Let $\alpha$ denote the "original fraction" from the second link, the fraction of the positive class in the population, and let |
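A direct translation of the (equivalent) formulas above into R (sketch; the argument names are mine): p_s is the score from the model trained on resampled data, alpha the population positive rate, alpha_prime the positive rate in the resampled training set.
adjust_prob <- function(p_s, alpha, alpha_prime) {
  beta <- (1 / alpha_prime - 1) / (1 / alpha - 1)   # beta, as derived above
  beta * p_s / ((beta - 1) * p_s + 1)               # first formula: p = beta*p_s / ((beta-1)*p_s + 1)
}
adjust_prob(p_s = 0.7, alpha = 0.02, alpha_prime = 0.5)   # e.g. 2% prevalence, 50/50 training mix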
50,383 | how to handle missing data in clustering problem | If you exclude features with missing values, you might bias your conclusions or lose information.
Consider a dataset with 10 patients and their cholesterol values. You are interested in predicting cholesterol values based on these features. You might have one feature, age at beginning of study, and one feature # chol checks last month. The latter is missing in 5 of the patients because they were so healthy that they decided to not even follow up by sending you the data. In this case, if you exclude that feature, you might exclude your best predictor.
A better way is to note that all of those patients who didn't follow up also happened to be the young ones. Also you might note that for the 5 patients who did have # cholesterol check records sent to you, the data was like this
age # checks
50 10
60 20
70 30
80 40
You can see that there is a relationship between cholesterol checks and age; you could even figure out the parameters of a regression. You can use this regression to then fill in the missing values for the young patients. This is the idea behind matrix completion.
The values that you impute will however be single values, and you won't have a sense of how good they really are. For making predictions, you can hold out a test set and see whether your imputation method actually improves results. For clustering, depending on your application, because it's difficult to evaluate your imputation method as a step in some larger pipeline, it might be wise to also consider multiple imputation as suggested by @mkt. | how to handle missing data in clustering problem | If you exclude features with missing values, you might bias your conclusions or lose information.
Consider a dataset with 10 patients and their cholesterol values. You are interested in predicting | how to handle missing data in clustering problem
If you exclude features with missing values, you might bias your conclusions or lose information.
Consider a dataset with 10 patients and their cholesterol values. You are interested in predicting cholesterol values based on these features. You might have one feature, age at beginning of study, and one feature # chol checks last month. The latter is missing in 5 of the patients because they were so healthy that they decided to not even follow up by sending you the data. In this case, if you exclude that feature, you might exclude your best predictor.
A better way is to note that all of those patients who didn't follow up also happened to be the young ones. Also you might note that for the 5 patients who did have # cholesterol check records sent to you, the data was like this
age # checks
50 10
60 20
70 30
80 40
You can see that there is a relationship between cholesterol checks and age; you could even figure out the parameters of a regression. You can use this regression to then fill in the missing values for the young patients. This is the idea behind matrix completion.
The values that you impute will however be single values, and you won't have a sense of how good they really are. For making predictions, you can hold out a test set and see whether your imputation method actually improves results. For clustering, depending on your application, because it's difficult to evaluate your imputation method as a step in some larger pipeline, it might be wise to also consider multiple imputation as suggested by @mkt. | how to handle missing data in clustering problem
If you exclude features with missing values, you might bias your conclusions or lose information.
Consider a dataset with 10 patients and their cholesterol values. You are interested in predicting |
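A sketch of the regression-imputation idea using the toy age/checks table above, via the mice package (my choice; the answer only mentions multiple imputation in general):
library(mice)
d <- data.frame(age    = c(50, 60, 70, 80, 25, 30, 35, 40, 45, 55),
                checks = c(10, 20, 30, 40, NA, NA, NA, NA, NA, NA))
imp <- mice(d, m = 5, method = "norm", printFlag = FALSE)   # regression-based (Bayesian linear) imputation
complete(imp, 1)   # one completed data set; the m = 5 copies reflect imputation uncertainty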
50,384 | Machine learning models that combine sequences and static features? | Just a suggestion, if you're classifying a sequence with an RNN you could add a final fully-connected layer that combines the output of the RNN with your static features (by concatenation) before going to the softmax and outputting the predicted class probabilities. Since this final layer is fully connected with its own set of weights, as long as you scale the features to zero mean and unit variance, the weighting would be done automatically when training the network. | Machine learning models that combine sequences and static features? | Just a suggestion, if you're classifying a sequence with an RNN you could add a final fully-connected layer that combines the output of the RNN with your static features (by concatenation) before going | Machine learning models that combine sequences and static features?
Just a suggestion, if you're classifying a sequence with an RNN you could add a final fully-connected layer that combines the output of the RNN with your static features (by concatenation) before going to the softmax and outputting the predicted class probabilities. Since this final layer is fully connected with its own set of weights, as long as you scale the features to zero mean and unit variance, the weighting would be done automatically when training the network. | Machine learning models that combine sequences and static features?
Just a suggestion, if you're classifying a sequence with an RNN you could add a final fully-connected layer that combines the output of the RNN with your static features (by concatenation) before going
50,385 | correlation between independent variables in linear multiple regression | I apologize in advance for the really long answer, I just don't want to assume any level of familiarity with linear regression. Also, the answer touches on 2-3 different topics, so I wanted to cover all bases.
The answer to your question has to do with what are $B1$ and $B2$. Linear regression is trying to find the unique $\hat{\beta}$ that minimizes the least squares, i.e. in your case: $$\hat{B_0}, \hat{B_1}, \hat{B_2} = \arg\min_{B_0, B_1, B_2} \sum_{i=1}^n\left( sale - B_0 - B_1TV - B_2online\right)^2 $$
or in vector notation:
$$ \hat{\beta} = \arg\min_{\beta\in\mathbb{R}^3} || y - X\beta||^2_2$$ where $y$ is a $n\times1$ vector with your $sale$ data, and $X$ is a $n\times3$ matrix with your variables, i.e. the first column has all 1s, the second column has your $TV$ data, and the third has the $online$ data.
The way to solve this system of equations is: $$\hat{\beta} = (X^TX)^{-1}X^Ty$$
If your data satisfy the full rank assumption, then your $\hat{\beta}$ is unique, because $(X^TX)^{-1}$ is unique. So in the end, it's nothing more than just solving a system of equations.
Where does correlation come into play? If the correlation between $TV$ and $online$ is 1, then your data do not satisfy the full rank assumption, so the matrix above is not invertible, and there's no unique solution. Practically, this means that the algorithm doesn't know where to assign predictive/explanatory power. If it's close to 1 (e.g. > .9), the computer might have trouble finding the exact inverse, so be careful there. If it's less than that, but still high, this will probably inflate your standard errors (multicollinearity).
Finally, why are both variables significant in the univariate cases, but not in the multiple regression? Exactly because they're correlated, they each have a direct and an indirect effect (through the other variable) on $y$. By including only one of them, you're picking up both effects, but if you include both, you're picking up their direct effects (plus any indirect through other missing variables). Assuming that both variables have some effect on $y$ and they correlate with each other, you should include both of them in the regression, because otherwise you're introducing bias (omitted variable bias). | correlation between independent variables in linear multiple regression | I apologize in advance for the really long answer, I just don't want to assume any level of familiarity with linear regression. Also, the answer touches on 2-3 different topics, so I wanted to cover a
I apologize in advance for the really long answer, I just don't want to assume any level of familiarity with linear regression. Also, the answer touches on 2-3 different topics, so I wanted to cover all bases.
The answer to your question has to do with what are $B1$ and $B2$. Linear regression is trying to find the unique $\hat{\beta}$ that minimizes the least squares, i.e. in your case: $$\hat{B_0}, \hat{B_1}, \hat{B_2} = \arg\min_{B_0, B_1, B_2} \sum_{i=1}^n\left( sale - B_0 - B_1TV - B_2online\right)^2 $$
or in vector notation:
$$ \hat{\beta} = \arg\min_{\beta\in\mathbb{R}^3} || y - X\beta||^2_2$$ where $y$ is a $n\times1$ vector with your $sale$ data, and $X$ is a $n\times3$ matrix with your variables, i.e. the first column has all 1s, the second column has your $TV$ data, and the third has the $online$ data.
The way to solve this system of equations is: $$\hat{\beta} = (X^TX)^{-1}X^Ty$$
If your data satisfy the full rank assumption, then your $\hat{\beta}$ is unique, because $(X^TX)^{-1}$ is unique. So in the end, it's nothing more than just solving a system of equations.
Where does correlation come into play? If the correlation between $TV$ and $online$ is 1, then your data do not satisfy the full rank assumption, so the matrix above is not invertible, and there's no unique solution. Practically, this means that the algorithm doesn't know where to assign predictive/explanatory power. If it's close to 1 (e.g. > .9), the computer might have trouble finding the exact inverse, so be careful there. If it's less than that, but still high, this will probably inflate your standard errors (multicollinearity).
Finally, why are both variables significant in the univariate cases, but not in the multiple regression? Exactly because they're correlated, they each have a direct and an indirect effect (through the other variable) on $y$. By including only one of them, you're picking up both effects, but if you include both, you're picking up their direct effects (plus any indirect through other missing variables). Assuming that both variables have some effect on $y$ and they correlate with each other, you should include both of them in the regression, because otherwise you're introducing bias (omitted variable bias). | correlation between independent variables in linear multiple regression
I apologize in advance for the really long answer, I just don't want to assume any level of familiarity with linear regression. Also, the answer touches on 2-3 different topics, so I wanted to cover a |
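A numerical sketch of the normal equations above and of what correlation between the regressors does (simulated data; the coefficient values are made up):
set.seed(1)
n <- 200
tv     <- rnorm(n)
online <- 0.9 * tv + sqrt(1 - 0.9^2) * rnorm(n)   # correlated ad budgets
sale   <- 2 + 1.5 * tv + 0.2 * online + rnorm(n)
X <- cbind(1, tv, online)
drop(solve(t(X) %*% X) %*% t(X) %*% sale)   # beta-hat = (X'X)^{-1} X'y
coef(lm(sale ~ tv + online))                # same estimates from lm()
coef(lm(sale ~ online))                     # univariate slope also absorbs tv's indirect effect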
50,386 | correlation between independent variables in linear multiple regression | If you are asking which one is the main driver, then it is TV: spending on TV results in a statistically significant increase in sales, while the online coefficient being close to zero suggests it has no effect on sales. I can't say that for sure, though, because you haven't reported the p-value of the online ad budget, which would tell us whether the online effect is significantly different from zero or not. Overall, this tells you that TV advertising is actually affecting sales, online advertising has little effect, and the only reason you are seeing a significant coefficient for online ad on its own is the correlation between the TV budget and the online budget. | correlation between independent variables in linear multiple regression | If you are asking which one is the main driver, then it is TV: spending on TV results in a statistically significant increase in sales, while the online coefficient being close to zero suggests it ha | correlation between independent variables in linear multiple regression
If you are asking which one is the main driver, then it will be TV: spending on TV results in a statistically significant increase in sales, while the online ad coefficient being close to zero suggests it has little to no effect on sales. I can't say that for sure, though, because you haven't mentioned the p-value of the online ad budget, which would tell us whether the online ad effect is significantly different from zero. Overall, this tells you that TV advertising is actually affecting sales, online advertising has little effect, and the only reason you are seeing a significant coefficient for online ads is the correlation between the TV budget and the online budget. | correlation between independent variables in linear multiple regression
If you are asking which one is the main driver then it will be TV because spending on TV will result in statistically increased sales and online ad due to being close to zero represents that it has no |
50,387 | correlation between independent variables in linear multiple regression | I think a simple example may help. Assume $sale = TV$ (yes, exactly) and
$online = sale + \epsilon$, where $\epsilon \sim \mathcal{N}(0, 0.001)$ is some noise with small amplitude. If you regress $sale$ against $TV$ and $online$, the minimizer will always give all weight to $TV$ (because this gives an exact fit). If you regress $sale$ against $TV$ alone you will also get a perfect fit. But if you regress $sale$ against $online$ alone you will get pretty good results (because they are almost the same).
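A minimal R sketch of this toy setup (simulated numbers, treating 0.001 as the noise variance):
set.seed(42)
TV     <- runif(100, 0, 10)
sale   <- TV                                  # sale equals TV exactly
online <- sale + rnorm(100, sd = sqrt(0.001)) # sale plus a little noise
coef(lm(sale ~ TV + online))   # essentially all weight on TV (R may warn about a perfect fit)
coef(lm(sale ~ TV))            # perfect fit on its own
summary(lm(sale ~ online))$r.squared  # very close to 1, but not a perfect fit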
It looks like in your situation both variables are good predictors, it's just that one of them is better than the other. | correlation between independent variables in linear multiple regression | I think a simple example may help. Assume $sale = TV$ (yes, exactly) and
$online = sale + \epsilon$, where $\epsilon \sim \mathcal{N}(0, 0.001)$ is some noise with small amplitude. If your regress $sa | correlation between independent variables in linear multiple regression
I think a simple example may help. Assume $sale = TV$ (yes, exactly) and
$online = sale + \epsilon$, where $\epsilon \sim \mathcal{N}(0, 0.001)$ is some noise with small amplitude. If you regress $sale$ against $TV$ and $online$, the minimizer will always give all weight to $TV$ (because this gives an exact fit). If you regress $sale$ against $TV$ alone you will also get a perfect fit. But if you regress $sale$ against $online$ alone you will get pretty good results (because they are almost the same).
It looks like in your situation both variables are good predictors, it's just that one of them is better than the other. | correlation between independent variables in linear multiple regression
I think a simple example may help. Assume $sale = TV$ (yes, exactly) and
$online = sale + \epsilon$, where $\epsilon \sim \mathcal{N}(0, 0.001)$ is some noise with small amplitude. If your regress $sa |
50,388 | Reported Coefficients for Glmnet using Caret | Caret will fit the final model using glmnet again, so it reports the coefficients in the same way as glmnet, which is in the scale of the original data:
library(mlbench)
library(caret)
library(glmnet)
data(BostonHousing)
mymodel = train(medv ~ .,data=BostonHousing,
method="glmnet",tuneLength=5,family="gaussian",
trControl=trainControl(method="cv",number=3))
coef(mymodel$finalModel, mymodel$bestTune$lambda)
1
(Intercept) 35.320709389
crim -0.103881511
zn 0.043895667
indus 0.003208220
chas1 2.711134571
nox -16.888148979
rm 3.839322105
age .
dis -1.440898136
rad 0.276505032
tax -0.010852819
ptratio -0.938477290
b 0.009195566
lstat -0.521371464
gmodel = glmnet(x=as.matrix(BostonHousing[,-14]),y=BostonHousing[,14],
lambda=mymodel$bestTune$lambda)
coef(gmodel)  # print the glmnet coefficients at the same lambda, which produces the output below
s0
crim -0.098276800
zn 0.041402890
indus .
chas 2.680135523
nox -16.309105862
rm 3.862803869
age .
dis -1.395580453
rad 0.253522033
tax -0.009853769
ptratio -0.930332033
b 0.009020162
lstat -0.522732773 | Reported Coefficients for Glmnet using Caret | Caret will fit the final model using glmnet again, so it reports the coefficients in the same way as glmnet, which is in the scale of the original data:
library(mlbench)
library(caret)
library(glmnet) | Reported Coefficients for Glmnet using Caret
Caret will fit the final model using glmnet again, so it reports the coefficients in the same way as glmnet, which is in the scale of the original data:
library(mlbench)
library(caret)
library(glmnet)
data(BostonHousing)
mymodel = train(medv ~ .,data=BostonHousing,
method="glmnet",tuneLength=5,family="gaussian",
trControl=trainControl(method="cv",number=3))
coef(mymodel$finalModel, mymodel$bestTune$lambda)
1
(Intercept) 35.320709389
crim -0.103881511
zn 0.043895667
indus 0.003208220
chas1 2.711134571
nox -16.888148979
rm 3.839322105
age .
dis -1.440898136
rad 0.276505032
tax -0.010852819
ptratio -0.938477290
b 0.009195566
lstat -0.521371464
gmodel = glmnet(x=as.matrix(BostonHousing[,-14]),y=BostonHousing[,14],
lambda=mymodel$bestTune$lambda)
coef(gmodel)  # print the glmnet coefficients at the same lambda, which produces the output below
s0
crim -0.098276800
zn 0.041402890
indus .
chas 2.680135523
nox -16.309105862
rm 3.862803869
age .
dis -1.395580453
rad 0.253522033
tax -0.009853769
ptratio -0.930332033
b 0.009020162
lstat -0.522732773 | Reported Coefficients for Glmnet using Caret
Caret will fit the final model using glmnet again, so it reports the coefficients in the same way as glmnet, which is in the scale of the original data:
library(mlbench)
library(caret)
library(glmnet) |
50,389 | Decompose a time series data into deterministic trend and stochastic trend | Before I receive your data I would like to take the "bully pulpit" and expound on the task at hand and how I would go about solving this riddle. Your suggested approach, I believe, is to form an ARIMA model using procedures which implicitly specify no time trend variables, thus incorrectly concluding about required differencing etc. You assume no outliers, pulses/seasonal pulses and no level shifts (intercept changes). After probable mis-specification of the ARIMA filter/structure you then assume 1 trend/1 intercept and piece it together. This is an approach which, although programmable, is fraught with logical flaws, never mind non-constant error variance or non-constant parameters over time.
The first step in analysis is to list the possible sample space that should be investigated and in the absence of direct solution conduct a computer based solution (trial and error) which uses a myriad of possible trials/combinations yielding a possible suggested optimal solution.
The sample space contains
1 the number of distinct trends
2 the number of possible intercepts
3 the number and kind of differencing operators
4 the form of the ARMA model
5 the number of one-time pulses
6 the number of seasonal pulses ( seasonal factors )
7 any required error variance change points suggesting the need for weighted Least Squares
8 any required power transformation reflecting a linkage between the error variance and the expected value
Simply evaluate all possible permutations of these 8 factors and select that unique combination that minimizes some error measurement because ORDER IS IMPORTANT !
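As a rough, greatly simplified sketch of such a tournament (this is not AUTOBOX and covers only two of the eight factors; y stands for the series and the candidate break points are taken as given):
library(forecast)
candidates <- expand.grid(trend_break = c(FALSE, TRUE),
                          level_shift = c(FALSE, TRUE))
fit_one <- function(y, trend_break, level_shift, tbreak = 51, tshift = 65) {
  n    <- length(y)
  xreg <- cbind(trend = 1:n)
  if (trend_break) xreg <- cbind(xreg, broken_trend = pmax(0, (1:n) - tbreak))
  if (level_shift) xreg <- cbind(xreg, shift = as.numeric((1:n) >= tshift))
  auto.arima(y, xreg = xreg)   # let auto.arima handle the ARMA/differencing part
}
fits <- mapply(function(tb, ls) fit_one(y, tb, ls),
               candidates$trend_break, candidates$level_shift, SIMPLIFY = FALSE)
sapply(fits, AIC)              # the smallest AIC points to the winning combination
A full implementation would also search over pulses, seasonal pulses, variance change points and power transformations, as listed above.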
If this is onerous , so be it and I look forward to receiving your tsim2 so I can (possibly) demonstrate an approach that speaks to this "thorny issue" using some of my favorite toys.
Note that if you simulated (tightly) then your approach might be the answer but the question that I have is "your approach robust to data violations" or is simply a cook-book approach that works on this data set and fails on others. Trust but Verify !
EDITED AFTER RECEIPT OF DATA (100 VALUES)
I trust that this discussion will highlight the need for comprehensive/programmable approaches to forming useful models. As discussed above, an efficient computer-based tournament looking at possible different combinations (a maximum of 256 possible) yielded the following suggested initial model approach.
The concept here is to "duplicate/approximate the human eye" by examining competing alternatives, which is what (in my opinion) we do when performing visual identification of structure. Note that in this case most eyeballs will not see the level shift at period 65 and will simply focus on the major break in trend around period 51.
1 IDENTIFY DETERMINISTIC BREAK POINTS IN TREND
2 IDENTIFY INTERCEPT CHANGES
3 EVALUATE NEED FOR ARIMA AUGMENTATION
4 EVALUATE NEED FOR PULSES
5 SIMPLIFY VIA NECESSITY TESTS
detailing both a trend change (51) and an intercept change (65). Model diagnostic checking (always a good idea in iterative approaches to model form) yielded the following acf suggesting that improvement was necessary to render a set of residuals free of structure. An augmented model was then suggested of the form with an insignificant AR(1) coefficient.
The final model is here with model statistics and here
The residuals from this model are presented here with an acf of
The Actual/Fit and Forecast graph is here . The cleansed vs the actual is revealing as it details the level shift effect
In summary, where the OP simulated a (1,1,0) for the first 50 observations, he then abridged the last 50 observations, effectively coloring/changing the composite ARMA process to a (1,0,0) while embodying the empirically identified 3 predictors.
Comprehensive data analysis incorporating advanced search procedures is the objective . This data set is "thorny" and I look forward to any suggested improvements that may arise from this discussion. I used a beta version of AUTOBOX (which I have helped to develop) as my tool of choice.
As to your "proposed method", it may work for this series, but there are way too many assumptions (one and only one stochastic trend, one and only one deterministic trend (1,2,3,...), no pulses, no level shifts (intercept changes), no seasonal pulses, constant error variance, constant parameters over time, et al.) to suggest generality of approach. You are arguing from the specific to the general. There are tons of wrong ad hoc solutions waiting to be specified and just a handful of "correct solutions", of which my approach is just one.
A close-up showing observations 51 to 100 suggests a significant deviation/change in pattern (i.e. implied intercept) starting at period 65 (which was picked/identified by the analytics as a level shift, i.e. a change in intercept), suggesting a possible simulation flaw as obs 51-64 have a different pattern than obs 65-100. | Decompose a time series data into deterministic trend and stochastic trend | Before I receive your data I would like to take the "bully pulpit" and expound on the task at hand and how I would go about solving this riddle. Your suggested approach I believe is to form an ARIMA m | Decompose a time series data into deterministic trend and stochastic trend
Before I receive your data I would like to take the "bully pulpit" and expound on the task at hand and how I would go about solving this riddle. Your suggested approach, I believe, is to form an ARIMA model using procedures which implicitly specify no time trend variables, thus incorrectly concluding about required differencing etc. You assume no outliers, pulses/seasonal pulses and no level shifts (intercept changes). After probable mis-specification of the ARIMA filter/structure you then assume 1 trend/1 intercept and piece it together. This is an approach which, although programmable, is fraught with logical flaws, never mind non-constant error variance or non-constant parameters over time.
The first step in analysis is to list the possible sample space that should be investigated and in the absence of direct solution conduct a computer based solution (trial and error) which uses a myriad of possible trials/combinations yielding a possible suggested optimal solution.
The sample space contains
1 the number of distinct trends
2 the number of possible intercepts
3 the number and kind of differencing operators
4 the form of the ARMA model
5 the number of one-time pulses
6 the number of seasonal pulses ( seasonal factors )
7 any required error variance change points suggesting the need for weighted Least Squares
8 any required power transformation reflecting a linkage between the error variance and the expected value
Simply evaluate all possible permutations of these 8 factors and select that unique combination that minimizes some error measurement because ORDER IS IMPORTANT !
If this is onerous , so be it and I look forward to receiving your tsim2 so I can (possibly) demonstrate an approach that speaks to this "thorny issue" using some of my favorite toys.
Note that if you simulated (tightly) then your approach might be the answer but the question that I have is "your approach robust to data violations" or is simply a cook-book approach that works on this data set and fails on others. Trust but Verify !
EDITED AFTER RECEIPT OF DATA (100 VALUES)
I trust that this discussion will highlight the need for comprehensive/programmable approaches to forming useful models. As discussed above, an efficient computer-based tournament looking at possible different combinations (a maximum of 256 possible) yielded the following suggested initial model approach.
The concept here is to "duplicate/approximate the human eye" by examining competing alternatives, which is what (in my opinion) we do when performing visual identification of structure. Note that in this case most eyeballs will not see the level shift at period 65 and will simply focus on the major break in trend around period 51.
1 IDENTIFY DETERMINISTIC BREAK POINTS IN TREND
2 IDENTIFY INTERCEPT CHANGES
3 EVALUATE NEED FOR ARIMA AUGMENTATION
4 EVALUATE NEED FOR PULSES
5 SIMPLIFY VIA NECESSITY TESTS
detailing both a trend change (51) and an intercept change (65). Model diagnostic checking (always a good idea in iterative approaches to model form) yielded the following acf suggesting that improvement was necessary to render a set of residuals free of structure. An augmented model was then suggested of the form with an insignificant AR(1) coefficient.
The final model is here with model statistics and here
The residuals from this model are presented here with an acf of
The Actual/Fit and Forecast graph is here . The cleansed vs the actual is revealing as it details the level shift effect
In summary, where the OP simulated a (1,1,0) for the first 50 observations, he then abridged the last 50 observations, effectively coloring/changing the composite ARMA process to a (1,0,0) while embodying the empirically identified 3 predictors.
Comprehensive data analysis incorporating advanced search procedures is the objective . This data set is "thorny" and I look forward to any suggested improvements that may arise from this discussion. I used a beta version of AUTOBOX (which I have helped to develop) as my tool of choice.
As to your "proposed method", it may work for this series, but there are way too many assumptions (one and only one stochastic trend, one and only one deterministic trend (1,2,3,...), no pulses, no level shifts (intercept changes), no seasonal pulses, constant error variance, constant parameters over time, et al.) to suggest generality of approach. You are arguing from the specific to the general. There are tons of wrong ad hoc solutions waiting to be specified and just a handful of "correct solutions", of which my approach is just one.
A close-up showing observations 51 to 100 suggests a significant deviation/change in pattern (i.e. implied intercept) starting at period 65 (which was picked/identified by the analytics as a level shift, i.e. a change in intercept), suggesting a possible simulation flaw as obs 51-64 have a different pattern than obs 65-100. | Decompose a time series data into deterministic trend and stochastic trend
Before I receive your data I would like to take the "bully pulpit" and expound on the task at hand and how I would go about solving this riddle. Your suggested approach I believe is to form an ARIMA m |
50,390 | predicting tree structure | I know it's too late to answer, but still, see below:
I think you are looking for a natural-language-to-SQL-statement kind of problem; a few solutions developed in the last few months are listed below:
SEQ2SEQ method for SEQ2SQL
SQLNET https://arxiv.org/pdf/1711.04436.pdf
few more: https://github.com/sriniiyer/nl2sql
Enjoy! | predicting tree structure | I know its too late to answer, but still find below:
I think you are looking for Natural language to SQL statement kind of problem statement, there are few solutions developed in last few months liste | predicting tree structure
I know it's too late to answer, but still, see below:
I think you are looking for a natural-language-to-SQL-statement kind of problem; a few solutions developed in the last few months are listed below:
SEQ2SEQ method for SEQ2SQL
SQLNET https://arxiv.org/pdf/1711.04436.pdf
few more: https://github.com/sriniiyer/nl2sql
Enjoy! | predicting tree structure
I know its too late to answer, but still find below:
I think you are looking for Natural language to SQL statement kind of problem statement, there are few solutions developed in last few months liste |
50,391 | Bias-variance: is it really a "trade-off"? | I share your skepticism that there is a tradeoff. A typical way to think about the bias-variance decomposition of MSE, such as in regularized regression, is that we accept a bit of bias in our estimator in exchange for a large reduction in variance. However, we do this to achieve lower MSE, not to maintain the MSE. Thus, while I understand the "trade" being used to describe trading your unbiased, high variance estimator for a slightly biased, low variance estimator, "tradeoff" to me implies keeping MSE constant, and I try not to describe it as a "tradeoff", preferring to refer to a bias-variance "decomposition". | Bias-variance: is it really a "trade-off"? | I share your skepticism that there is a tradeoff. A typical way to think about the bias-variance decomposition of MSE, such as in regularized regression, is that we accept a bit of bias in our estimat | Bias-variance: is it really a "trade-off"?
I share your skepticism that there is a tradeoff. A typical way to think about the bias-variance decomposition of MSE, such as in regularized regression, is that we accept a bit of bias in our estimator in exchange for a large reduction in variance. However, we do this to achieve lower MSE, not to maintain the MSE. Thus, while I understand the "trade" being used to describe trading your unbiased, high variance estimator for a slightly biased, low variance estimator, "tradeoff" to me implies keeping MSE constant, and I try not to describe it as a "tradeoff", preferring to refer to a bias-variance "decomposition". | Bias-variance: is it really a "trade-off"?
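A small simulation makes the point, assuming a shrunken sample mean as the deliberately biased estimator:
set.seed(1)
mu  <- 1
est <- replicate(10000, {
  x <- rnorm(10, mean = mu, sd = 3)
  c(unbiased = mean(x), shrunk = 0.8 * mean(x))  # shrinking introduces bias
})
bias2    <- (rowMeans(est) - mu)^2
variance <- apply(est, 1, var)
rbind(bias2, variance, mse = bias2 + variance)
# the shrunk estimator is biased, yet its variance drops enough that total MSE is lower,
# which is why the "trade" is made in the first place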
I share your skepticism that there is a tradeoff. A typical way to think about the bias-variance decomposition of MSE, such as in regularized regression, is that we accept a bit of bias in our estimat |
50,392 | Unsure if this derivation for covariance function is valid? | You are correct.
The computation boils down to figuring out the expression of:
$$f(s,t) = \mathbb{E}\left[\left(\int_0^t e^{au}dW_u\right) \left(\int_0^s e^{av}dW_v\right) \right]$$
We can suppose $s \leq t$ without any loss of generality.
Expanding, then using the fact that Brownian motion has independent increments, and finally Ito's isometry, we can write:
$$\begin{aligned}
f(s,t)
& = \mathbb{E}\left[\int_0^s e^{au}dW_u \int_0^s e^{au}dW_u \right] +
\mathbb{E}\left[\int_s^t e^{au}dW_u \int_0^s e^{au}dW_u \right] \\
& = \mathbb{E}\left[\left(\int_0^s e^{au}dW_u \right)^2\right] + 0 \ \ \text{ (independent increments)}\\
& = \int_0^s e^{2au} du \ \ \ \text{ (Ito's isometry)} \\
& = \frac{1}{2a} (e^{2as} - 1)
\end{aligned}$$ | Unsure if this derivation for covariance function is valid? | You are correct.
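As a numerical sanity check (a Monte Carlo sketch on an Euler grid, assuming for example a = 0.5, s = 1, t = 2):
set.seed(1)
a <- 0.5; s <- 1; t <- 2; dt <- 1e-3
grid_t <- seq(dt, t, by = dt)
sims <- replicate(5000, {
  dW <- rnorm(length(grid_t), sd = sqrt(dt))
  I  <- cumsum(exp(a * (grid_t - dt)) * dW)  # left-endpoint (Ito) approximation of int e^{au} dW_u
  c(I_s = I[round(s / dt)], I_t = I[length(I)])
})
cov(sims["I_s", ], sims["I_t", ])   # Monte Carlo estimate
(exp(2 * a * s) - 1) / (2 * a)      # theoretical value, since min(s, t) = s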
The computation boils down to figuring out the expression of:
$$f(s,t) = \mathbb{E}\left[\left(\int_0^t e^{au}dW_u\right) \left(\int_0^s e^{av}dW_v\right) \right]$$
We can suppose $s | Unsure if this derivation for covariance function is valid?
You are correct.
The computation boils down to figuring out the expression of:
$$f(s,t) = \mathbb{E}\left[\left(\int_0^t e^{au}dW_u\right) \left(\int_0^s e^{av}dW_v\right) \right]$$
We can suppose $s \leq t$ without any loss of generality.
Expanding, then using the fact that Brownian motion has independent increments, and finally Ito's isometry, we can write:
$$\begin{aligned}
f(s,t)
& = \mathbb{E}\left[\int_0^s e^{au}dW_u \int_0^s e^{au}dW_u \right] +
\mathbb{E}\left[\int_s^t e^{au}dW_u \int_0^s e^{au}dW_u \right] \\
& = \mathbb{E}\left[\left(\int_0^s e^{au}dW_u \right)^2\right] + 0 \ \ \text{ (independent increments)}\\
& = \int_0^s e^{2au} du \ \ \ \text{ (Ito's isometry)} \\
& = \frac{1}{2a} (e^{2as} - 1)
\end{aligned}$$ | Unsure if this derivation for covariance function is valid?
You are correct.
The computation boils down to figuring out the expression of:
$$f(s,t) = \mathbb{E}\left[\left(\int_0^t e^{au}dW_u\right) \left(\int_0^s e^{av}dW_v\right) \right]$$
We can suppose $s |
50,393 | How can eigenfaces (PCA eigenvectors on face image data) be displayed as images? | PCA does dimensional reduction by expressing $D$ dimensional vectors on an $M$ dimensional subspace, with $M<D.$ The vector itself can be written as a linear combination of $M$ eigenvectors, where the eigenvector is itself a unit vector that lives in the $D$ dimensional space.
Consider, for example, a two dimensional space which we reduce to one dimension using PCA. We find that the principal eigenvector is the unit vector that points equally in the positive $\hat{x}$ and $\hat{y}$ direction, i.e.
$$
\hat{v} = \frac{1}{\sqrt{2}} (\hat{x} + \hat{y}).
$$
In this case I'm using the hat ($\hat{x}$) symbol to indicate that it's a unit vector. You can think of this as a one-dimensional line going through a two-dimensional plane. In our reduced space, we can express any point $w$ in the two dimensional space as a one-dimensional (or scalar) value by projecting it onto the eigenvector, i.e. by calculating $w \cdot \hat{v}.$ So the point $(3,2)$ becomes $5/\sqrt{2},$ etc. But the eigenvector $\hat{v}$ is still expressed in the original two dimensions.
In general, we express a $D$ dimensional vector, $x,$ as a reduced $M$ dimensional vector $a$, where each component $a_i$ of $a$ is given by,
$$
a_i = \sum_j x_j V_{i j}
$$
where $V_{i j}$ is the $j$th component of the $i$'th eigenvector, and $i = 1, \dots, M$ and $j = 1, \dots, D.$ For that to work, the $i$th eigenvector must have $D$ components to take an inner product with $x$.
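A short R sketch with fake data shows the dimensions (the 200 images and 255 x 255 size follow the question; this is only an illustration, not the poster's code):
X   <- matrix(rnorm(200 * 65025), nrow = 200, ncol = 65025)   # 200 flattened 255 x 255 "images"
pca <- prcomp(X, center = TRUE, rank. = 200)                  # somewhat heavy; shrink the size for a quick test
dim(pca$rotation)   # 65025 x 200: each eigenvector has D = 65025 components
dim(pca$x)          # 200 x 200: each image reduced to M = 200 coordinates
eigenface1 <- matrix(pca$rotation[, 1], nrow = 255, ncol = 255)  # reshape an eigenvector back into an image
image(eigenface1, col = grey.colors(256))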
In your case, you can express a "reduced" vector of 200 components by taking the original image, a vector of 65025 components, and taking its inner product with each of the 200 images, each of which has 65025 components. Each inner product result is a component of your 200-dimensional vector. We expect each eigenvector to have the same number of dimensions as the original space. That is, we expect $M$ eigenvectors, each of which are $D$-dimensional. | How can eigenfaces (PCA eigenvectors on face image data) be displayed as images? | PCA does dimensional reduction by expressing $D$ dimensional vectors on an $M$ dimensional subspace, with $M<D.$ The vector itself can be written as a linear combination of $M$ eigenvectors, where the | How can eigenfaces (PCA eigenvectors on face image data) be displayed as images?
PCA does dimensional reduction by expressing $D$ dimensional vectors on an $M$ dimensional subspace, with $M<D.$ The vector itself can be written as a linear combination of $M$ eigenvectors, where the eigenvector is itself a unit vector that lives in the $D$ dimensional space.
Consider, for example, a two dimensional space which we reduce to one dimension using PCA. We find that the principal eigenvector is the unit vector that points equally in the positive $\hat{x}$ and $\hat{y}$ direction, i.e.
$$
\hat{v} = \frac{1}{\sqrt{2}} (\hat{x} + \hat{y}).
$$
In this case I'm using the hat ($\hat{x}$) symbol to indicate that it's a unit vector. You can think of this as a one-dimensional line going through a two-dimensional plane. In our reduced space, we can express any point $w$ in the two dimensional space as a one-dimensional (or scalar) value by projecting it onto the eigenvector, i.e. by calculating $w \cdot \hat{v}.$ So the point $(3,2)$ becomes $5/\sqrt{2},$ etc. But the eigenvector $\hat{v}$ is still expressed in the original two dimensions.
In general, we express a $D$ dimensional vector, $x,$ as a reduced $M$ dimensional vector $a$, where each component $a_i$ of $a$ is given by,
$$
a_i = \sum_j x_j V_{i j}
$$
where $V_{i j}$ is the $j$th component of the $i$'th eigenvector, and $i = 1, \dots, M$ and $j = 1, \dots, D.$ For that to work, the $i$th eigenvector must have $D$ components to take an inner product with $x$.
In your case, you can express a "reduced" vector of 200 components by taking the original image, a vector of 65025 components, and taking its inner product with each of the 200 images, each of which has 65025 components. Each inner product result is a component of your 200-dimensional vector. We expect each eigenvector to have the same number of dimensions as the original space. That is, we expect $M$ eigenvectors, each of which are $D$-dimensional. | How can eigenfaces (PCA eigenvectors on face image data) be displayed as images?
PCA does dimensional reduction by expressing $D$ dimensional vectors on an $M$ dimensional subspace, with $M<D.$ The vector itself can be written as a linear combination of $M$ eigenvectors, where the |
50,394 | Tails of products of random variables | A counter-example:
Let X be the distribution with 99% of its probability mass at 100, and the rest of its probability mass at 0. Let t be 99.5.
In cases where the realized value of X is 0, multiplication by Y will never result in a product above 99.5. (This is essentially true even if 1% of the probability mass concentrates slightly above zero, rather than exactly on zero). In cases where the realized value of X is 100, multiplication by Y will frequently result in a product less than 99.5. | Tails of products of random variables | A counter-example:
Let X be the distribution with 99% of its probability mass at 100, and the rest of its probability mass at 0. Let t be 99.5.
In cases where the realized value of X is 0, multiplica | Tails of products of random variables
A counter-example:
Let X be the distribution with 99% of its probability mass at 100, and the rest of its probability mass at 0. Let t be 99.5.
In cases where the realized value of X is 0, multiplication by Y will never result in a product above 99.5. (This is essentially true even if 1% of the probability mass concentrates slightly above zero, rather than exactly on zero). In cases where the realized value of X is 100, multiplication by Y will frequently result in a product less than 99.5. | Tails of products of random variables
A counter-example:
Let X be the distribution with 99% of its probability mass at 100, and the rest of its probability mass at 0. Let t be 99.5.
In cases where the realized value of X is 0, multiplica |
50,395 | Tails of products of random variables | This property doesn't hold true for all non-negative distributions of $X$.
Consider the case $X \sim \text{Bernoulli}(p)$, for some $0<p<1 \implies E(X)=p$
and $Y \sim \chi^2(1)$
For $t\ \text{such that, }\ p<t<1$, $P(X>t) = P(X=1) =p$
$P(X\cdot Y>t) = P(Y>t\mid X=1)\,P(X=1) = P(Y>t)\cdot p<p$
$\implies P(X>t) > P(X.Y>t)\\$
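A quick simulation of this counter-example, taking for instance p = 0.5 and t = 0.7:
set.seed(1)
p <- 0.5; t <- 0.7
X <- rbinom(1e6, 1, p)
Y <- rchisq(1e6, df = 1)
mean(X > t)      # about 0.5, i.e. p
mean(X * Y > t)  # clearly smaller, as derived above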
#
Update on the special case of X
$X = z'Kz/z'z$ where $z \sim N(0,I)$ and $K$ is positive definite
$\text{So K can be written as,} \\ K = U'DU \text{ where } U \text{ is orthogonal and } D \text{ is diagonal matrix with } d_i>0 \ \forall i$
$ \implies X=z'U'DUz/z'U'Uz =\sum_i d_iv_i^2/\sum_iv_i^2 \text{, where } V=Uz \sim N(0,I)$
$X = \sum_id_iv_i^2/\sum_i v_i^2,$ where $v_i^2 \sim \chi^2(1)$
Let us define $w_i = v_i^2/\sum v_i^2 \implies w_i \sim Beta(1/2,(n-1)/2)$
$E(w_i) = 1/n \implies E(X) = (\sum_{i=1}^n d_i)/n$
Since $w_i$'s are not independent it gets a bit complicated to derive the closed form distribution of X.
For simplicity let us look at the case when K is 2x2 matrix and D = Diagonal$(d_1,d_2)$
$X = d_1 w_1 + d_2 (1-w_1) = d_2 + (d_1 - d_2)*w_1 $, where $w_1 \sim Beta(1/2,1/2)$
$Y \sim Gamma(n/2,n/2)$
A Contradicting Example for special case of X
$Y \sim Gamma(1/2,1/2),\ \ Median(Y) \approx 0.47$
$X = d_1 + (d_2 - d_1)W,$ let $d_1=0.2,d_2=0.3 \implies X \in [0.2,0.3],$ $Median(X) = 0.25$
$XY \le 0.3Y \implies Median(XY) \le 0.3*Median(Y) \le 0.14$
Now since $E(X) = 0.25,$ consider $t=0.25 + \epsilon >E(X),$ for some small $\epsilon > 0.$ Also, $t > Median(XY)$
$P(X>t) \approx 0.5$
$P(XY>t) < P(XY>Median(XY)) = 0.5 \implies P(XY>t) < 0.5 $
$P(XY>t) < P(X>t),$ this example disproves it even for your special case as well.
I have run a few simulations with different values of $K$ and found a few more contradicting cases. | Tails of products of random variables | This property doesn't hold true for all non-negative distributions of $X$.
Consider the case $X \sim \text{Bernouli}(p)$, for some $0<p<1 \implies E(X)=p$
and $Y \sim \chi^2(1)$
For $t\ \text{such t | Tails of products of random variables
This property doesn't hold true for all non-negative distributions of $X$.
Consider the case $X \sim \text{Bernoulli}(p)$, for some $0<p<1 \implies E(X)=p$
and $Y \sim \chi^2(1)$
For $t\ \text{such that, }\ p<t<1$, $P(X>t) = P(X=1) =p$
$P(X\cdot Y>t) = P(Y>t\mid X=1)\,P(X=1) = P(Y>t)\cdot p<p$
$\implies P(X>t) > P(X.Y>t)\\$
#
Update on the special case of X
$X = z'Kz/z'z$ where $z \sim N(0,I)$ and $K$ is positive definite
$\text{So K can be written as,} \\ K = U'DU \text{ where } U \text{ is orthogonal and } D \text{ is diagonal matrix with } d_i>0 \ \forall i$
$ \implies X=z'U'DUz/z'U'Uz =\sum_i d_iv_i^2/\sum_iv_i^2 \text{, where } V=Uz \sim N(0,I)$
$X = \sum_id_iv_i^2/\sum_i v_i^2,$ where $v_i^2 \sim \chi^2(1)$
Let us define $w_i = v_i^2/\sum v_i^2 \implies w_i \sim Beta(1/2,(n-1)/2)$
$E(w_i) = 1/n \implies E(X) = (\sum_{i=1}^n d_i)/n$
Since $w_i$'s are not independent it gets a bit complicated to derive the closed form distribution of X.
For simplicity let us look at the case when K is 2x2 matrix and D = Diagonal$(d_1,d_2)$
$X = d_1 w_1 + d_2 (1-w_1) = d_2 + (d_1 - d_2)*w_1 $, where $w_1 \sim Beta(1/2,1/2)$
$Y \sim Gamma(n/2,n/2)$
A Contradicting Example for special case of X
$Y \sim Gamma(1/2,1/2),\ \ Median(Y) \approx 0.47$
$X = d_1 + (d_2 - d_1)W,$ let $d_1=0.2,d_2=0.3 \implies X \in [0.2,0.3],$ $Median(X) = 0.25$
$XY \le 0.3Y \implies Median(XY) \le 0.3*Median(Y) \le 0.14$
Now since $E(X) = 0.25,$ consider $t=0.25 + \epsilon >E(X),$ for some small $\epsilon > 0.$ Also, $t > Median(XY)$
$P(X>t) \approx 0.5$
$P(XY>t) < P(XY>Median(XY)) = 0.5 \implies P(XY>t) < 0.5 $
$P(XY>t) < P(X>t),$ this example disproves it even for your special case as well.
I have run a few simulations with different values of $K$ and found a few more contradicting cases. | Tails of products of random variables
This property doesn't hold true for all non-negative distributions of $X$.
Consider the case $X \sim \text{Bernouli}(p)$, for some $0<p<1 \implies E(X)=p$
and $Y \sim \chi^2(1)$
For $t\ \text{such t |
50,396 | Tails of products of random variables | I see the intuition behind your question, but I'm not sure this holds for the general case.
First, you can re-write your original inequality as:
$$
\Pr(X < t) > \Pr(X\cdot Y < t)
$$
This is equal to:
$$ F_X(t) > F_{X\cdot Y}(t) $$
Which is:
$$ \int_{0}^{t} f_X(i)di > \int_{0}^{t} f_X(i)g_Y(i)di $$
where $f_X(i)$ is the PDF of $X$, and $g_Y(i)$ is the PDF of $Y$. You know the latter but not the former. As such, I see no way you can prove the above. You might think the proof would come from some type of monotonicity property of the Chi-Squared distribution, but $g_Y(i)$ is not monotonic for $n>2$.
In fact, you can very easily think of two distributions with the same mean but different variances such that the accumulation of mass in the tails, as we move up along the domain from the mean, might be faster or slower, depending on the kurtosis (e.g. for the normal).
Naturally, I haven't proved anything, nor given you a counterexample, so I might be wrong. | Tails of products of random variables | I see the intuition behind your question, but I'm not sure this holds for the general case.
First, you can re-write your original inequality as:
$$
\Pr(X < t) > \Pr(X\cdot Y < t)
$$
This is equal to:
| Tails of products of random variables
I see the intuition behind your question, but I'm not sure this holds for the general case.
First, you can re-write your original inequality as:
$$
\Pr(X < t) > \Pr(X\cdot Y < t)
$$
This is equal to:
$$ F_X(t) > F_{X\cdot Y}(t) $$
Which is:
$$ \int_{0}^{t} f_X(i)di > \int_{0}^{t} f_X(i)g_Y(i)di $$
where $f_X(i)$ is the PDF of $X$, and $g_Y(i)$ is the PDF of $Y$. You know the latter but not the former. As such, I see no way you can prove the above. You might think the proof would come from some type of monotonicity property of the Chi-Squared distribution, but $g_Y(i)$ is not monotonic for $n>2$.
In fact, you can very easily think of two distributions with the same mean but different variances such that the accumulation of mass in the tails, as we move up along the domain from the mean, might be faster or slower, depending on the kurtosis (e.g. for the normal).
Naturally, I haven't proved anything, nor given you a counterexample, so I might be wrong. | Tails of products of random variables
I see the intuition behind your question, but I'm not sure this holds for the general case.
First, you can re-write your original inequality as:
$$
\Pr(X < t) > \Pr(X\cdot Y < t)
$$
This is equal to:
|
50,397 | Confused about order in probability | If we think of seating the people in 4 seats and insist that the first seat be occupied by a Mexican, the second by an Asian, the third by an African American and the fourth by a Caucasian, then the chance of this is: 5/12 x 2/11 x 3/10 x 2/9. In the problem you have set we don't care which person sits in which seat so long as one of each is present. There are 4! ways of re-arranging the people in the seats, so the answer to your question is 4! times the product I just gave, which once you re-arrange it is the same as the answer given in the book. | Confused about order in probability | If we think of seating the people in 4 seats and insist that the first seat be occupied by a Mexican, the second by an Asian, the third by an African American and the fourth by a Caucasian, then the c | Confused about order in probability
If we think of seating the people in 4 seats and insist that the first seat be occupied by a Mexican, the second by an Asian, the third by an African American and the fourth by a Caucasian, then the chance of this is: 5/12 x 2/11 x 3/10 x 2/9. In the problem you have set we don't care which person sits in which seat so long as one of each is present. There are 4! ways of re-arranging the people in the seats, so the answer to your question is 4! times the product I just gave, which once you re-arrange it is the same as the answer given in the book. | Confused about order in probability
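A quick numerical check of that equivalence (assuming the book's answer is the 5 * 2 * 3 * 2 over C(12, 4) form discussed in the other answers):
factorial(4) * (5/12) * (2/11) * (3/10) * (2/9)  # ordered-seating argument times 4!
(5 * 2 * 3 * 2) / choose(12, 4)                  # unordered version; both give about 0.1212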
If we think of seating the people in 4 seats and insist that the first seat be occupied by a Mexican, the second by an Asian, the third by an African American and the fourth by a Caucasian, then the c |
50,398 | Confused about order in probability | There are $\binom{12}{5}$ possible committees.
Committees of $5$ that include at least one member from each group necessarily have just one group having two members on the committee, all other groups have just one. So, the number of such committees is
$$\binom{5}{2}\cdot 2\cdot 3\cdot 2 + 5\cdot \binom{2}{2}\cdot 3\cdot 2
+ 5\cdot 2 \cdot \binom{3}{2}\cdot 2 + 5\cdot 2 \cdot 3
\cdot \binom{2}{2}$$ | Confused about order in probability | There are $\binom{12}{5}$ possible committees.
Committees of $5$ that include at least one member from each group necessarily have just one group having two members on the committee, all other groups | Confused about order in probability
There are $\binom{12}{5}$ possible committees.
Committees of $5$ that include at least one member from each group necessarily have just one group having two members on the committee, all other groups have just one. So, the number of such committees is
$$\binom{5}{2}\cdot 2\cdot 3\cdot 2 + 5\cdot \binom{2}{2}\cdot 3\cdot 2
+ 5\cdot 2 \cdot \binom{3}{2}\cdot 2 + 5\cdot 2 \cdot 3
\cdot \binom{2}{2}$$ | Confused about order in probability
There are $\binom{12}{5}$ possible committees.
Committees of $5$ that include at least one member from each group necessarily have just one group having two members on the committee, all other groups |
50,399 | Confused about order in probability | The numerator is the number of ways to choose $4$ people from $12$ people such that exactly one person is chosen from each ethnic group. The numerator is thus ${5 \choose 1} {2 \choose 1}{3 \choose 2}{2 \choose 2}$. The denominator is clearly ${12 \choose 4}$.
Alternatively, you could assume that the subcommittee has a name for each of the positions - say president, vice president, secretary and treasurer. Then, the numerator would be ${5 \choose 1} {2 \choose 1}{3 \choose 2}{2 \choose 2} 4!$ because once the 4 people are chosen so that each ethnic group is represented, they can be placed in various positions in $4!$ ways. For the same reason, the denominator is ${12 \choose 4} 4!$. The $4!$'s cancel out and we get the same probability. | Confused about order in probability | The numerator is the number of ways to choose $4$ people from $12$ people such that exactly one person is chosen from each ethnic group. The numerator is thus ${5 \choose 1} {2 \choose 1}{3 \choose 2 | Confused about order in probability
The numerator is the number of ways to choose $4$ people from $12$ people such that exactly one person is chosen from each ethnic group. The numerator is thus ${5 \choose 1} {2 \choose 1}{3 \choose 2}{2 \choose 2}$. The denominator is clearly ${12 \choose 4}$.
Alternatively, you could assume that the subcommittee has a name for each of the positions - say president, vice president, secretary and treasurer. Then, the numerator would be ${5 \choose 1} {2 \choose 1}{3 \choose 2}{2 \choose 2} 4!$ because once the 4 people are chosen so that each ethnic group is represented, they can be placed in various positions in $4!$ ways. For the same reason, the denominator is ${12 \choose 4} 4!$. The $4!$'s cancel out and we get the same probability. | Confused about order in probability
The numerator is the number of ways to choose $4$ people from $12$ people such that exactly one person is chosen from each ethnic group. The numerator is thus ${5 \choose 1} {2 \choose 1}{3 \choose 2 |
50,400 | Can you perform a multiple imputation on data that is missing not at random (MNAR)? | Is there a way to identify if your data is MNAR, MAR, or MCAR?
There is Little's MCAR test, which can evaluate whether your missing values are MCAR. More information can be found here on page 12. As far as I know, there is no test available that differentiates between MAR and MNAR. In practice I would say that many people just assume MAR, since the treatment of MNAR is very difficult. However, some information about appropriate methods for MNAR can be found here.
And when performing multiple imputation, should you include all predictor variables even if only 1 or 2 variables have missing values?
That depends strongly on your specific data. For data consisting of few variables, it is often a good approach to use all of them. With larger data sets, you should usually do some variable selection, mainly for computational reasons and to exclude noisy predictors (see IWS' comment below). You can find some guidelines here on page 128. There are 3 groups of variables that should be included in imputation models: variables that are used in later analyses of imputed data, variables that are related to the missingness structure, and variables that are strong predictors for the variable you want to impute.
Also once I run my MI and build my logistic model, how do I decide if it is better to go with a model that excludes all missing values through list-wise deletion or with my imputed model?
If done right, it should always be better to use the imputed data, since you are able to keep a larger data set and you will eventually be able to reduce bias, which results from the missingness. | Can you perform a multiple imputation on data that is missing not at random (MNAR)? | Is there a way to identify if your data is MNAR, MAR, or MCAR?
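On the practical side, a minimal sketch of that workflow with the mice package (mydata, outcome and x1-x3 are placeholders, not the poster's actual variables):
library(mice)
imp <- mice(mydata, m = 5, seed = 123)   # 5 imputed data sets; include the three groups of variables noted above
fit <- with(imp, glm(outcome ~ x1 + x2 + x3, family = binomial))
summary(pool(fit))                       # Rubin's rules combine the 5 analyses into one set of estimates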
There is Little's MCAR test, which can evaluate if your missings are MCAR. More informations can be found here on page 12. As far as I k | Can you perform a multiple imputation on data that is missing not at random (MNAR)?
Is there a way to identify if your data is MNAR, MAR, or MCAR?
There is Little's MCAR test, which can evaluate whether your missing values are MCAR. More information can be found here on page 12. As far as I know, there is no test available that differentiates between MAR and MNAR. In practice I would say that many people just assume MAR, since the treatment of MNAR is very difficult. However, some information about appropriate methods for MNAR can be found here.
And when performing multiple imputation, should you include all predictor variables even if only 1 or 2 variables have missing values?
That depends strongly on your specific data. For data consisting of few variables, it is often a good approach to use all of them. With larger data sets, you should usually do some variable selection, mainly for computational reasons and to exclude noisy predictors (see IWS' comment below). You can find some guidelines here on page 128. There are 3 groups of variables that should be included in imputation models: variables that are used in later analyses of imputed data, variables that are related to the missingness structure, and variables that are strong predictors for the variable you want to impute.
Also once I run my MI and build my logistic model, how do I decide if it is better to go with a model that excludes all missing values through list-wise deletion or with my imputed model?
If done right, it should always be better to use the imputed data, since you are able to keep a larger data set and you will eventually be able to reduce bias, which results from the missingness. | Can you perform a multiple imputation on data that is missing not at random (MNAR)?
Is there a way to identify if your data is MNAR, MAR, or MCAR?
There is Little's MCAR test, which can evaluate if your missings are MCAR. More informations can be found here on page 12. As far as I k |